Introducing Windows Server® 2012: RTM Edition

Mitch Tulloch with the Windows Server Team
PUBLISHED BY
Microsoft Press
A Division of Microsoft Corporation
One Microsoft Way
Redmond, Washington 98052-6399

Copyright © 2012 by Microsoft Corporation

All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

Library of Congress Control Number: 201944793
ISBN: 978-0-7356-7535-3

Printed and bound in the United States of America.

First Printing

Microsoft Press books are available through booksellers and distributors worldwide. If you need support related to this book, email Microsoft Press Book Support. Please tell us what you think of this book.

Microsoft and the trademarks listed are trademarks of the Microsoft group of companies. All other marks are property of their respective owners. The example companies, organizations, products, domain names, email addresses, logos, people, places, and events depicted herein are fictitious. No association with any real company, organization, product, domain name, email address, logo, person, place, or event is intended or should be inferred.

This book expresses the author's views and opinions. The information contained in this book is provided without any express, statutory, or implied warranties. Neither the authors, Microsoft Corporation, nor its resellers or distributors will be held liable for any damages caused or alleged to be caused either directly or indirectly by this book.

Acquisitions Editor: Anne Hamilton
Developmental Editor: Valerie Woolley
Project Editor: Valerie Woolley
Editorial Production: Diane Kohnen, S4Carlisle Publishing Services
Copyeditor: Susan McClung
Indexer: Jean Skipp
Cover: Twist Creative, Seattle
Contents at a Glance

Introduction
CHAPTER 1  The business need for Windows Server 2012
CHAPTER 2  Foundation for building your private cloud
CHAPTER 3  Highly available, easy-to-manage multi-server platform
CHAPTER 4  Deploy web applications on premises and in the cloud
CHAPTER 5  Enabling the modern workstyle
Index
What do you think of this book? We want to hear from you!
Microsoft is interested in hearing your feedback so we can continually improve our books and learning resources for you. To participate in a brief online survey, please visit the Microsoft Press website.

Contents

Introduction

Chapter 1  The business need for Windows Server 2012
    The rationale behind cloud computing
        Making the transition
        Cloud sourcing models
        Cloud service models
        Microsoft cloud facts
    Technical requirements for successful cloud computing
    Four ways Windows Server 2012 delivers value for cloud computing
        Foundation for building your private cloud
        Highly available, easy-to-manage multi-server platform
        Deploy web applications on-premises and in the cloud
        Enabling the modern work style
    Up next

Chapter 2  Foundation for building your private cloud
    A complete virtualization platform
        Hyper-V extensible switch
        Network Virtualization
        Improved Live Migration
        Enhanced quality of service (QoS)
        Resource metering
    Increase scalability and performance
        Expanded processor and memory support
        Network adapter hardware acceleration
        Offloaded Data Transfer (ODX)
        Support for 4 KB sector disks
        Dynamic Memory improvements
        Virtual Fibre Channel
        SMB 3
        Improved VM import
        VHDX disk format
    Business continuity for virtualized workloads
        Hyper-V Replica
        There's more
    Up next

Chapter 3  Highly available, easy-to-manage multi-server platform
    Continuous availability
        Failover Clustering enhancements
        SMB Transparent Failover
        Storage migration
        Windows NIC Teaming
        Chkdsk improvements
        Easy conversion between installation options
        Features On Demand
        DHCP Server Failover
    Cost efficiency
        Storage Spaces
        Thin Provisioning and Trim
        Server for NFS data store
    Management efficiency
        The new Server Manager
        Simplified Active Directory administration
        Windows PowerShell 3.0
    Up next

Chapter 4  Deploy web applications on premises and in the cloud
    Scalable and elastic web platform
        NUMA-aware scalability
        Server Name Indication
        Centralized SSL certificate support
        IIS CPU throttling
        Application Initialization
        Dynamic IP Address Restrictions
        FTP Logon Attempt Restrictions
        Generating Windows PowerShell scripts using IIS Configuration Editor
    Support for open standards
        WebSocket
        Support for HTML 5
    Up next

Chapter 5  Enabling the modern workstyle
    Access virtually anywhere, from any device
        Unified remote access
        Simplified VDI deployment
        User-Device Affinity
        Enhanced BranchCache
        Branch Office Direct Printing
    Full Windows experience
        RemoteFX enhancements
        Enhanced USB redirection
        User Profile Disks
    Enhanced security and compliance
        Dynamic Access Control
        BitLocker enhancements
        DNSSEC
    Conclusion

Index
Foreword

Windows Server 2012 introduces a plethora of new features to address the evolved needs of a modern IT infrastructure and workforce. The core of this experience is the need to scale out, virtualize, and move workloads, applications, and services to the cloud. Windows Server 2012 incorporates our experience of building, managing, and operating both private and public clouds, all based on Windows Server. We used that experience to create an operating system that provides organizations a scalable, dynamic, and multi-tenant-aware platform that connects datacenters and resources globally and securely. Clouds, whether deployed as public or private, rely on the same technology and provide consistency for applications, services, management, and experiences, whether they are deployed in a hosted environment, in a single-server small office, or in your corporate datacenter. They are all the same, and the platform should scale consistently and be managed easily from the small business office to the infinitely large public cloud.

The Windows Server team employed a customer-focused design approach to design in-the-box solutions that address customers' real-world business problems. We realized that we needed to cloud-optimize environments by providing an updated, flexible platform. We also knew that it was incumbent upon us to enable IT professionals to implement the next generation of technologies needed for future applications and services. We focused on end-to-end solutions that are complete and work out of the box, with the critical capabilities needed for deployments serving mobile, always-connected users, workforces, and devices.

To achieve these goals, we carefully planned a complete virtualization platform with flexible policies and agile options that would enable not only a high-density and scalable infrastructure for all workloads and applications, but also simple and efficient infrastructure management.

Once in place, with maximized uptime and minimized failures and downtime, an open and scalable web platform that aligns with and uses the lowest-cost commodity storage and networking provides a comprehensive solution better than any other platform.

In addition, Windows Server 2012 provides next-generation data security and compliance solutions based on strong identity and authorization capabilities that are paramount in this evolving cloud-optimized environment. The mobile, work-everywhere culture demands not only compliance, but also protection against the latest threats and risks.
And, last but not least, Windows Server 2012 comes with the needed reliability, power efficiency, and interoperability to integrate into environments without requiring numerous and complex add-ons, installations, and additional software to have a working solution.

As one of the senior engineering leaders in the Server and Cloud Division of Microsoft, I have had an opportunity to change the world and build the Windows Server 2012 platform to host public and private clouds all over the world. We took our experience and learning from Hotmail, Messenger, Office 365, Bing, Windows Azure, and Xbox Live (all of which run on Windows Server) to design and create Windows Server 2012 so that others are capable of building their own private clouds, hosting the latest applications, or deploying the next set of cloud services with world-class results.

This book is compiled from the expertise we have gained from the public clouds that we have run for years, as well as the experience of many experts on how to use the Hyper-V and Windows Server technologies optimally. We wanted to provide this book as a compilation of the engineering team's inside knowledge and best practices from early adopter deployments. It provides a unique introduction to how to cloud-optimize your environment with Windows Server 2012.

David B. Cross
Director of Program Management
Microsoft Corporation
Introduction

Windows Server 2012 is probably the most significant release of the Windows Server platform ever. With an innovative new user interface, powerful new management tools, enhanced Windows PowerShell support, and hundreds of new features in the areas of networking, storage, and virtualization, Windows Server 2012 can help IT deliver more while reducing costs. Windows Server 2012 also was designed for the cloud from the ground up and provides a foundation for building both public and private cloud solutions, enabling businesses to take advantage of the many benefits of cloud computing.

This book provides a technical overview of Windows Server 2012 and is intended to help IT professionals familiarize themselves with the capabilities of the new platform. This edition replaces the earlier preview edition, with screenshots and feature descriptions now based on the RTM release instead of the Beta.

Direct from the source

A key feature of this book is the inclusion of sidebars written by members of the Windows Server team, Microsoft Support engineers, Microsoft Consulting Services staff, and others who work at Microsoft. These sidebars provide an insider's perspective that includes both "under-the-hood" information concerning how features work, and strategies, tips, and best practices from experts who have been working with the platform during product development.
Sidebars are highlighted in the text and include the contributor's name and title at the bottom.

Acknowledgments

The author would like to express his special thanks to the numerous people working at Microsoft who took time out from their busy schedules to write sidebars for this book and/or peer-review its content to ensure technical accuracy. In recognition of their contribution toward making this book a more valuable resource, we'd like to thank the following people, who work at Microsoft (unless otherwise indicated), for contributing their time and expertise to this project:

Joshua Adams, Manjnath Ajjampur, Jeff Alexander, Ted Archer, Vinod Atal, Jonathan Beckham, Jeevan Bisht, David Branscome, Kevin Broas, Brent Caskey, Patrick Catuncan, Al Collins, Bob Combs, Wilbour Craddock, David Cross, Kevin daCosta, Robb Dilallo (Oakwood Systems Group), Laz Diaz, Yuri Diogenes,
Sean Eagan, Yigal Edery, Michael Foti, Stu Fox, Keith Hill, Jeff Hughes, Corey Hynes (HynesITe Inc.), Mohammed Ismail, Ron Jacob, Tomica Kaniski, Alex A. Kibkalo, Praveen Kumar, Brett Larison, Alex Lee, Ian Lindsay, Carl Luberti, Michel Luescher, John Marlin, John McCabe, Robert McMurray, Harsh Mittal, Michael Niehaus, Symon Perriman, Tony Petito, Mark Piggott, Jason Pope, Artem Pronichkin, Satya Ramachandran, Ramlinga Reddy, Colin Robinson, John Roller, Luis Salazar, Stephen Sandifer (Xtreme Consulting Group Inc.), Chad Schultz, Tom Shinder, Ramnish Singh, Don Stanwyck, Mike Stephens, Mike Sterling, Allen Stewart, Jeff Stokes, Chuck Swanson, Daniel Taylor, Harold Tonkin, Sen Veluswami, Matthew Walker, Andrew Willows, Yingwei Yang, John Yokim, Won Yoo, David Ziembicki, and Josef Zilak.

If we've missed anyone, we're sorry!

The author also would like to thank Valerie Woolley at Microsoft Learning; Diane Kohnen at S4Carlisle Publishing Services; and Susan McClung, the copyeditor.

Errata & book support

We've made every effort to ensure the accuracy of this book and its companion content. Any errors that have been reported since this book was published are listed on our Microsoft Press site. If you find an error that is not already listed, you can report it to us through the same page.

If you need additional support, email Microsoft Press Book Support. Please note that product support for Microsoft software is not offered through the addresses above.

We want to hear from you

At Microsoft Press, your satisfaction is our top priority, and your feedback our most valuable asset. Please tell us what you think of this book.
The survey is short, and we read every one of your comments and ideas. Thanks in advance for your input!

Stay in touch

Let's keep the conversation going! We're on Twitter.
CHAPTER 1

The business need for Windows Server 2012

■ The rationale behind cloud computing
■ Technical requirements for successful cloud computing
■ Four ways Windows Server 2012 delivers value for cloud computing
■ Up next

This chapter briefly sets the stage for introducing Windows Server 2012 by reviewing what cloud computing is all about and why cloud computing is becoming an increasingly popular solution for business IT needs. The chapter then describes how Windows Server 2012 can provide the ideal foundation for building your organization's private cloud.

The rationale behind cloud computing

Cloud computing is transforming business by offering new options for businesses to increase efficiencies while reducing costs. What is driving organizations to embrace the cloud paradigm are the problems often associated with traditional IT systems. These problems include:

■ High operational costs, typically associated with implementing and managing desktop and server infrastructures
■ Low system utilization, often associated with non-virtualized server workloads in enterprise environments
■ Inconsistent availability due to the high cost of providing hardware redundancy
■ Poor agility, which makes it difficult for businesses to meet evolving market demands

Although virtualization has helped enterprises address some of these issues by virtualizing server workloads, desktops, and applications, some challenges still remain. For example, mere virtualization of server workloads can lead to virtual machine (VM) sprawl, solving one problem while creating another.

Cloud computing helps address these challenges by providing businesses with new ways of improving agility while reducing costs. For example, by providing tools for rapid deployment of IT services with self-service capabilities, businesses can achieve
a faster time-to-market rate and become more competitive. Cloud-based solutions also can help businesses respond more easily to spikes in demand. And the standardized architecture and service-oriented approach to solution development used in cloud environments helps shorten the solution development life cycle, reducing the time between envisioning and deployment.

Cloud computing also helps businesses keep IT costs under control in several ways. For example, the standardized architecture of cloud solutions provides greater transparency and predictability for the budgeting process. Adding automation and elastic capacity management to this helps keep operational costs lower. Reuse and re-provisioning of cloud applications and services can help lower development costs across your organization, making your development cycle more cost effective. And a pay-as-you-go approach to consuming cloud services can help your business achieve greater flexibility and become more innovative, making entry into new markets possible.

Cloud computing also can help businesses increase customer satisfaction by enabling solutions that have greater responsiveness to customer needs. Decoupling applications from physical infrastructure improves availability and makes it easier to ensure business continuity when a disaster happens. And risk can be managed more systematically and effectively to meet regulatory requirements.

Making the transition

Making the transition from a traditional IT infrastructure to the cloud paradigm begins with rethinking and re-envisioning what IT is all about. The traditional approach to IT infrastructure is a server-centric vision, where IT is responsible for procuring, designing, deploying, managing, maintaining, and troubleshooting servers hosted on the company's premises or located at the organization's central datacenter.
Virtualization can increase the efficiency of this approach by allowing consolidation of server workloads to increase system utilization and reduce cost, but even a virtualized datacenter still has a server-centric infrastructure that requires a high degree of management overhead.

Common characteristics of traditional IT infrastructures, whether virtualized or not, can include the following:

■ Limited capacity due to the physical limitations of host hardware in the datacenter (virtualization helps maximize capacity but doesn't remove these limitations)
■ Availability level that is limited by budget because of the high cost of redundant host hardware, network connectivity, and storage resources
■ Poor agility because it takes time to deploy and configure new workloads (virtualization helps speed up this process)
■ Poor efficiency because applications are deployed in silos, which means that development efforts can't be used easily across the organization
■ Potentially high cost due to the cost of host hardware, software licensing, and the in-house IT expertise needed to manage the infrastructure
By contrast to the traditional server-centric infrastructure, cloud computing represents a service-centric approach to IT. From the business customer's point of view, cloud services can be perceived as IT services with unlimited capacity, continuous availability, improved agility, greater efficiency, and lower and more predictable costs than a traditional server-centric IT infrastructure. The results of the service-centric model of computing can be increased productivity with less overhead because users can work from anywhere, using any capable device, without having to worry about deploying the applications they need to do their job.

The bottom line here is that businesses considering making the transition to the cloud need to rethink their understanding of IT from two perspectives: the type of sourcing and the kinds of services being consumed.

Cloud sourcing models

Cloud sourcing models define the party that has control over how the cloud services are architected, controlled, and provisioned. The three kinds of sourcing models for cloud computing are:

■ Public cloud  Business customers consume the services they need from a pool of cloud services delivered over the Internet. A public cloud is a shared cloud where the pool of services is used by multiple customers, with each customer's environment isolated from those of others. The public cloud approach provides the benefits of predictable costs and pay-as-you-go flexibility for adding or removing processing, storage, and network capacity depending on the customer's needs.

For example, Microsoft Windows Azure and Microsoft SQL Azure are public cloud offerings that allow you to develop, deploy, and run your business applications over the Internet instead of hosting them locally in your own datacenter. By adopting this approach, you can gain increased flexibility, easier scalability, and greater agility for your business.
And if your users only need Microsoft Office or Microsoft Dynamics CRM to perform their jobs, you can purchase subscriptions to Office 365 or Microsoft Dynamics CRM Online from Microsoft's public cloud offerings in this area as well. For more information on Microsoft's public cloud offerings, see the Microsoft website.

■ Private cloud  The customer controls the cloud, either by self-hosting a private cloud in the customer's datacenter or by having a partner host it. A private cloud can be implemented in two ways: by combining different software platforms and applications, or by procuring a dedicated cloud environment in the form of an appliance from a vendor.

For example, customers have already been using the Hyper-V virtualization capabilities in the Microsoft Windows Server 2008 R2 platform successfully, with the Microsoft System Center family of products, to design, deploy, and manage their own private clouds. And for a more packaged approach to deploying private clouds, Microsoft's Private Cloud Fast Track program provides customers with a standard reference
architecture for building private clouds that combines Microsoft software, consolidated guidance, value-added software components, and validated compute, network, and storage configurations from original equipment manufacturer (OEM) partners. The result is a turnkey approach for deploying scalable, preconfigured, validated infrastructure platforms for your own private cloud. For more information on the Private Cloud Fast Track and to see a list of Fast Track partners, see the program's page on the Microsoft website.

The private cloud approach allows you the peace of mind of knowing you have complete control over your IT infrastructure, but it has higher up-front costs and a steeper implementation curve than the public cloud approach. For more information on Microsoft's private cloud offerings, see server-cloud/private-cloud/. As you will soon see, however, the next generation of Hyper-V in the Windows Server 2012 platform delivers even more powerful capabilities that enable customers to deploy and manage private clouds.

■ Hybrid cloud  The customer uses a combination of private and public clouds to meet the specific needs of their business. In this approach, some of your organization's IT services run on-premises, while other services are hosted in the cloud to save costs, simplify scalability, and increase agility. Organizations that want to make the transition from traditional IT to cloud computing often begin by embracing the hybrid cloud approach because it allows them to get their feet wet while remaining grounded in the comfort of their existing server-centric infrastructure.

One difficulty with the hybrid cloud approach, however, is the management overhead associated with needing duplicate sets of IT controls: one set for traditional infrastructure and others for each kind of cloud service consumed.
Regardless, many organizations that transition to the cloud choose to adopt the hybrid approach for various reasons, including deployment restrictions, compliance issues, or the availability of cloud services that can meet the organization's needs.

Cloud service models

Cloud computing also can be considered from the perspective of which kinds of services are being consumed. The three standard service models for cloud computing are as follows:

■ Software as a service (SaaS)  This approach involves using the cloud to deliver a single application to multiple users, regardless of their location or the kind of device they are using. SaaS contrasts with the more traditional approach of deploying separate instances of applications to each user's computing device. The advantage of the SaaS model is that application activities can be managed from a single central location to reduce cost and management overhead. SaaS typically is used to deliver cloud-based applications that have minimal support for customization, such as email, customer relationship management (CRM), and productivity software. Office 365 is an example of a SaaS offering from Microsoft that provides users with secure anywhere
access to their email, shared calendars, instant messaging (IM), video conferencing, and tools for document collaboration.

■ Platform as a service (PaaS)  This approach involves using the cloud to deliver application execution services, such as application run time, storage, and integration, for applications that have been designed for a prespecified cloud-based architectural framework. By using PaaS, you can develop custom cloud-based applications for your business and then host them in the cloud so that users can access them anywhere over the Internet. PaaS also can be used to create multi-tenant applications that multiple users can access simultaneously. And with its high degree of support for application-level customization, PaaS can enable integration with your older applications and interoperability with your on-premises systems, though some applications may need to be recoded to work in the new environment. SQL Azure is an example of a PaaS offering from Microsoft that allows businesses to provision and deploy SQL databases to the cloud without the need to implement and maintain an in-house Microsoft SQL Server infrastructure.

■ Infrastructure as a service (IaaS)  This approach involves creating pools of compute, storage, and network connectivity resources that then can be delivered to business customers as cloud-based services billed on a per-usage basis. IaaS forms the foundation for SaaS and PaaS by providing a standardized, flexible virtualized environment that typically presents itself to the customer as virtualized server workloads. In the IaaS model, the customer can self-provision these virtualized workloads and can customize them fully with the processing, storage, and network resources needed and with the operating system and applications the business requires.
By using the IaaS approach, the customer is relieved of the need to purchase and install hardware and can spin up new workloads quickly to meet changing demand. The Hyper-V technology of the Windows Server platform, together with the System Center family of products, represents Microsoft's offering in the IaaS space.

Microsoft cloud facts

Did you know the following facts about Microsoft's public cloud offerings?

■ Every day, 9.9 billion messages are transmitted via Windows Live Messenger.
■ There are 600 million unique users every month on Windows Live and MSN.
■ There are 500 million active Windows Live IDs.
■ There are 40 million paid Microsoft online services (BPOS, CRM Online, etc.) in 36 countries.
■ A total of 5 petabytes of content is served by Xbox Live each week during the holiday season.
■ More than 1 petabyte of updates is served every month by Windows Update to millions of servers and hundreds of millions of PCs worldwide.
■ There are tens of thousands of Windows Azure customers.
■ There are 5 million LiveMeeting conference minutes per year.
■ Forefront for Exchange filters 1 billion emails per month.

Technical requirements for successful cloud computing

If you're considering moving your business to the cloud, it's important to be aware of the ingredients of a successful cloud platform. Figure 1-1 illustrates the three standard service models for implementing private and public cloud solutions:

■ SaaS (the software)  The cloud provider runs the application, while the customer consumes the application as a service on a subscription basis.
■ PaaS (the platform)  The application platform includes native services for scalability and resiliency, and the apps must be designed to run in the cloud.
■ IaaS (the infrastructure)  The cloud provider runs a datacenter that offers "virtual machines for rent" along with dynamically allocated resources. Customers own the virtual machine and manage it as "their server" in the cloud.

FIGURE 1-1  The three standard service models for the cloud.

The hierarchy of this diagram illustrates that both IaaS and PaaS can be used as the foundation for building SaaS. In the IaaS approach, you build the entire architecture yourself (for example, with load-balanced web servers for the front end and clustered servers for your business and data tiers on the back end). In fact, the only difference between IaaS and a traditional datacenter is that the apps are running on servers that are virtual instead of physical.

By contrast, PaaS is a completely different architecture. In a PaaS solution like Windows Azure, you allow Azure to handle the "physical" aspect for you when you take your app and move it to the cloud. Then, when you have spikes in demand (think of the holiday season for a retail website), the system automatically scales up to meet the demand and then scales back down again when demand tapers off.
This means that with PaaS, you don't need to build a system that handles the maximum load at all times, even when it doesn't have to; instead, you pay only for what you use.

But the IaaS model is much closer to what customers currently use today, so let's focus more closely on the IaaS service model, which often is described as "virtual machines for rent." The two key components of IaaS are a hypervisor-based server operating system and
a cloud and datacenter management solution. These two components, therefore, form the foundation of any type of cloud solution: public, private, or hybrid.

Let's examine the first component: namely, a hypervisor-based server operating system. What attributes must such a platform have to be suitable for building cloud solutions? The necessary technical requirements include the following:

■ Support for the latest server hardware and scaling features, including high-performance networking capabilities and reduced power consumption for green computing
■ A reliable, highly scalable hypervisor that eliminates downtime when VMs are moved between hosts
■ Fault-tolerant, high-availability solutions that ensure that cloud-based services can be delivered without interruption
■ Powerful automation capabilities that can simplify and speed the provisioning and management of infrastructure resources to make your business more agile
■ Support for enterprise-level storage for running the largest workloads that businesses may need
■ The ability to host a broad range of virtualized operating systems and applications to provide customers with choices that can best meet their business needs
■ An extensible platform with public application programming interfaces (APIs) that businesses can use to develop the custom tools and enhancements they need to round out their solutions
■ The ability to pool resources, such as processing, network connectivity, and storage, to provide elasticity so that you can provision and scale resources dynamically in response to changing needs
■ Self-service capabilities, so that pooled resources can be provisioned quickly according to service-level agreements for increased agility
■ A built-in system for monitoring resource usage, so that those consuming resources can be billed on a pay-for-only-what-you-use basis
■ Infrastructure transparency, so that customers can concentrate on deploying
the applications and services that they need without having to worry about the underlying infrastructure.

Microsoft’s previous hypervisor-based server operating system, Windows Server 2008 R2, met many of these requirements to a high degree, and Microsoft and other enterprises have been using it extensively as a foundation for building both private and public clouds. As we will soon see, however, Windows Server 2012 brings even more to the table for building highly scalable and elastic cloud solutions, making it the first truly cloud-optimized server operating system.

The second component for building a cloud is the management part, and here System Center 2012 provides the most comprehensive cloud and datacenter management solution available in the marketplace. System Center 2012 spans physical, virtual, and cloud
environments using common management experiences throughout, and it enables end-to-end management of your infrastructure and applications.

Support for Windows Server 2012 will be included in Service Pack 1 for System Center 2012. For more information on System Center products and to download evaluation software, see the System Center pages on Microsoft’s website.

The business need for Windows Server 2012

Cloud computing in general, and private clouds in particular, have emerged as a response to the high cost and lack of agility of traditional approaches to IT. The needs of IT users and the rate of technological change have increased significantly. At the same time, the need to improve IT efficiency and reduce costs is a high-priority objective in most businesses today.

Server consolidation through virtualization has been a key driver of cost savings over the past several years. Windows Server 2012 and Hyper-V provide significant improvements in scalability and availability, which enable much higher consolidation ratios. Combined with the flexibility of unlimited VM licensing in some Windows SKUs, high-density virtualization can reduce costs significantly. With Windows Server 2012 and Hyper-V supporting clusters of up to 64 nodes running up to 4,000 VMs, and up to 1,024 active VMs per host, a relatively small amount of physical hardware can support a large amount of IT capability.

Further improving the consolidation story is the ability to run significantly larger VMs, resulting in a higher percentage of physical servers being candidates for virtualization. For example, Windows Server 2012 can now support:

■ Up to 64 virtual processors per VM (with a maximum of 2,048 virtual processors per host)
■ Up to 1 terabyte (TB) of random access memory (RAM) per VM (with up to 4 TB of RAM per host)
■ Virtual hard disks (VHDs) up to 64 TB in size

These scalability enhancements now provide enterprises with the ability to virtualize
the vast majority of physical servers deployed today. Examples include large database servers and other high-scale workloads that previously could not be virtualized.

In addition to scale, a substantial number of new capabilities in the Windows Server 2012 and Hyper-V platform enable cloud computing scenarios. Definitions of cloud computing vary; however, one of the most commonly used definitions is from the U.S. National Institute of Standards and Technology (NIST), which defines five “essential” characteristics of cloud computing solutions: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. These attributes enable the agility and cost savings expected from cloud solutions.
Virtualization alone provides significant benefits, but it does not provide all the cloud attributes defined by NIST. A key tenet of Windows Server 2012 is to go beyond virtualization: to provide the foundational technologies and features that enable cloud attributes such as elasticity, resource pooling, and measured service, while also delivering significant advancements in the virtualization platform.

■ For the on-demand self-service cloud attribute, Windows Server 2012 provides foundational technology that enables a variety of user interfaces, including self-service portals, by providing hundreds of Windows PowerShell cmdlets related to VM provisioning and management. These cmdlets enable management solutions such as System Center to provide self-service user interfaces.
■ For the broad network access cloud attribute, Windows Server 2012 and Hyper-V provide new network virtualization technology that enables a variety of VM mobility, multi-tenancy, and hosting scenarios that remove many of today’s network limitations. Other technologies, such as DirectAccess, enable secure remote connectivity to internal resources without the need for virtual private networks (VPNs).
■ For the resource pooling cloud attribute, the combination of operating system, network, and storage virtualization technologies in Windows Server 2012 enables each component of the physical infrastructure to be virtualized and shared as a single large resource pool. Improvements to Live Migration enable VMs and their associated storage to be moved to any Hyper-V host in the datacenter with a network connection.
Combined, these technologies allow standardization across the physical and virtual infrastructure, with the ability of VMs to be distributed optimally and dynamically across the datacenter.
■ For the rapid elasticity cloud attribute, Windows Server 2012 provides the ability to provision VMs rapidly using technologies such as Offloaded Data Transfer (ODX), which can use capabilities in storage systems to clone or create VMs very rapidly to enable workload elasticity. Thin provisioning and data deduplication enable elasticity without immediate consumption of physical resources.
■ For the measured service cloud attribute, Windows Server 2012 provides a variety of new resource metering capabilities that enable granular reporting on resource utilization by individual VMs. Resource metering enables scenarios such as chargeback reporting based on central processing unit (CPU) utilization, memory utilization, or other utilization-based metrics.

In addition to advanced server consolidation and cloud attributes that help drive down IT cost and increase agility, Windows Server 2012 provides the capability to reduce ongoing operational expenses (OpEx) through a high degree of automation and the ability to manage many servers as one. A key cost metric in IT is the number of servers that an individual administrator can manage.
In many datacenters, this number is small, typically in the double digits. In highly automated datacenters such as Microsoft’s, an individual administrator can manage thousands of servers through the use of automation.

Windows Server 2012 delivers this automation capability through the Server Manager user interface’s ability to manage user-defined groups of servers as one, plus the ability of Windows PowerShell to automate activities against a nearly unlimited number of servers. This reduces the amount of administrator effort required, enabling administrators to focus on higher-value activities.

Taken together, the capabilities provided by Windows Server 2012 deliver the essential cloud attributes and the foundation for significant improvements in both IT cost and agility.

David Ziembicki
Senior Architect, U.S. Public Sector, Microsoft Services

Four ways Windows Server 2012 delivers value for cloud computing

Let’s now look briefly at four ways that Windows Server 2012 can deliver value for building your cloud solution beyond what the Windows Server 2008 R2 platform can deliver. The remaining chapters of this book will explore the powerful new features and capabilities of this cloud-optimized operating system in more detail, along with hands-on insights from insiders at Microsoft who developed, tested, and deployed Windows Server 2012 for select customers during product development.

Foundation for building your private cloud

Although previous versions of Windows Server have included many capabilities needed for implementing different cloud computing scenarios, Windows Server 2012 takes this a step further by providing a foundation for building dynamic, multi-tenant cloud environments that can scale to meet the highest business needs while helping to reduce your infrastructure costs. Hyper-V in Windows Server 2008 R2 has already helped many businesses reduce their operational costs through server consolidation.
The next version of Hyper-V, together with other key features of Windows Server 2012, goes even further by enabling you to secure virtualized services by isolating them effectively, migrate running VMs with no downtime even outside of clusters, create replicas of virtualized workloads for offsite recovery, and much more. The result is a platform that is ideal as a foundation for building private clouds for even the largest enterprises.
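As a rough illustration of what migrating a running VM outside of a cluster looks like, the following Windows PowerShell sketch uses the Hyper-V cmdlets that ship with Windows Server 2012; the host and VM names are hypothetical, and both hosts are assumed to be domain-joined with migration traffic allowed between them:

```powershell
# Shared-nothing live migration: move a running VM and its storage to
# another Hyper-V host with no downtime (no cluster or shared SAN required).
# "SQL01" and "HOST02" are hypothetical names used for this sketch.
Enable-VMMigration                      # allow this host to send/receive migrations
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

Move-VM -Name "SQL01" -DestinationHost "HOST02" `
        -IncludeStorage `
        -DestinationStoragePath "D:\VMs\SQL01"
```

The same operation is available through Hyper-V Manager, but the cmdlet form is what management tools such as System Center drive under the covers.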
Windows Server 2012 provides your business with a complete virtualization platform that includes multi-tenant security and isolation capabilities to enforce network isolation between workloads belonging to different business units, departments, or customers on a shared infrastructure. Network Virtualization, a new feature of Hyper-V, lets you isolate network traffic from different business units without the complexity of implementing and managing virtual local area networks (VLANs). Network Virtualization also makes it easier to integrate your existing private networks into a new infrastructure by enabling you to migrate VMs while preserving their existing virtual network settings. And network quality of service (QoS) has been enhanced in Windows Server 2012 to let you guarantee a minimum amount of bandwidth to VMs and virtual services, so that service-level agreements can be met more effectively and network performance becomes more predictable. Being able to manage and secure network connectivity resources effectively is an important factor when designing cloud solutions, and these capabilities of Windows Server 2012 make this possible.

Windows Server 2012 also helps you scale your environment better, achieve greater performance levels, and use your existing investments in enterprise storage solutions. With greatly expanded support for host processors and memory, your virtualization infrastructure can now support very large VMs that need the highest levels of performance, as well as workloads that require the ability to increase significantly in scale. Businesses that have already invested in Fibre Channel storage arrays for their existing infrastructures can benefit from Virtual Fibre Channel, a new feature of Hyper-V that lets you connect directly to your storage area network (SAN) from within the guest operating system of your VMs.
You also can use Virtual Fibre Channel to virtualize any server workloads that directly access your SAN, enabling new ways of reducing costs through workload virtualization. You also can cluster guest operating systems over Fibre Channel, which provides new infrastructure options you can explore. And the built-in ODX support ensures that your VMs can read and write to SAN storage at performance levels matching those of physical hardware, while freeing up processor resources on the host. With storage a key resource for any cloud solution, these improvements make Windows Server 2012 an effective platform for building clouds.

Windows Server 2012 also provides a common identity and management framework that supports federation, enables cross-premises connectivity, and facilitates data protection. Active Directory Federation Services (AD FS) is now built into the product and provides a foundation for extending Active Directory identities to the cloud, allowing for single sign-on (SSO) to resources both on-premises and in the cloud. Site-to-site VPNs can be established to provide cross-premises connectivity between your on-premises infrastructure and hosting providers you purchase cloud services from. You even can connect directly to private subnets within a hosted cloud network, using your existing networking equipment that supports the industry-standard IKEv2-IPsec protocols. And you can enhance business continuity and simplify disaster recovery by using the new Hyper-V Replica feature, which provides asynchronous replication of virtual machines over IP-based networks to remote sites. All these features help provide the foundation that you need to build your private cloud.
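To make the Hyper-V Replica workflow concrete, here is a hedged PowerShell sketch using the Hyper-V cmdlets included with Windows Server 2012. The server and VM names are hypothetical, and the replica server must be configured to accept replication before the primary can enable it:

```powershell
# On the replica (recovery) server: accept inbound replication over Kerberos/HTTP.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"

# On the primary server: start replicating a VM asynchronously to the remote site.
Enable-VMReplication -VMName "Finance-Web" `
    -ReplicaServerName "dr-host.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "Finance-Web"
```

After the initial replication completes, changes are shipped asynchronously, and a planned or test failover can be performed at the recovery site without touching the primary workload.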
FIGURE 1-2  Windows Server 2012 provides a foundation for multi-tenant clouds. (The figure contrasts a private cloud running multiple business units, such as R & D and Finance, on shared enterprise infrastructure with a public cloud in which a hoster runs multiple customers, such as Contoso Bank and Woodgrove Bank, on shared infrastructure; both rely on secure isolation between tenants, dynamic placement of services, and QoS and resource metering.)

Highly available, easy-to-manage multi-server platform

Cost is the bottom line for most businesses, and even though virtualization has allowed many organizations to tap into efficiencies that help them do more with less in their datacenters, maintaining these efficiencies and preventing interruptions due to failures, downtime, and management problems remain a key priority. Windows Server 2012 helps you address these issues by providing enhanced availability features, more flexible storage options, and powerful new management capabilities.

Windows Server 2012 enhances availability by extending the Live Migration capabilities of Hyper-V in previous Windows Server versions with a new feature called Live Storage Migration, which lets you move VHDs attached to running VMs with no downtime. Live Storage Migration simplifies the task of migrating or upgrading storage when you need to perform maintenance on your SAN or file-based storage array, or when you need to redistribute the load. Built-in NIC teaming gives you fault-tolerant networking without the need for third-party solutions, and it helps ensure availability by preventing connectivity from being lost when a network adapter fails. And availability can be further enhanced through transparent failover, which lets you move file shares between cluster nodes with no interruption to applications accessing data on those shares. These improvements provide benefits both for virtualized datacenters and for the cloud.
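As a brief sketch of how little configuration these availability features need, the built-in NIC teaming and Live Storage Migration capabilities are each a single cmdlet in Windows Server 2012 (the team, adapter, and VM names below are hypothetical):

```powershell
# Combine two physical adapters into one fault-tolerant, load-balancing team.
# If "Ethernet 1" fails, traffic continues to flow over "Ethernet 2".
New-NetLbfoTeam -Name "Team1" `
    -TeamMembers "Ethernet 1", "Ethernet 2" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm HyperVPort

# Live Storage Migration: move a running VM's virtual hard disks to new
# storage with no downtime; "Web01" is a hypothetical VM name.
Move-VMStorage -VMName "Web01" -DestinationStoragePath "E:\VMs\Web01"
```

Because both are plain cmdlets, either operation can be folded into larger maintenance scripts, for example draining a storage array before scheduled SAN maintenance.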
Windows Server 2012 also provides numerous efficiencies that can help you reduce costs. These efficiencies cover a wide range of areas, including power consumption, networking, and storage, but for now let’s just consider storage. The new file server features of Windows Server 2012 allow you to store application data on Server Message Block (SMB) file shares in a way that provides much of the availability, reliability, and performance that you’ve come to expect from more expensive SAN solutions. The new Storage Spaces feature provides built-in storage virtualization capabilities that enable flexible, scalable, and cost-effective solutions to meet your storage needs. And Windows Server 2012 integrates with storage solutions that support thin provisioning with just-in-time (JIT) allocation of storage and the ability to reclaim storage that’s no longer needed. Reducing cost is key for enterprises, whether they still have traditional IT infrastructures or have deployed private clouds.

Windows Server 2012 also includes features that make management and automation more efficient. The new Server Manager takes the pain out of deploying and managing large numbers of servers by simplifying the task of remotely deploying roles and features on both physical and virtual servers. Server Manager also can be used to perform scenario-based deployments of the Remote Desktop Services role, for example to set up a session virtualization infrastructure or a virtual desktop infrastructure (VDI) environment quickly. PowerShell 3.0 has powerful new features that simplify the job of automating numerous aspects of a datacenter, including the operating system, storage, and networking resources. PowerShell workflows let you perform complex management tasks that require machines to be rebooted. Scheduled jobs can run regularly or in response to a specific event.
Delegated credentials can be used so that junior administrators can perform mission-critical tasks. All these improvements bring you closer to running your datacenter or private cloud as a truly lights-out, automated environment.

Deploy web applications on-premises and in the cloud

The web platform is key to building a cloud solution because cloud-based services are delivered and consumed over the Internet. Windows Server 2012 includes web platform enhancements that provide the kind of flexibility, scalability, and elasticity that your business needs to host web applications for provisioning cloud-based applications to business units or customers. Windows Server 2012 is also an open web platform that embraces a broad range of industry standards and supports many third-party platforms and tools, so that you can choose whatever best suits the development needs of your business.

Because most organizations are expected to follow the hybrid cloud approach that combines on-premises infrastructure with cloud services, efficiencies can be gained through development symmetry, which lets you build applications that you can deploy both on-premises and in the cloud. Windows Server 2012 provides such development symmetry through a common programming language supporting both Windows Server and the Windows Azure platform; through a rich collection of applications that can be deployed
and used across web application and data tiers; through the rich Microsoft Visual Studio–based developer experience, which lets you develop code that can run both on-premises and in the cloud; and through other technologies like Windows Azure Connect, which lets you configure Internet Protocol Security (IPsec)–protected connections between your on-premises physical or virtual servers and roles running in the Windows Azure cloud.

Building on the proven application platform of earlier Windows Server versions, Windows Server 2012 adds new features and enhancements that enable service providers to host large numbers of websites while guaranteeing customers predictable service levels. These improvements make Windows Server 2012 an ideal platform for building and managing hosting environments and public clouds. To enable the highest level of scalability, especially in shared hosting environments, Microsoft Internet Information Services (IIS) 8.0 in Windows Server 2012 introduces multicore scaling on Non-Uniform Memory Access (NUMA) hardware, which enables servers to scale up to 64 physical processors and across NUMA nodes. This capability enables your web applications to scale up quickly to meet sudden spikes in demand. And when demand falls again, IIS CPU throttling enables your applications to scale down to minimize costs. You also can use IIS CPU throttling to ensure that applications always get their fair share of processor time by specifying a maximum CPU usage for each application pool.
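As a hedged sketch of what a per-pool CPU cap can look like, the following uses the WebAdministration module that comes with the IIS role; the pool name is hypothetical, and the `cpu.limit` value is expressed in 1/1000ths of a percent, so 40000 means 40 percent:

```powershell
# Cap the hypothetical "ContosoAppPool" application pool at 40% CPU so one
# tenant's site cannot starve the others on a shared hosting server.
Import-Module WebAdministration

Set-ItemProperty "IIS:\AppPools\ContosoAppPool" -Name cpu.limit  -Value 40000
Set-ItemProperty "IIS:\AppPools\ContosoAppPool" -Name cpu.action -Value Throttle
```

The same settings are exposed in IIS Manager under the pool’s advanced CPU properties, which is often the easier place to experiment before scripting them.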
And to manage the proliferation of Secure Sockets Layer (SSL) certificates in your hosting environment, or to add web servers to a web farm quickly without having to configure SSL manually on each of them, the new Centralized SSL Certificate Support feature of Windows Server 2012 takes the headache out of managing SSL-based hosting environments.

IIS 8.0 in Windows Server 2012 also gives businesses great flexibility in the kinds of web applications that they can develop and deploy. ASP.NET 4.5 now supports the latest HTML5 standards. PHP and MySQL also are supported through the built-in IIS extensions for these development platforms. And support for the industry-standard WebSocket protocol enables encrypted data transfer over real-time, bidirectional channels to support AJAX client applications running in the browser. All these features and enhancements provide flexibility for building highly scalable web applications, hosted either on-premises or in the cloud.

Enabling the modern work style

The consumerization of IT through the trend toward BYOD, or “bring your own device,” environments is something that businesses everywhere are facing and that IT is only beginning to get a handle on. The days of IT having full control over all user devices in its infrastructure are probably over, with the exception of certain high-security environments in the government, military, and finance sectors. Accepting these changes requires not just new thinking but new technology, and Windows Server 2012 brings features that help IT address this issue by enabling it to deliver on-premises and cloud-based services to users while maintaining control over sensitive corporate data.
Remote Access has been enhanced in Windows Server 2012 to make it much easier to deploy DirectAccess, so that users can always have the experience of being seamlessly connected to the corporate network whenever they have Internet access. Setting up traditional VPN connections is also simpler in Windows Server 2012 for organizations that need to maintain compatibility with existing systems or policies. BranchCache has been enhanced in Windows Server 2012 to scale better, perform better, and be managed more easily. Deploying BranchCache is now much simpler, and it enables users to run applications remotely and access data more efficiently and securely than before. And as mentioned previously in this chapter, Server Manager now lets you perform scenario-based deployments of the Remote Desktop Services role to implement session virtualization or VDI in your environment more easily.

To remain productive as they roam between locations and use different devices, users need to be able to access their data using the full Windows experience. New features and improvements in Windows Server 2012 now make this possible from any location, on almost any device. RemoteFX for WAN enables a rich user experience even over slow WAN connections. Universal Serial Bus (USB) is now supported for session virtualization, allowing users to use their USB flash drives, smart cards, webcams, and other devices when connecting to session hosts. And VDI now includes user VHDs for storing user personalization settings and cached application data, so that the user experience is maintained across logons.

Windows Server 2012 also gives you greater control over your sensitive corporate data to help you safeguard your business and meet compliance requirements. Central access policies can be used to define who is allowed to access information within your organization.
Central audit policies have been enhanced to facilitate compliance reporting and forensic analysis. The Windows authorization and audit engine has been re-architected to allow the use of conditional expressions and central policies. Kerberos authentication now supports both user and device claims. And Rights Management Services (RMS) has been made extensible so that partners can provide solutions for encrypting non-Office files. All these improvements enable users to connect securely to on-premises or cloud-based infrastructure so that they can be more productive in ways that meet the challenges of today’s work styles, while maintaining strict control over your corporate data.

Up next

The chapters that follow will dig deeper into these different ways that Windows Server 2012 can deliver value, examining in more detail the new features and capabilities of this cloud-optimized platform. Each chapter also includes sidebars written by insiders on the Windows Server team at Microsoft, by Microsoft Consulting Services experts in the field, and by Microsoft Support engineers who have been working with the platform from day one. To begin, let’s look more closely at how Windows Server 2012 can provide the perfect foundation for building your organization’s private cloud.
CHAPTER 2

Foundation for building your private cloud

■ A complete virtualization platform
■ Increase scalability and performance
■ Business continuity for virtualized workloads
■ Up next

This chapter describes some of the new features of Windows Server 2012 that make it the ideal platform for building a private cloud for your organization. With enhancements to Hyper-V virtualization, improvements in scalability and performance, and business continuity support for virtualized workloads, Windows Server 2012 provides a solid foundation for building dynamic, highly scalable, multi-tenant cloud environments.

Windows Server 2012: The foundation for building your private cloud

Delivering a solid foundation for a private cloud requires a robust virtualization platform, scalability with great performance, and the ability to span datacenters and integrate with other clouds. Windows Server 2012 was designed to address key private cloud needs through advances in compute, storage, and network virtualization.

Compute virtualization, provided by Hyper-V in Windows Server 2012, has been improved to support significantly larger host servers and guest virtual machines (VMs). This increases the range of workloads that can be virtualized. A new feature called Guest NUMA enables large virtual machines with many virtual CPUs (vCPUs) to achieve high performance by optimizing a VM’s vCPU mappings to the underlying physical server’s Non-Uniform Memory Access (NUMA) configuration. Large increases in Hyper-V scalability and Dynamic Memory provide for much higher density of VMs per server with larger clusters. VM mobility through Live Migration and Live Storage Migration, regardless of whether the VM is hosted on a cluster, enables a number of new scenarios for optimizing resources in private clouds.
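To give a flavor of the configuration surface involved, this hedged PowerShell sketch uses the Windows Server 2012 Hyper-V cmdlets to size a large VM and enable Dynamic Memory; the VM name and sizes are hypothetical, and the Guest NUMA topology is projected automatically from the host unless you override it:

```powershell
# Size a hypothetical "BigSQL" VM for a heavy workload: 32 vCPUs, plus
# Dynamic Memory so Hyper-V can balloon its RAM between 8 GB and 256 GB
# as load changes, raising density on the host.
Set-VMProcessor -VMName "BigSQL" -Count 32
Set-VMMemory    -VMName "BigSQL" -DynamicMemoryEnabled $true `
    -MinimumBytes 8GB -StartupBytes 16GB -MaximumBytes 256GB
```

A VM sized this way still benefits from Guest NUMA: the guest operating system sees a NUMA topology aligned with the physical host, so NUMA-aware workloads such as SQL Server can schedule accordingly.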
Windows Server 2012 delivers new Network Virtualization capability, as well as private virtual local area networks (VLANs), opening up a number of new networking scenarios, including the multi-tenant options required for hosting and private cloud scenarios. These technologies enable tenants to use their own IP addressing schemes, even if they overlap with other tenants’, while maintaining separation and security. Windows Server 2012 also introduces a new extensible virtual switch. The extensible switch delivers new capabilities such as port profiles, and it is a platform that third parties can use to build switch extensions for tasks like traffic monitoring, intrusion detection, and network policy enforcement. In both private cloud and hosting scenarios, secure multi-tenancy is often a requirement. Examples could include separating the finance department’s resources from the engineering department’s resources, or separating one hosted company’s resources from another’s. Windows Server 2012 networking technologies provide for shared infrastructure and resource pooling while enabling secure multi-tenancy.

Storage virtualization is a major investment area in Windows Server 2012. Storage Spaces, SMB 3, Cluster Shared Volumes (CSV2), and several other new storage features provide a high-performance, low-cost storage platform. This storage platform allows Hyper-V VMs to be run from Windows Server 2012 continuously available file shares on Windows storage spaces. Such shares can be accessed using the new SMB 3 protocol, which, when combined with appropriate network hardware, provides high-speed, low-latency, multichannel-capable storage access. These technologies provide a robust storage platform at a much lower cost point than was previously possible.
For environments with significant existing investments in storage area network (SAN) technology, Windows Server 2012 now enables Fibre Channel host bus adapters (HBAs) to be virtualized, allowing VMs direct access to Fibre Channel–based SAN storage.

Another critical component of a private cloud infrastructure is disaster recovery capability. Windows Server 2012 introduces the Hyper-V Replica feature, which allows VMs to be replicated to disaster recovery sites, reducing the time required to restore service should a primary datacenter suffer a disaster.

With the large number of new features and improvements, automation becomes a critical requirement, both for consistency of deployment and for efficiency in operations. Windows Server 2012 includes about 2,400 new Windows PowerShell cmdlets for managing the various roles and features in the platform. Windows PowerShell can be used either directly or through Microsoft and third-party management systems to automate deployment, configuration, and operations tasks. The new Server Manager in Windows Server 2012 allows multiple servers to be grouped and managed as one. The objective of these improvements is to increase administrator efficiency by increasing the number of servers each administrator can manage.
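As a small, hedged illustration of that many-servers-as-one idea, standard PowerShell 3.0 remoting can fan a single task out to a whole group of servers; the server names below are hypothetical, and WinRM remoting is assumed to be enabled on each of them:

```powershell
# Build a hypothetical list of 20 web servers: web01, web02, ... web20.
$servers = 1..20 | ForEach-Object { "web{0:D2}" -f $_ }

# Run one task on all of them at once and collect a status row per machine.
Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-Service -Name W3SVC          # check the IIS service on each server
} | Select-Object PSComputerName, Status
```

The same pattern scales from checking a service, as here, to deploying roles or reconfiguring settings, which is how one administrator ends up managing hundreds of machines.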
The range of technology delivered in Windows Server 2012 can be used in a variety of ways to enable private cloud scenarios. For a large, centralized enterprise, large-scale file and Hyper-V clusters can deliver a platform able to run thousands or tens of thousands of highly available VMs. For cases where secure multi-tenancy is required, Network Virtualization and private VLANs can be used to deliver secure and isolated networks for each tenant’s VMs. With continuously available file shares for storing VMs, combined with Live Migration and Live Storage Migration, VMs can be moved anywhere in the datacenter with no downtime.

The compute, network, and storage virtualization provided by Windows Server 2012 delivers the resource pooling, elasticity, and measured service cloud attributes. These capabilities are further improved by disaster recovery and automation technologies. With these and other features, Windows Server 2012 delivers the foundation for the private cloud.

David Ziembicki
Senior Architect, U.S. Public Sector, Microsoft Services

A complete virtualization platform

Virtualization can bring many benefits for businesses, including increased agility, greater flexibility, and improved cost efficiency. Combining virtualization with the infrastructure and tools needed to provision cloud applications and services brings even greater benefits for organizations that need to adapt and scale their infrastructure to meet the changing demands of today’s business environment. With its numerous improvements, Hyper-V in Windows Server 2012 provides the foundation for building private clouds that can deliver the benefits of cloud computing across the business units and geographical locations that typically make up today’s enterprises.
By using Windows Server 2012, you can begin transitioning your organization’s datacenter environment toward an infrastructure as a service (IaaS) private cloud that can provide your business units with the “server instances on demand” capability that they need to grow and respond to changing market conditions.

Hosting providers also can use Windows Server 2012 to build multi-tenant cloud infrastructures (both public and shared private clouds) that they can use to deliver cloud-based applications and services to customers. Features and tools included in Windows Server 2012 enable hosting providers to fully isolate customer networks from one another, deliver support for service-level agreements (SLAs), and enable chargebacks for implementing usage-based customer billing.

Let’s dig into these features and capabilities in more detail. We’ll also get some insider perspective from experts working at Microsoft who developed, tested, deployed, and supported Windows Server 2012 during the early stages of the product release cycle.
Scenario-focused design in Windows Server 2012

One of the best things about Windows Server 2012 is that it was designed from the ground up with a great focus on actual customer scenarios. Windows Server is the result of a large engineering effort, and in past releases, each organization delivered its own technology innovations and roadmap in its respective area. The networking team would build great networking features; the storage team would innovate on file and storage systems; the manageability team would introduce Windows PowerShell to enable a standard way to manage servers; and so on.

Windows Server 2012 is different. Instead of having vertical, technology-focused roadmaps and designs, it was built around specific customer scenarios for the server. I was the scenario leader for the "hosted cloud" scenario, which was all about building the most cloud-optimized operating system ever built and aligning multiple feature crews on enabling enterprises and hosting providers to build clouds that are better than ever.

Scenario-focused design starts by understanding the business need and the real customer pain points and requirements. During the planning phase, we talked to a very long list of customers and did not limit ourselves to any specific technology. Instead, we framed the discussion around the need to build and run clouds, and discovered pain points such as the need to offer secure multi-tenancy and isolation to cloud tenants, so that hosting providers can be more efficient in utilizing their infrastructure and lowering their costs. There's also the need to automate manual processes end to end, because manual processes just don't cut it anymore, and the need to lower the cost of storage, because customers were clearly overpaying for very expensive storage even when they didn't really need it.
We then translated that understanding into investments that cross technology boundaries, investments that will solve those business problems and satisfy the customer requirements.

For example, to enable multi-tenancy, we didn't just add some access control lists (ACLs) on the Hyper-V switch. Instead, we built a much better Hyper-V switch with isolation policy support and added Network Virtualization to decouple the physical cloud infrastructure from the VM networks. Then we added quality of service (QoS) policies to help hosting providers ensure proper SLAs for different tenants, and resource meters to enable them to measure and charge for activities, and we also ensured that everything would be fully automatable (via Windows PowerShell, of course) in a consistent way.

Here's another example: we didn't just add support for a new network interface card (NIC) technology called Remote Direct Memory Access (RDMA). Instead, we designed it to work well with file servers and provided SMB Direct support to enable the use of file servers in a cloud infrastructure over standard Ethernet fabric, and
used Storage Spaces for low-cost disks. This way, performance competitive with SANs is made available at a fraction of the cost.

Finally, scenario-focused design doesn't actually end at the design phase. It's a way of thinking that starts at planning but continues all the way through execution, internal validation, external validation with our TAP program, partner relations, documentation, blogging, and, of course, bringing the product to market. Basically, at every stage of the Windows Server 2012 execution cycle, the focus was on making the scenario work, rather than on making specific features work.

This kind of scenario-focused design requires an amazingly huge collaborative effort across technology teams. This is exactly where Windows Server 2012 shines, and it is the reason you're seeing all of these great innovations coming together in one massive release that will change the way clouds are built.

Yigal Edery
Principal Program Manager, Windows Server

Hyper-V extensible switch

The new Hyper-V extensible switch in Windows Server 2012 is key to enabling the creation of secure cloud environments that support the isolation of multiple tenants. The extensible switch introduces a number of new and enhanced capabilities for tenant isolation, traffic shaping, protection against malicious virtual machines, and hassle-free troubleshooting. It also allows third parties to develop plug-in extensions that emulate the full capabilities of hardware-based switches and support more complex virtual environments and solutions.

Previous versions of Hyper-V allowed you to implement complex virtual network environments by creating virtual network switches that worked like physical layer-2 Ethernet switches.
You could create external virtual networks to provide VMs with connectivity to externally located servers and clients, internal networks to allow VMs on the same host to communicate with each other as well as with the host, or private virtual networks that completely isolate all VMs on the same host from the external network and allow them to communicate only with each other.

The Hyper-V extensible switch facilitates the creation of virtual networks that can be implemented in various ways to provide great flexibility in how you design your virtualized infrastructure. For example, you can configure a guest operating system within a VM to have a single virtual network adapter associated with a specific extensible switch, or multiple virtual network adapters, each associated with a different switch; but you can't connect the same virtual network adapter to multiple switches.

What's new, however, is that the Hyper-V virtual switch is now extensible in a couple of different ways. First, you can now install custom Network Driver Interface Specification (NDIS) filter drivers (called extensions) into the driver stack of the virtual switch. For example, you
could create an extension that captures, filters, or forwards packets to extensible switch ports. Specifically, the extensible switch allows for the following kinds of extensions:

■■ Capturing extensions, which can capture packets to monitor network traffic but cannot modify or drop packets
■■ Filtering extensions, which are like capturing extensions but also can inspect and drop packets
■■ Forwarding extensions, which allow you to modify packet routing and enable integration with your physical network infrastructure

Second, you can use the capabilities of the Windows Filtering Platform (WFP) by using the built-in Wfplwfs.sys filtering extension to intercept packets as they travel along the data path of the extensible switch. You might use this approach, for example, to perform packet inspection within your virtualized environment.

These extensibility capabilities of the Hyper-V extensible switch are intended primarily for Microsoft partners and independent software vendors (ISVs), so they can update their existing network monitoring, management, and security software products to work not just with physical hosts, but also with VMs deployed within any kind of virtual networking environment that you might create using Hyper-V in Windows Server 2012. In addition, being able to extend the functionality of Hyper-V networking by adding extensions makes it easier to add new networking functionality to Hyper-V without needing to replace or upgrade the switch. You'll also be able to use the same tools for managing these extensions that you use for managing other aspects of Hyper-V networking, namely the Hyper-V Manager console, Windows PowerShell, and Windows Management Instrumentation (WMI).
And because these extensions integrate into the existing framework of Hyper-V networking, they automatically work with other capabilities, like Live Migration. Table 2-1 summarizes some of the benefits of the Hyper-V extensible switch from both the IT professional and ISV perspectives.

TABLE 2-1  Benefits of the Hyper-V extensible switch

Key tenet                                Benefit to ISVs                               Benefit to IT professionals
Open platform with public API            Write only the functionalities desired        Minimal footprint for errors
First-class citizen of the system        Free system services (e.g., Live Migration)   Extensions from various ISVs work together
Existing API model                       Faster development                            Larger pool of extension implementers
Logo certification and rich framework    Higher customer satisfaction                  Higher extension quality
Unified Tracing through virtual switch   Lower support costs                           Shorter downtimes
Configuring virtual switches

Figure 2-1 shows the Windows Filtering Platform (WFP) extension selected in the Virtual Switch Manager of the Hyper-V Manager console in Windows Server 2012. Note that once extensions are installed on the host, they can be enabled or disabled, and their order can be rearranged by moving them up or down in the list of switch extensions.

FIGURE 2-1  Virtual switch extensions for the Hyper-V extensible switch.

You can also use Windows PowerShell to create, delete, and configure extensible switches on Hyper-V hosts. For example, Figure 2-2 shows how to use the Get-VMSwitchExtension cmdlet to display details concerning the extensions installed on a specific switch.
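The figures are not reproduced in this text, so here is a hedged sketch of the commands involved. The switch name CONTOSO is taken from the figure captions, and the display name used for the WFP extension is an assumption; check the output of Get-VMSwitchExtension for the exact name on your host.

```powershell
# Display the extensions installed on the virtual switch named CONTOSO:
Get-VMSwitchExtension -VMSwitchName "CONTOSO"

# Extensions can be enabled or disabled individually. The display name
# below is an assumption; confirm it from the listing above first:
Enable-VMSwitchExtension -VMSwitchName "CONTOSO" -Name "Microsoft Windows Filtering Platform"

# List all the cmdlets in the Hyper-V module for managing virtual switches:
Get-Command -Module Hyper-V -Name *VMSwitch*
```

These cmdlets run only on a host with the Hyper-V role and its PowerShell module installed.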
FIGURE 2-2  Displaying all extensions installed on the virtual switch named CONTOSO.

You also can display the full list of Windows PowerShell cmdlets for managing the extensible switch, as Figure 2-3 illustrates.

FIGURE 2-3  Displaying all Windows PowerShell cmdlets for managing virtual switches.

Troubleshooting virtual switches

Microsoft also has extended Unified Tracing through the Hyper-V extensible switch, which makes it easier to diagnose problems that may occur. For example, if you are experiencing issues that you think might be connected with the extensible switch, you could attempt to troubleshoot the problem by turning on tracing with the Netsh command, like this:

netsh trace start provider=Microsoft-Windows-Hyper-V-VmSwitch capture=yes capturetype=vmswitch
Then you would try to reproduce the issue while tracing is turned on. Once the issue has been reproduced, you could disable tracing with netsh trace stop and then review the generated Event Trace Log (ETL) file using Event Viewer or Network Monitor. You also could review the System event log for any relevant events.

Performance monitoring improvements

Windows Server 2012 exposes more Event Tracing for Windows (ETW) data providers and performance items than Windows Server 2008 R2. With this exposure comes the vital need for the IT professional to know which data sets are relevant to their specific monitoring situation. It's neither feasible nor appropriate to just gather everything, for system monitoring has in it a touch of physics: a modified Heisenberg uncertainty principle is afoot. One cannot monitor a system without impacting it to some degree; the question is how much. Finely tuned data collector sets built with Performance Analysis of Logs (PAL) can be used by the IT professional to ensure they are gathering only the data necessary to their problem set, so as not to impact system performance too heavily while monitoring or baselining systems.

One advantage of using ETW data providers rather than performance counter object items is that ETW providers typically come from the kernel itself, rather than from user-mode measurements. This means that the data from ETW data providers is more accurate and more reliable, and it also puts a lower load on the system. ETW logging is also unlikely to suffer from missing data sets under high system load. Look for guidance on which items to collect before diving in, though; ETL tracing can grow log files quickly.

Jeff Stokes
Platforms PFE

Additional capabilities

A number of other advanced capabilities also have been integrated by Microsoft into the Hyper-V extensible switch to help enhance security, monitoring, and troubleshooting functionality.
These additional capabilities include the following:

■■ DHCP guard  Helps safeguard against Dynamic Host Configuration Protocol (DHCP) man-in-the-middle attacks by dropping DHCP server messages from unauthorized VMs pretending to be DHCP servers
■■ MAC address spoofing control  Lets you control whether a VM can change the source MAC address in outgoing packets to an address that is not assigned to it, helping safeguard against ARP-spoofing attempts to steal IP addresses from VMs
■■ Router guard  Helps safeguard against unauthorized routers by dropping router advertisement and redirection messages from unauthorized VMs pretending to be routers
■■ Port mirroring  Enables monitoring of a VM's network traffic by forwarding copies of destination or source packets to another VM being used for monitoring purposes
■■ Port ACLs  Help enforce virtual network isolation by allowing traffic filtering based on media access control (MAC) or IP address ranges
■■ Isolated VLANs  Allow segregation of traffic on multiple VLANs to facilitate isolation of tenant networks through the creation of private VLANs (PVLANs)
■■ Trunk mode  Allows directing traffic from a group of VLANs to a specific VM
■■ Bandwidth management  Allows guaranteeing a minimum amount of bandwidth and/or enforcing a maximum amount of bandwidth for each VM
■■ Enhanced diagnostics  Allow packet monitoring and event tracing through the extensible switch using ETL and Unified Tracing

Most of these additional capabilities can be configured from the graphical user interface (GUI) by opening the VM's settings. For example, by selecting the network adapter under Hardware, you can specify bandwidth management settings for the VM. Figure 2-4 shows these settings configured in such a way that the VM always has at least 50 Mbps of network bandwidth available, but never more than 100 Mbps. If your hosts reside in a shared cloud being used to provision applications and services to business units or customers, these new bandwidth management capabilities can help you meet your SLAs with those business units or customers.

FIGURE 2-4  Minimum and maximum bandwidth settings have been configured for this VM.
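The same minimum and maximum limits can also be set from Windows PowerShell. This is a hedged sketch rather than a listing from the book: the VM name is hypothetical, and Set-VMNetworkAdapter takes absolute bandwidth values in bits per second.

```powershell
# Guarantee at least 50 Mbps and cap at 100 Mbps for the network adapter
# of a VM (the name "SRV-A" is hypothetical; values are bits per second):
Set-VMNetworkAdapter -VMName "SRV-A" `
    -MinimumBandwidthAbsolute 50000000 `
    -MaximumBandwidth 100000000

# Review the configured limits (BandwidthSetting holds the current values):
Get-VMNetworkAdapter -VMName "SRV-A" | Format-List Name, BandwidthSetting
```

As with the GUI settings, this requires the virtual switch to support bandwidth management and must run on the Hyper-V host itself.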
Clicking the plus sign (+) beside Network Adapter in these settings exposes two new pages of network settings: Hardware Acceleration and Advanced Features. We'll examine the Hardware Acceleration settings later in this chapter, but for now, note that the Advanced Features settings let you configure MAC address spoofing, DHCP guard, router guard, port mirroring, and NIC teaming for the selected network adapter of the VM, as shown in Figure 2-5.

As the sidebar demonstrates, you also can use Windows PowerShell to configure and manage the various advanced capabilities of the Hyper-V extensible switch.

FIGURE 2-5  Configuring advanced features for network adapter settings for a VM.

Using Windows PowerShell to configure the extensible switch

Let's briefly look at two scenarios where Windows PowerShell can be used to configure various features of the extensible network switch.

Scenario 1: Enabling advanced networking features

In an upgrade scenario, you want to take advantage of advanced networking features of the extensible network switch. Namely, you want to enable the following on all VMs on a Hyper-V host:

■■ DHCP guard
■■ Router advertisement guard
■■ Virtual Machine Queue (VMQ)

Here's what a VM looks like without any of the advanced networking features enabled:

Now let's do this on every single VM on the Hyper-V host. First, let's list all the VMs by issuing the Get-VM cmdlet:
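The sidebar's screenshots are not reproduced in this text. As a hedged sketch (the VM name wds01 is hypothetical), the inspection and listing steps look like this:

```powershell
# Inspect one VM's advanced networking settings; with none of the
# features enabled, DhcpGuard and RouterGuard report Off:
Get-VMNetworkAdapter -VMName "wds01" |
    Format-List VMName, DhcpGuard, RouterGuard, VmqWeight

# List all the VMs on the local Hyper-V host:
Get-VM
```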
We have four VMs on this host. Let's activate DHCP guard, router advertisement guard, and VMQ in a single line:

Once the Windows PowerShell prompt has returned, we can view the settings on any VM on this host:
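The one-line command from the sidebar's screenshot is not reproduced in this text; a hedged reconstruction follows (the verification VM name is hypothetical, and a VmqWeight of 100 enables VMQ while 0 disables it):

```powershell
# Enable DHCP guard, router guard, and VMQ on every VM on this host:
Get-VM | Set-VMNetworkAdapter -DhcpGuard On -RouterGuard On -VmqWeight 100

# Verify the result on any one VM:
Get-VMNetworkAdapter -VMName "wds01" |
    Format-List VMName, DhcpGuard, RouterGuard, VmqWeight
```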
Note: to do this in a Hyper-V cluster, simply prepend the previous statement with Get-ClusterGroup:

Scenario 2: Configuring ACLs on a VM

Most organizations have a management network segment and will typically associate a physical NIC with the management network segment. Suppose you want to limit the network segment associated with the virtual NIC connected to the management network. Here's how you'd create an ACL to accomplish this:

This cmdlet allows both inbound and outbound traffic to the VM named wds02 from the segment. To view the settings:

Adiy Qasrawi
Consultant, Microsoft Consulting Services (MCS)
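The cmdlets behind the sidebar's screenshots are not reproduced in this text. The following is a hedged reconstruction using the Windows Server 2012 port ACL cmdlets; the subnet shown is a hypothetical stand-in for the management segment referred to above.

```powershell
# Allow inbound and outbound traffic between the VM wds02 and a
# management segment (subnet is a hypothetical example):
Add-VMNetworkAdapterAcl -VMName "wds02" -RemoteIPAddress "" `
    -Direction Both -Action Allow

# View the port ACLs configured on the VM's network adapters:
Get-VMNetworkAdapterAcl -VMName "wds02"
```

The -Action parameter also accepts Deny (to block traffic) and Meter (to measure it), which is how the resource-metering chargeback scenarios mentioned earlier are supported.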
Learn more

IT pros can expect Microsoft partners and ISVs to take advantage of the extensible switch capabilities of Hyper-V in Windows Server 2012 as new versions of their network monitoring, management, and security products begin to appear. For example:

■■ Cisco Systems has announced that its Cisco Nexus 1000V distributed virtual switch will enable full VM-level visibility and security controls in Hyper-V environments.
■■ inMon Corp. has announced that its sFlow traffic monitoring software will deliver comprehensive visibility into network and system resources in Hyper-V virtual environments.
■■ 5nine Software has announced that version 3.0 of 5nine Security Manager will be the first completely host-based virtual firewall with antivirus (AV) for Windows 8.

For an overview of the requirements, implementation, and manageability of the Hyper-V extensible switch, see the topic "Hyper-V Virtual Switch Overview" in the TechNet Library, and the post "Introducing Hyper-V Extensible Switch" on the Server & Cloud Blog (server-cloud/archive/2011/11/08/windows-server-8-introducing-hyper-v-extensible-switch.aspx).

For a detailed overview of how the Hyper-V extensible switch operates and how to write extensions for the switch, see the topic "Hyper-V Extensible Switch" in the Windows Hardware Development section of MSDN.

For a sample base library that can be used to implement a filter driver for the Hyper-V extensible switch, see the topic "Hyper-V Extensible Virtual Switch extension filter driver" in the Samples section of Dev Center - Hardware on MSDN.

For more information on Windows PowerShell cmdlets like Get-VMSwitch, Get-VMSwitchExtension, Set-VMSwitchExtensionSwitchFeature, and other cmdlets for configuring and managing the Hyper-V extensible switch, see "Hyper-V Cmdlets in Windows PowerShell" in the TechNet
Library.

Network Virtualization

As discussed in Chapter 1, "The business need for Windows Server 2012," in the IaaS cloud computing model, the cloud provider runs a datacenter that offers "VMs for rent" along with dynamically allocated resources. The customer owns the VM and manages it as "its server" in the cloud. The meaning of the terms cloud provider and customer can differ, of course,
depending on whether you're talking about a shared private cloud or a shared public cloud. Specifically, the following points apply:

■■ In the shared private cloud scenario, the cloud provider is the organization itself, which owns and operates its own datacenter, whereas the customers might be different business units, departments, or offices in different locations.
■■ In the shared public cloud scenario, the cloud provider is the hosting company, whereas the customers might be large enterprises, mid-sized companies, or even small businesses. The hosting company owns and manages the datacenter and may "rent out" servers to customers, offer colocation of customer-owned servers, or both.

In both scenarios, the cloud provider can provide the numerous benefits of cloud computing to its customers, but typically not without problems using today's technologies. For example, VLANs are typically used by cloud providers to isolate the servers belonging to one customer from those belonging to other customers and provisioned from the same cloud. VLANs accomplish this by adding tags to Ethernet frames. Ethernet switches can then be configured to enforce isolation by allowing nodes that have the same tag to communicate with each other, but not with nodes having a different tag. But VLANs have several limitations:

■■ They have limited scalability, because typical Ethernet switches support no more than 1,000 VLAN IDs (with a theoretical maximum of 4,094).
■■ They have limited flexibility, because a single VLAN can't span multiple IP subnets.
■■ They have high management overhead, because Ethernet switches need to be reconfigured each time a VLAN is added or removed.

Another problem that customers often experience when contemplating moving their computing resources to the cloud is IP addressing.
The issue is that the customer's existing infrastructure typically has one addressing scheme, whereas the datacenter network has an entirely different addressing scheme. So when a customer wants to move one of its servers into the cloud, typically by virtualizing the workload of the existing physical server so that the workload can run as a VM hosted in the cloud provider's datacenter, the customer is usually required to change the IP address of the server so that it fits the addressing scheme of the cloud provider's network. This can pose difficulties, however, because IP addresses are often tied to geographical locations, management policies, and security policies; changing the server's address when its workload is moved into the cloud may result in routing issues, servers moving out of management scope, or security policies failing to be applied properly.

It would simplify cloud migrations a lot if the customer's servers could keep their existing IP addresses when their workloads are virtualized and moved into the cloud provider's datacenter. That way, the customer's existing routing, management, and security policies would continue to work as before. And that's exactly what Network Virtualization does!

How Network Virtualization works

Network Virtualization is a new feature in Windows Server 2012 that lets you keep your own internal IP addresses when moving your servers into the cloud. For example, let's say
that you have three on-premises physical servers with the private IP addresses,, and, and you want to move these servers to the datacenter of a cloud provider called Fabrikam. These servers are currently in the address space, and Fabrikam's datacenter uses for its datacenter network's address space. If Fabrikam has Windows Server 2012 deployed in its datacenter, you're in luck, because your servers can keep their existing IP addresses when their workloads are migrated into VMs running on Fabrikam host machines. This means that your existing clients, which are used to accessing servers located on the subnet, will be able to continue doing so with no modifications needed to your routing infrastructure, management platform, or network security policies. That's Network Virtualization at work.

But what if another customer of Fabrikam uses the exact same subnetting scheme for its own virtualized workloads? For example, let's say that Northwind Traders also has been using on its private network, and one of the servers it has moved into Fabrikam's datacenter has the exact same IP address ( as one of the servers that you've moved into Fabrikam's datacenter. No problem! Network Virtualization in Windows Server 2012 provides complete isolation between VMs belonging to different customers, even if those VMs use the exact same IP addresses.

Network Virtualization works by allowing you to assign two different IP addresses to each VM running on a Windows Server 2012 Hyper-V host. These two addresses are:

■■ Customer address (CA)  The IP address that the server had when it resided on the customer's premises, before it was migrated into the cloud. In the previous example, this might be the address for a particular server that the customer wants to move to the cloud.
■■ Provider address (PA)  The IP address assigned by the cloud provider to the server once the server has been migrated to the provider's datacenter.
In the previous example, this could be, or some other address in the address space.

From the customer's perspective, communication with the migrated server is just the same as if the server still resided on the customer's own premises. This is because the VM running the customer's migrated workload can see and use its customer address and thus can be reached by other hosts on the customer's network. The VM cannot see or use its provider address, however, because this address is visible only to hosts on the cloud provider's network.

Network Virtualization thus lets the cloud provider run multiple virtual networks on top of a single physical network, in much the same way that server virtualization lets you run multiple virtual servers on a single physical server. Network Virtualization also isolates each virtual network from every other virtual network, with the result that each virtual network has the illusion that it is a separate physical network. This means that two or more virtual networks can have the exact same addressing scheme, yet the networks will be fully isolated from one another and each will function as if it is the only network with that scheme.
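Under the hood, hosts participating in Network Virtualization hold CA-to-PA mappings as lookup records. The following is a hedged sketch using the Windows Server 2012 network virtualization (NetWNV) cmdlets; the interface index, MAC address, and virtual subnet ID are hypothetical, and in practice System Center Virtual Machine Manager maintains these records for you rather than you entering them by hand.

```powershell
# Register a provider address (PA) on the host's physical NIC
# (the interface index 12 is hypothetical):
New-NetVirtualizationProviderAddress -InterfaceIndex 12 `
    -ProviderAddress "" -PrefixLength 24

# Map a VM's customer address (CA) to the host's provider address.
# VirtualSubnetID identifies the tenant's virtual network, and the
# TranslationMethodEncap rule selects NVGRE encapsulation:
New-NetVirtualizationLookupRecord -CustomerAddress "" `
    -ProviderAddress "" -VirtualSubnetID 5001 `
    -MACAddress "00155D010A00" -Rule "TranslationMethodEncap"

# Inspect the CA-to-PA mappings known to this host:
Get-NetVirtualizationLookupRecord
```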
To make this all happen, Network Virtualization needs a way of virtualizing IP addresses and mapping them to physical addresses. Network Virtualization in Windows Server 2012 offers two ways of accomplishing this:

■■ Network Virtualization Generic Routing Encapsulation (NVGRE)  In this approach, all the VM's packets are encapsulated with a new header before they are transmitted onto the physical network. NVGRE requires only one PA per host, which is shared by all VMs on that host.
■■ IP rewrite  This approach modifies the customer addresses of packets while they are still on the VM, before they are transmitted onto the physical network. IP rewrite requires a one-to-one mapping of customer addresses to provider addresses.

NVGRE is compatible with today's datacenter network hardware infrastructure and is the recommended approach for implementing Network Virtualization.

Because Network Virtualization is intended for datacenters, implementing it requires that you have a VM management framework in place. System Center Virtual Machine Manager 2012 Service Pack 1 provides such a framework and lets you use Windows PowerShell or WMI to create and manage virtual networks.

Benefits of Network Virtualization

Network Virtualization is key to being able to build and provision multi-tenant cloud services, both for shared private clouds, where the "customers" are different business units or departments, and for public cloud scenarios, where the cloud provider offers "space to rent" to all comers. Network Virtualization lets you create multi-tenant networks where each network is fully isolated from all other networks, and it does this without any of the limitations of, or overhead associated with, creating and managing VLANs.
This means that cloud providers can use Network Virtualization to create as many networks as they want (thousands upon thousands of them, for example, in the case of a large hosting provider) and then move workloads anywhere they want, without having to perform the arduous (and error-prone) task of reconfiguring VLANs.

Network Virtualization also provides greater flexibility for VM placement, which helps reduce overprovisioning and fragmentation of resources for the cloud provider. By enabling dynamic VM placement, the cloud provider can make the best use of the compute, network, and storage resources within the datacenter and can monitor and control the provisioning of these resources more easily.

Regardless of whether you are a customer looking to migrate your server workloads into the cloud, an enterprise seeking to implement a shared private cloud for provisioning "servers for rent" to different divisions or locations, or a hosting provider wanting to offer cloud hosting services to large numbers of customers, Network Virtualization in Windows Server 2012 provides the foundation for achieving your goals. Table 2-2 summarizes the benefits of Network Virtualization to these different parties.
TABLE 2-2  The benefits that Network Virtualization can provide to customers, enterprises, and hosting providers

■■ The customer who owns the workload that needs to be moved into the cloud: seamless migration to the cloud; easy movement of a three-tier topology to the cloud
■■ An enterprise seeking to deploy a shared private cloud: easy cloud bursting; preservation of VM settings, IP addresses, and policies; cross-premises server-to-server connectivity
■■ A hosting provider wanting to offer secure, multi-tenant "servers for rent" using a shared public cloud: flexible VM placement requiring no network reconfiguration; creation and management of large numbers of tenant networks

Network Virtualization operational challenges

The Network Virtualization capabilities found in Windows Server 2012 provide a fresh approach to an old problem, primarily that of operator density. Operators, or service providers, are no longer interested in 1:1 solutions. They want more virtual servers per physical server today, the same way they wanted more subscribers for a given pool of dial-up modems in the early days of the World Wide Web. Density typically came at the price of mobility and scalability. Today, of course, this is less of an issue, at least in datacenter virtualization scenarios, as we can have density pushing the limits of hardware while maintaining mobility and scalability.

There was always one difficult problem to solve: how to extend the mobility and scalability of a single datacenter to two or more. This was often required either as datacenters ran out of space, or after a merger or acquisition. Nearly every customer I have worked with in the past five years has or had a datacenter relocation or consolidation project. The problem was now: how do I move these servers to new datacenters while maintaining all the monitoring and security policies associated with their location?
The answer usually consisted of storage and network architects sitting down and installing new network and storage equipment that effectively extended the network subnet(s) from one datacenter to another. This, in a way, was the precursor to Network Virtualization, and we were able to learn a lot from it, especially with respect to the newer problems we discovered as a result. Some of the problems we discovered included:

■■ Application behavior  Moving the VM from one datacenter to another typically introduced network latency. Some applications just did not behave well with the added latency.
■■ Supportability  It was now difficult for datacenter technicians to tell which datacenter a VM was located in by looking at its IP address.
■■ Licensing  Some vendors used to license their products to a single IP address. This proved challenging to customers, so certain vendors changed their licensing to be based on the MAC address of the host's NIC. This meant that moving the VM (while keeping its IP address) to another datacenter required it to keep the same MAC address as well. Although this is typically possible within a single management domain, it is impossible to guarantee when that VM is being moved, for instance, to a service provider or a private cloud provider.

Looking back at these problems, I realized that the key to avoiding them was to involve the application and server operations teams. Although this sounds incredibly trivial in theory, it is incredibly difficult to do in practice. How often do you get involved in a project involving Network Virtualization if you are the corporate custodian or owner of the HR application, for instance?

Server virtualization forced teams to learn to communicate with other teams. Network Virtualization will make that even more critical.
When you decide to implement the Network Virtualization features found in Windows Server 2012, consider adding people with operational experience to your team, and ensure that key application support teams are also consulted.

Adiy Qasrawi
Consultant, Microsoft Consulting Services

Learn more

For an overview of how Network Virtualization works, see the topic "Hyper-V Network Virtualization Overview" in the TechNet Library. For more detailed information about deploying Network Virtualization, see the topic "Network Virtualization technical details" in the TechNet Library. Also, be sure to watch the video "Building secure, scalable multi-tenant clouds using Hyper-V Network Virtualization" from Microsoft's Build conference on Channel 9.

The TechNet Script Center has the following demo scripts that show how to deploy Network Virtualization:

■■ Simple Hyper-V Network Virtualization Demo
■■ Simple Hyper-V Network Virtualization Script with Gateway

Another good source of information on Network Virtualization is the Windows Server 2012 Hyper-V Network Virtualization Survival Guide, which can be found in the TechNet Wiki
at hyper-v-network-virtualization-survival-guide.aspx.

Note that System Center Virtual Machine Manager 2012 Service Pack 1 is required for implementing Network Virtualization with Windows Server 2012 Hyper-V hosts. For more information about the System Center 2012 family of products, see Microsoft's System Center website.

Improved Live Migration

Live Migration was introduced in Windows Server 2008 R2 to provide a high-availability solution for VMs running on Hyper-V hosts. Live Migration uses the Failover Clustering feature to allow running VMs to be moved between cluster nodes without perceived downtime or loss of network connection. Live Migration provides the benefit of increased agility by allowing you to move running VMs to the best host for improving performance, achieving better scaling, or ensuring optimal workload consolidation. Live Migration also helps increase productivity and reduce cost by allowing you to service your host machines without interruption or downtime for your virtualized workloads.

Live Migration in Windows Server 2008 R2 required storing VMs on an Internet Small Computer Systems Interface (iSCSI) or Fibre Channel SAN. In addition, Live Migration in Windows Server 2008 R2 supported performing only a single Live Migration at a time; multiple simultaneous Live Migrations were not supported.

Live Migration in Windows Server 2012 has been improved in several significant ways. First, Live Migrations can be performed much more quickly. In fact, you can even saturate a 10 GbE network connection when performing a Live Migration between Windows Server 2012 Hyper-V hosts, something you couldn't do with Windows Server 2008 R2 Hyper-V hosts.

A second improvement to Live Migration in Windows Server 2012 is that you can now perform multiple Live Migrations simultaneously within the same failover cluster.
This means, for example, that if you need to take down a particular cluster node for immediate servicing, you can migrate all running VMs from that node to a different node quickly and simultaneously in a single operation, using either the GUI or a Windows PowerShell command. This can greatly simplify the task of performing maintenance on Hyper-V hosts within your environment.

A third improvement is that Live Migration is now possible even if you don't have a failover clustering infrastructure deployed. In Windows Server 2008 R2, Live Migration required installing the Failover Clustering feature, and you also needed to ensure that Cluster Shared Volume (CSV) storage was enabled so that the logical unit number (LUN) on which your VM is stored could be accessed by any cluster node at any given time. With Windows Server 2012, however, you have two additional options for Live Migration that can be performed outside a failover clustering environment:
■■ You can store your VMs on a shared folder on your network, which lets you live-migrate between non-clustered Hyper-V hosts while leaving the VM's files on the share.

■■ You can also live-migrate a VM directly from one stand-alone Hyper-V host to another without using any shared storage at all.

Let's look at these two Live Migration options in a bit more detail.

Live Migration using a shared folder

With Hyper-V in Windows Server 2012, you can now store all of a VM's files on a shared folder on your network, provided the shared folder is located on a file server running Windows Server 2012 (see Figure 2-6). The reason the shared folder must be located on a file server running Windows Server 2012 is that this scenario is supported only through the new capabilities of version 3 of the Server Message Block (SMB) protocol (SMB 3). For more information about SMB 3 and the new continuously available file server capabilities of Windows Server 2012, see the section titled "SMB 3," later in this chapter.

FIGURE 2-6  Live Migration using SMB 3 shared storage but no clustering.
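Creating a share suitable for hosting VM files can be scripted with the Windows Server 2012 SMB cmdlets. The following is a minimal sketch only; the share name, path, domain, and host names (CONTOSO, HOST-A, HOST-B) are illustrative assumptions, not from the book, and constrained delegation must still be configured separately as described below.

```powershell
# Run on the Windows Server 2012 file server.
# Grant full control to both Hyper-V hosts' computer accounts and the admins
# (hypothetical names; computer accounts end in $).
New-SmbShare -Name VMShare -Path C:\Shares\VMs `
    -FullAccess "CONTOSO\HOST-A$", "CONTOSO\HOST-B$", "CONTOSO\Domain Admins"

# Mirror the share permissions onto the NTFS ACL of the underlying folder.
(Get-SmbShare -Name VMShare).PresetPathAcl | Set-Acl
```

A VM created with its files under \\fileserver\VMShare can then be live-migrated between the two hosts while its storage stays on the share.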
Live Migration using SMB 3 shared storage does not in itself provide high availability unless the file share itself is also highly available. It does, however, provide the benefit of enhanced VM mobility, and this added mobility can be achieved without the high costs associated with SANs and their switching fabric. SANs also add extra management overhead in the form of provisioning and managing LUNs. By simply deploying a Windows Server 2012 file server, you can centralize storage of the VMs in your environment without the added cost and management overhead associated with using a SAN.

Live Migration using SMB 3 shared storage does have a few requirements: the permissions on the share must be configured appropriately, constrained delegation must be enabled in the Active Directory directory service, and the path to the shared storage must be configured correctly in the VM's settings. But once everything is set up properly, the procedure for performing a Live Migration is essentially unchanged from before.

Experiencing SMB share hosting

Being an infrastructure consultant for the better part of my IT career has included some very deep and thought-provoking discussions about the best ways to accomplish certain goals. Whether the dialogue covered topics such as virtualization, storage, or applications, the common theme throughout was clearly protection of one's digital assets and data. Everyone wanted to design a cost-effective (operative term) disaster recovery solution for their workloads without affecting performance or impacting users; however, we all know that you get what you pay for where disaster recovery solutions are concerned.

I was recently told that the number two reason for a company implementing a virtualization strategy is disaster recovery.
That makes sense to me; however, most of the underlying infrastructure required for a physical server disaster recovery environment is still required in the virtualized world. We still needed the replication of our data through underlying storage. We still needed the "cold spare" hardware to pick up where our primary servers left off. Don't get me wrong—these solutions are fantastic, but in the end, they are quite costly. There needed to be a way to ensure that the smaller IT budgets of the world did not fall to the bottom of the "you get what you pay for in disaster recovery" bucket.

Enter Windows Server 2012. In my opinion, this is truly the first "cloud-ready" piece of software that I have seen capture the entire portfolio of cloud readiness features. Shifting the focus to the mobility of workloads (which is the basis for improving upon current disaster recovery functionality) was clearly a theme when designing this software. The ability to never have to turn your VM off, regardless of scenario, is the holy grail of disaster recovery.
So of course, being an engineer, I wanted to play with this stuff. After installing a couple of Hyper-V and file servers, I decided to test an SMB share hosting my VM files. As non-complex as that sounds, it was quite cool to see my associated virtual hard disk (VHD) be linked to a network path.

Just a tip: for a proof-of-concept setup, make sure you have solid name resolution going on (which often gets overlooked in labs), or alternatively, use IP addresses.

I decided to see what I could do with this share, and without knowing it, stumbled upon an improvement to Live Migration. In Windows Server 2012, you can seamlessly migrate a VM hosted on an SMB file share (this needs to be SMB 3—currently Windows Server 2012 only) to any other host in the same domain (given share permissions). I chose to move my machine to another host of mine, and before I was able to Alt-Tab to the documentation and back, my VM had already moved. What I had forgotten at the time of migration was that I never did any prerequisite storage configuration on any of the host machines, which made the whole experience much more exciting. It just worked. I couldn't wait to couple this with the other optimization technologies built in to the operating system (de-duplication and compression) for some real gains.

Then, my engineering mind went to the next obviously logical step: "Okay, how can I break this thing?" The demos I had seen of this had shown Live Migration with workloads such as pings and file copies. That just didn't do it for me . . . I wanted my VM to host streaming video. With my setup in place, I streamed not one, but two video files to different clients on my network and monitored them. One stream was a simple AVI file hosted on a file share. The other was a high-definition video file hosted by a server-side transcoder that was streaming to my laptop. I also had a ping going just for kicks.
The CPU had settled around 30 percent on the VM once both videos were going, so I was interested to see what the results would be. Once Live Migration kicked in, I was watching for any blip or interruption to the video files, with no result. The biggest interruption, almost amusingly, was a single dropped ping in my command prompt. Being overly satisfied with my little demo environment, I proceeded to watch the rest of my movie.

To sum up, mobility is the key. There is a huge array of other features that Windows Server 2012 brings to the table. As you're reading the rest of this book, keep in mind the high-level view of cloud readiness and how all of the features in Windows Server 2012 play toward this common goal.

Ted Archer
Consultant, Virtualization and Core Infrastructure
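A migration like the one in the sidebar can be driven entirely from Windows PowerShell. The sketch below is illustrative only: the host and VM names (HOST-B, SRV-A) are assumptions, and it presumes two domain-joined Windows Server 2012 Hyper-V hosts with the VM's files already on an SMB 3 share.

```powershell
# Run on each host: allow incoming and outgoing live migrations.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
           -MaximumVirtualMachineMigrations 4
# (Kerberos authentication from a remote management station additionally
# requires constrained delegation to be configured in Active Directory.)

# From the source host, move the running VM to the destination host.
# Because the VM's files are on an SMB 3 share, only the running state
# moves; the storage stays on the share.
Move-VM -Name "SRV-A" -DestinationHost "HOST-B"
```

The same Move-VM cmdlet also drives the shared-nothing scenario described next, with additional storage parameters.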
Live Migration without shared storage

Windows Server 2012 also allows you to live-migrate VMs between stand-alone Hyper-V hosts without the use of any shared storage. This scenario is also known as Live Migration Without Infrastructure (or "Shared Nothing" Live Migration), and the only requirements are that the two hosts must belong to the same Active Directory domain and that they must be using processors from the same manufacturer (all AMD or all Intel, for instance). When Live Migration without infrastructure is performed, the entire VM is moved from the first host to the second with no perceived downtime. The process basically works like this (see Figure 2-7):

1. The Virtual Machine Management Service (VMMS; Vmms.exe) on the first host (where the VM originally resides) negotiates and establishes a Live Migration connection with the VMMS on the second host.

2. A storage migration is performed, which creates a mirror on the second host of the VM's VHD file on the first host.

3. The VM state information is migrated from the first host to the second host.

4. The original VHD file on the first host is then deleted, and the Live Migration connection between the hosts is terminated.

FIGURE 2-7  How Live Migration without shared storage works in Windows Server 2012.

Performing Live Migration

Live Migration can be performed from the GUI or by using Windows PowerShell, but first you need to enable Live Migration functionality on your host machines. This can be done by using the Hyper-V console to open the Hyper-V Settings dialog box, as shown in Figure 2-8.
FIGURE 2-8  Enabling Live Migrations in Hyper-V Settings.

The tools that you can use to perform a Live Migration depend on the kind of Live Migration you want to perform. Table 2-3 summarizes the different methods for performing Live Migrations in failover clustering environments, Live Migrations using SMB 3 shares, and Live Migrations without infrastructure.

TABLE 2-3  Methods for performing different types of Live Migrations

Type of Live Migration: VM is on a cluster node and managed by the cluster.
GUI tools: Failover Cluster Manager
Windows PowerShell cmdlets: Move-ClusterVirtualMachineRole, Move-VM

Type of Live Migration: VM is on an SMB 3 share.
GUI tools: Hyper-V Manager
Windows PowerShell cmdlets: Move-VM

Type of Live Migration: VM is on a stand-alone host.
GUI tools: Hyper-V Manager
Windows PowerShell cmdlets: Move-VM

Windows Server 2012 gives you great flexibility in how you perform Live Migrations of running VMs, including moving different VM components to different locations on the destination host when performing Live Migrations with or without shared storage. To see this, right-click a running VM in Hyper-V Manager and select Move to start the wizard for moving VMs. The first choice you make is whether to move the VM (and, optionally, its storage) to a different host or to move only the VM's storage.
Moving the storage of a running VM is called storage migration and is a new capability of Hyper-V in Windows Server 2012. We'll look at storage migration in Chapter 3, "Highly Available, Easy-to-Manage Multi-Server Platform," but for now, let's say that you decide to move the VM by selecting the first option described previously. Once you've specified the name of the host you want to move the VM to, you're presented with three options:

■■ Moving all the VM's files to a single location
■■ Moving different files of the VM to different locations
■■ Moving all the VM's files except its VHDs

In each case, the target location can be a shared folder on a Windows Server 2012 file server or a local directory on the destination host.
If you choose the second option of moving different files of the VM to different locations, you're presented with additional options for specifying how to move the storage. Choosing to move the VM's items to different locations lets you specify which items you want to move, including the VHDs, current configuration, snapshot files, and smart paging files for the VM.
Additional wizard pages allow you to specify the exact way in which these items should be moved.

Learn more

For an overview of Live Migration improvements in Windows Server 2012, and for more information about how the Live Migration process works, see the topic "Virtual Machine Live Migration Overview" in the TechNet Library. For a step-by-step guide to configuring Live Migration without using failover clustering, see the topic "Configure and Use Live Migration on Non-clustered Virtual Machines" in the TechNet Library.

Network quality of service (QoS)

In the section titled "Hyper-V extensible switch," earlier in this chapter, we looked at the new bandwidth management capabilities in Hyper-V, which allow for guaranteeing a minimum amount of bandwidth and/or enforcing a maximum amount of bandwidth for each VM running on a host. This is just one example, however, of the powerful new bandwidth management capabilities built into Windows Server 2012. The term quality of service (QoS) refers to technologies used for managing network traffic in ways that can meet SLAs and/or enhance user experiences in a cost-effective manner. For example, by using QoS to prioritize different types of network traffic, you can ensure that mission-critical applications and services are delivered according to SLAs while optimizing user productivity.

As we saw in the earlier section, Hyper-V in Windows Server 2012 lets you specify upper and lower bounds for the network bandwidth used by VMs. This is an example of software QoS at work, where packet scheduling is implemented by the operating system. But Windows Server 2012 also supports implementing QoS through the use of network adapter hardware that supports Data Center Bridging (DCB), a technology that provides performance guarantees for different types of network traffic.
DCB is typically found in 10 GbE network adapters and certain kinds of switching fabrics.

The enhanced QoS capabilities included in Windows Server 2012 are particularly useful in shared cloud environments, where the cloud provider wants to ensure that each customer (or business unit, for shared private clouds) is able to access the computing, storage, and network resources they need and have paid for or been guaranteed. Customers (and departments of large enterprises) need predictable performance from the applications and services they access from the cloud, and the enhanced QoS capabilities in Windows Server 2012 can help ensure this.
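The per-VM bandwidth bounds mentioned above can be set from Windows PowerShell as well as from the VM's settings dialog. This is a minimal sketch; the VM names and the bandwidth values are illustrative assumptions, not from the book.

```powershell
# Guarantee a floor and enforce a ceiling for one VM's network adapter;
# values are in bits per second (about a 100 Mbps floor and 500 Mbps cap).
Set-VMNetworkAdapter -VMName "SRV-A" `
    -MinimumBandwidthAbsolute 100000000 -MaximumBandwidth 500000000

# Alternatively, assign a relative weight instead of an absolute floor
# (requires a virtual switch created with -MinimumBandwidthMode Weight).
Set-VMNetworkAdapter -VMName "SRV-B" -MinimumBandwidthWeight 5
```

Absolute floors and relative weights correspond to the two cloud charging models discussed later in this section.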
But these enhanced QoS capabilities can also provide benefits to the cloud provider. Previously, to ensure that all customers accessing a shared cloud had enough computing, storage, and network resources to meet their needs, cloud providers often overprovisioned, running fewer VMs on more hosts, plus extra storage and network resources, to ensure that each customer had enough. For example, the cloud provider might use separate networks for application, management, storage, and Live Migration traffic to ensure that each type of workload can achieve the required level of performance. But building and managing multiple physical networks like this can be expensive, and the provider may have to pass the cost on to the customer to ensure profitability.

With the enhanced QoS capabilities in Windows Server 2012, however, cloud providers can ensure that SLAs are met while using their physical host, storage, and network resources more efficiently, which means cost savings from needing fewer hosts, less storage, and a simpler network infrastructure. For example, instead of using multiple overlapping 1 GbE networks for different kinds of traffic, the provider can use a single 10 GbE network backbone (or two, for high availability) with each type of traffic carried on it prioritized through the use of QoS policies.

From the perspective of enterprises wanting to build private clouds and hosting providers wanting to build public clouds, QoS allows replacing multiple physical networks with a single converged network carrying multiple types of traffic, with each traffic type guaranteed a minimum amount of bandwidth and limited to a maximum amount of bandwidth. Implementing a QoS solution thus can save enterprises and hosting providers money in two ways: less network hardware is needed, and high-end network hardware such as 10 GbE network adapters and switches can be used more efficiently.
Note, however, that the converged fabric still needs to be carved up into management and production networks for security reasons.

The bottom line is that the old approach of overprovisioning the network infrastructure for your datacenter is inefficient from a cost point of view and can now be superseded by using the new QoS capabilities in Windows Server 2012. Instead of using multiple physical network fabrics like 1 GbE, iSCSI, and Fibre Channel to carry the different kinds of traffic in your multi-tenant datacenter, QoS and other enhancements in Windows Server 2012 now make it possible to use a single converged 10 GbE fabric within your datacenter.

Implementing QoS

There are a number of different ways of implementing software-based control of network traffic in Windows Server 2012. For example:

■■ You can configure Hyper-V QoS as described previously by enabling bandwidth management in the settings of your VMs to guarantee a minimum amount of bandwidth and/or enforce a maximum amount of bandwidth for each VM.

■■ You can use Group Policy to implement policy-based QoS by tagging packets with an 802.1p value to prioritize different kinds of network traffic.
■■ You can use Windows PowerShell or WMI to enforce minimum and maximum bandwidth and 802.1p or Differentiated Services Code Point (DSCP) marking on filtered packets.

There are additional ways of implementing QoS as well. The method(s) you choose will depend upon the network infrastructure you have and the goals that you are trying to achieve. See the "Learn more" section for more information about QoS solutions for Windows Server 2012. In terms of which QoS functionality to use in a given scenario, the best practice is to configure Hyper-V QoS for VMs and then create QoS policies when you need to tag traffic for end-to-end QoS across the network.

QoS and the cloud

If you are a hosting provider or a large enterprise that wants to deploy a shared private cloud that provides "servers for rent" to customers or business units, there are several ways that you can configure Hyper-V QoS to assign a minimum bandwidth to each customer or business unit that accesses applications and services from your cloud:

■■ Absolute minimum bandwidth  In this scenario, you could define different service tiers, such as bronze for 100 Mbps access, silver for 200 Mbps access, and gold for 500 Mbps access. Then you can assign the appropriate minimum bandwidth level to customers based on the level of their subscription.

■■ Relative minimum bandwidth  In this scenario, you could assign different weights to different customer workloads, such as a weight of 1 for normal-priority workloads, 2 for high-priority workloads, and 5 for critical-priority workloads. Then you could assign a minimum bandwidth to each customer based on their workload weight divided by the total weight of all customers accessing your cloud.

Note that minimum bandwidth settings configured in Hyper-V QoS are applied only when there is contention for bandwidth on the link to your cloud. If the link is underused, the configured minimum bandwidth settings will have no effect.
For example, if you have two customers, one with gold (500 Mbps) access and the other with silver (200 Mbps) access, and the link between the cloud and these customers is underused, the gold customer will not have 500/200 = 2.5 times more bandwidth than the silver customer. Instead, each customer will have as much bandwidth as they can consume.

Absolute minimum bandwidth can be configured using the Hyper-V Settings in Hyper-V Manager, as shown previously in this chapter, or from Windows PowerShell by using the Set-VMSwitch cmdlet. Relative minimum bandwidth can be configured only from Windows PowerShell, by using the Set-VMSwitch cmdlet.

As far as configuring maximum bandwidth is concerned, the reason for doing this in cloud environments is mainly that wide area network (WAN) links are expensive. So if you are a hosting provider and a customer accesses its "servers in the cloud" via an expensive WAN
  • 59. 48 CHAPTER 2 Foundation for building your private cloudlink, it’s a good idea to configure a maximum bandwidth for the customer’s workloads to cap­throughput for customer connections to their servers in the cloud.Data Center Bridging (DCB)Data Center Bridging (DCB) is an IEEE standard that allows for hardware-based bandwidthallocation for specific types of network traffic. The standard is intended for network adapterhardware used in cloud environments so that storage, data, management, and other kinds oftraffic all can be carried on the same underlying physical network in a way that guaranteeseach type of traffic its fair share of bandwidth. DCB thus provides an additional QoS ­solutionthat uses hardware-based control of network traffic, as opposed to the software-based­solution described previously.Windows Server 2012 supports DCB, provided that you have both DCB-capable Ethernetnetwork adapters and DCB-capable Ethernet switches on your network.Learn moreFor an overview of QoS improvements in Windows Server 2012, see the topic “Quality ofService (QoS) Overview” and its subtopics in the TechNet Library starting at the syntax of the Set-VMSwitch cmdlet, which can be used for configuringboth absolute and relative minimum bandwidth, see a discussion of how converged networks using QoS and other Windows Server2012 features can benefit your datacenter, see the post titled “Cloud Datacenter ­Network­Architecture in the Windows Server 2012 era” by Yigal Edery on the Private Cloud­Architecture Blog on TechNet at meteringResource metering is a new feature of Windows Server 2012 designed to make it easier tobuild solutions for tracking how cloud services are consumed. Such tracking is ­importantin both enterprise and hosting scenarios. 
For example, if a hosting provider delivers cloud-based applications and services to customers, the hosting provider needs a way of tracking how many resources those customers are consuming in order to bill them for their use of these resources. Similarly, if a large enterprise has deployed a shared private cloud that is accessed by different business units within the organization, the enterprise needs a way of tracking how many cloud resources each business unit is consuming. This information may be needed for internal billing purposes, or it may be used to help plan how cloud resources are allocated so that each business unit gets its fair share of the resources it needs.
Previously, enterprises or hosting providers who deployed shared private or public cloud solutions using Hyper-V virtualization in Windows Server 2008 and Windows Server 2008 R2 had to create their own chargeback solutions from scratch. Such solutions typically were implemented by polling performance counters for processing, memory, storage, and networking. With the new built-in resource metering capabilities in Windows Server 2012, however, these organizations can use Windows PowerShell to collect and report on historical resource usage of the following metrics:

■■ Average CPU usage by a VM
■■ Average physical memory usage by a VM
■■ Minimum physical memory usage by a VM
■■ Maximum physical memory usage by a VM
■■ Maximum amount of disk space allocated to a VM
■■ Total incoming network traffic for a virtual network adapter
■■ Total outgoing network traffic for a virtual network adapter

In addition, these metrics can be collected in a consistent fashion even when VMs are moved between hosts using Live Migration or when their storage is moved using storage migration. And for billing of network usage, you can differentiate between billable Internet traffic and non-billable internal datacenter traffic by configuring network metering port ACLs.

Implementing resource metering

As an example, let's use resource metering to measure resource usage for a VM on our Hyper-V host. We start by enabling resource metering for the VM SRV-A using the Enable-VMResourceMetering cmdlet, and then we verify that resource metering has been enabled by piping the output of the Get-VM cmdlet into the Format-List cmdlet. We can then use the Measure-VM cmdlet to report resource utilization data for our VM.
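The original screen captures of this walkthrough are not reproduced here; the command sequence just described can be sketched as follows (the VM name SRV-A comes from the text, and the property selection in the Format-List step is illustrative):

```powershell
# Start collecting metering data for the VM named SRV-A.
Enable-VMResourceMetering -VMName "SRV-A"

# Verify that metering is now enabled on the VM.
Get-VM -Name "SRV-A" | Format-List Name, ResourceMeteringEnabled

# Report the utilization collected since metering was enabled (or last
# reset): average CPU, average/min/max memory, disk, and network totals.
Measure-VM -VMName "SRV-A"
```

Running Reset-VMResourceMetering -VMName "SRV-A" at the end of a billing period clears the counters so the next period starts fresh.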
You can also create resource pools for reporting usage of different types of resources, such as Processor, Ethernet, Memory, or VHD. For example, you could create a new resource pool named PoolOne using the New-VMResourcePool cmdlet. Then, once you've enabled resource metering on the new pool using the Enable-VMResourceMetering cmdlet, you can use the Measure-VMResourcePool cmdlet to report processor utilization for the pool. You can also use the Reset-VMResourceMetering cmdlet to reset the collection of resource metering data.

Resource metering data can be collected, retrieved, and reported by combining different Windows PowerShell cmdlets using pipelines. To configure the network metering port ACLs that differentiate different kinds of traffic, you can use the Add-VMNetworkAdapterAcl cmdlet.

Learn more

For an overview of resource metering in Windows Server 2012, see the topic "Hyper-V Resource Metering Overview" in the TechNet Library. For a list of Windows PowerShell cmdlets that can be used for managing Hyper-V in Windows Server 2012, see the topic "Hyper-V Cmdlets in Windows PowerShell" in the TechNet Library.

Increase scalability and performance

Building cloud solutions, whether private or public clouds, requires an investment of time, energy, and money. To ensure the best return on your investment, you need to build your solution on a platform that can scale and perform well to meet the changing demands of your business. This means being able to take advantage of cutting-edge hardware that can provide extreme performance while handling the largest possible workloads. It means being able to use resources effectively at every level, while ensuring that SLAs can be met. It means reducing the chances of mistakes occurring when maintenance tasks are performed.
And it means being able to monitor performance effectively to ensure that computing, storage, and network resources are used with maximum efficiency.

Windows Server 2012 delivers a virtualization platform that can achieve the highest levels of performance while delivering extreme scalability that enables new scenarios for migrating
  • 62. Increase scalability and performance CHAPTER 2 51massive workloads into the cloud. This section examines some new features in Hyper-V and inthe underlying operating system that enable such increased scalability and performance.Expanded processor and memory supportHyper-V in Windows Server 2008 R2 has been embraced by many organizations as away of making more efficient use of physical server hardware through virtualization and­consolidating server workloads. But limitations in the number of logical processors ­supportedon the host and for VMs, together with limitations of how much physical memory canbe ­supported on the host and assigned to VMs, has meant that Windows Server 2008 R2lacked sufficient scalability for certain types of mission-critical business applications. Forexample, large database applications often require large amounts of memory and manylogical ­processors when used to implement business solutions involving online transaction­processing (OLTP) or online transaction analysis (OLTA). Until now, the idea of moving suchapplications into the cloud has been mostly a dream.Windows Server 2012 changes all this in the following ways:■■ Through its increased processor and memory support on the virtualization host by enablingthe use of up to 160 logical processors and 2 TB of physical memory per host system■■ Through its increased virtual processor and memory support for VMs by enabling theuse of up to 32 virtual processors and 1 TB of memory per VMIncreased host processor and memory supportThe advent of Windows Server 2012 brings the expansion of processor andmemory support in Windows Server 2012. In Windows Server 2008 R2, thehost system had limitations of the amount of maximum logical processors (cores,­Hyper-Threading, individual CPUs) and memory available for use between the hostand the VM. 
To illustrate this point, note the following:

Windows Server 2008 R2 SP1 supported up to:

■■ 64 logical processors per host
■■ 1 TB of memory per host
■■ 4 virtual processors per VM
■■ 64 GB of memory per VM

Windows Server 2012 now supports up to:

■■ 320 logical processors per host
■■ 4 TB of memory per host
■■ 64 virtual processors per VM (up to a maximum of 2,048 virtual processors per host)
■■ 1 TB of memory per VM
52 CHAPTER 2 Foundation for building your private cloud

Please keep in mind that these maximums depend largely on the configuration of your hardware and on the support of the guest operating system and the integration services provided for the VM. The expanded processor and memory allocations allow your administrators to allocate VM resources as needed. Because many enterprise-scale applications continue to consume additional resources to meet the needs of the organization, Microsoft has taken steps to address this demand by increasing memory and processor support in Windows Server 2012.

One of the points brought to our attention in Windows Server 2008 R2 Hyper-V was the limitation of the hardware presented to the VM. With a large number of IT organizations seeking to consolidate their server farms to a handful of servers and virtualize many large infrastructure applications such as Microsoft SQL Server and Microsoft Exchange Server, we decided to move toward larger scalability for these VMs in Windows Server 2012. With Windows Server 2012, the number of virtual processors that you can have on a SQL Server virtual machine can go to a maximum of 64 virtual CPUs, a large increase from the 4 available in Windows Server 2008 R2.

Additional RAM is another point that our customers had requested be available to their virtual machines. With hardware able to run multiple terabytes of RAM, and physical systems running 32, 64, or 128 GB of RAM, the ability to provide more RAM to the VM became necessary as newer, advanced applications took advantage of the larger RAM available. In Windows Server 2012, we move from a 64 GB limitation to 1 TB of RAM per VM.
This gives the organization the capacity to move to larger memory sizes if the hardware allows.

Patrick Catuncan
Support Escalation Engineer, High Availability, Platforms Core

Virtual NUMA

In addition to its expanded processor and memory support on hosts and for VMs, Hyper-V in Windows Server 2012 also extends support for Non-Uniform Memory Access (NUMA) from the host into the VM. NUMA allows the use of memory by processors to be optimized based on the location of the memory with respect to the processor. High-performance applications like Microsoft SQL Server have built-in optimizations that can take advantage of the NUMA topology of a system to improve how processor threads are scheduled and how memory is allocated.

In previous versions of Hyper-V, VMs were not NUMA-aware, which meant that when applications like SQL Server were run in VMs, they were unable to take advantage of such optimizations. Because NUMA was not exposed to the VM, it was possible for a VM's RAM to span NUMA nodes and access non-local memory. Using non-local memory incurs a performance penalty because another node's memory controller (CPU) has to be contacted.
But with VMs now being NUMA-aware in Windows Server 2012, the performance of applications like SQL Server can be significantly better. Note, however, that NUMA support in VMs works in Hyper-V in Windows Server 2012 only when Dynamic Memory has not been configured on the host.

How it works

Virtual NUMA presents a NUMA topology within a VM so that the guest operating system and applications can make intelligent decisions about thread and memory allocation that reflect the physical NUMA topology of the host. For example, Figure 2-9 shows a NUMA-capable four-socket host machine with four physical NUMA nodes labeled 1 through 4. Two VMs are running on this host, and two virtual NUMA nodes are presented within each VM. These virtual NUMA nodes align with physical NUMA nodes on the host based on policy. The result is that a NUMA-aware application like SQL Server installed in the guest operating system of one of these VMs can allocate its thread and memory resources as if it were running directly on a physical server that had two NUMA nodes.

Virtual NUMA and failover clustering

Virtual NUMA support also extends into high-availability solutions built using failover clustering in Windows Server 2012. This enables the failover cluster to place VMs more appropriately by evaluating the NUMA configuration of a node before moving a VM to that node, to ensure that the node is able to support the workload of the VM. This NUMA-awareness for VMs in failover clustering environments helps reduce the number of failover attempts, which results in increased uptime for your VMs. See Chapter 3 for more information concerning failover clustering enhancements in Windows Server 2012.

Learn more

For an overview of expanded processor and memory support for both hosts and VMs in Windows Server 2012, see the topic "Hyper-V Support for Scaling Up and Scaling Out Overview" in the TechNet Library.
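On the host side, you can inspect the physical NUMA topology and control whether VM memory may span nodes by using Windows PowerShell. The following is a minimal sketch using the Hyper-V module's Get-VMHostNumaNode and Set-VMHost cmdlets; the exact property names shown in the output formatting are assumptions based on the module's typical output.

```powershell
# Inspect the host's physical NUMA topology, then disable NUMA spanning so
# that each VM's memory is allocated from a single physical node where possible.
Import-Module Hyper-V

# List each physical NUMA node with its processors and available memory
Get-VMHostNumaNode | Format-Table NodeId, ProcessorsAvailability, MemoryAvailable

# Disable NUMA spanning on the host; restarting the Virtual Machine
# Management service is needed for the change to take effect
Set-VMHost -NumaSpanningEnabled $false
Restart-Service vmms
```

Disabling NUMA spanning trades flexibility for performance predictability: VMs can no longer borrow memory from remote nodes, but NUMA-aware guest workloads see consistent local-memory latency.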
FIGURE 2-9  Example of virtual NUMA at work.

Network adapter hardware acceleration

Besides the increased processor and memory support available for both hosts and VMs, Windows Server 2012 also supports various hardware acceleration features of high-end network adapter hardware to ensure maximum scalability and performance in cloud scenarios. As Figure 2-10 shows, most of these features can be enabled in the Hyper-V Settings of Hyper-V Manager, provided that your network adapter hardware supports these functionalities.

Virtual Machine Queue (VMQ)

Virtual Machine Queue (VMQ) was first available for the Hyper-V role in Windows Server 2008 R2 for host machines that had VMQ-capable network adapter hardware. VMQ employs hardware packet filtering to deliver packets from an external VM network directly to VMs using Direct Memory Access (DMA) transfers. This reduces the overhead of routing packets from the host to the VM, which helps improve the performance of the host operating
system by distributing the processing of network traffic for multiple VMs among multiple processors. Previously, all network traffic was handled by a single processor.

FIGURE 2-10  Enabling use of the hardware acceleration capabilities of high-end network adapter hardware on Hyper-V hosts.

NDIS 6.30 in Windows Server 2012 includes some changes and enhancements in how VMQ is implemented. For example, splitting network data into separate look-ahead buffers is no longer supported. In addition, support for Static VMQ has been removed in Windows Server 2012. Drivers using NDIS 6.30 automatically get access to the Dynamic VMQ capabilities that are new in Windows Server 2012.

Although in Windows Server 2008 R2 you had to use System Center Virtual Machine Manager to enable VMQ for a VM on a Hyper-V host, beginning with Windows Server 2012, you can enable VMQ directly from within the VM's settings exposed through Hyper-V Manager, as discussed previously. Windows Server 2012 also includes several new Windows PowerShell cmdlets, such as Set-NetAdapterVmq, Get-NetAdapterVmq, and Get-NetAdapterVmqQueue, that can be used to manage the VMQ properties of network adapters.
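The VMQ cmdlets named above can be sketched in use as follows. The adapter name "Ethernet 2" and the processor numbers are assumptions for illustration; substitute values appropriate to your host.

```powershell
# List VMQ capability and state for the host's physical adapters
Get-NetAdapterVmq

# Enable VMQ on a specific adapter and pin its base processor so that
# VMQ interrupt processing is spread away from CPU 0
Enable-NetAdapterVmq -Name "Ethernet 2"
Set-NetAdapterVmq -Name "Ethernet 2" -BaseProcessorNumber 2 -MaxProcessors 4

# Show which hardware queues are currently allocated to which VMs
Get-NetAdapterVmqQueue -Name "Ethernet 2"
```

Adjusting BaseProcessorNumber and MaxProcessors is a tuning decision: it controls which host processors service the adapter's queues, which matters on hosts running many network-heavy VMs.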
IPsec task offload

Internet Protocol Security (IPsec) task offload was first available for servers running Windows Server 2008 that had network adapters supporting this functionality. IPsec task offload reduces the load on the system's processors by performing the computationally intensive work of IPsec encryption and decryption using a dedicated processor on the network adapter. The result can be dramatically better use of the available bandwidth for an IPsec-enabled computer.

Beginning with Windows Server 2012, you can enable IPsec task offload directly from within the VM's settings exposed through Hyper-V Manager, as detailed previously. Windows Server 2012 also includes some new Windows PowerShell cmdlets, such as Set-NetAdapterIPsecOffload and Get-NetAdapterIPsecOffload, that can be used to manage the IPsec offload properties of network adapters.

Single-root I/O virtualization

Single-root I/O virtualization (SR-IOV) is an extension to the PCI Express (PCIe) specification that enables a device such as a network adapter to divide access to its resources among various PCIe hardware functions. As implemented in the Hyper-V role of Windows Server 2012, SR-IOV enables network traffic to bypass the software switch layer of the Hyper-V virtualization stack to reduce the I/O overhead in this layer. By assigning SR-IOV-capable devices directly to a VM, the network performance of the VM can be nearly as good as that of a physical machine. In addition, the processing overhead on the host is reduced.

Beginning with Windows Server 2012, you can enable SR-IOV directly from within the VM's settings exposed through Hyper-V Manager, as shown in Figure 2-11. Before you can do this, however, the virtual switch that the VM uses must have SR-IOV enabled on it, and you also may need to install additional network drivers in the guest operating system of the VM.
You can enable SR-IOV on a virtual switch only when you create the switch, using either the Virtual Switch Manager of Hyper-V Manager or the New-VMSwitch cmdlet in Windows PowerShell. Windows Server 2012 also includes some new Windows PowerShell cmdlets, such as Set-NetAdapterSriov, Get-NetAdapterSriov, and Get-NetAdapterSriovVf, that can be used to manage the SR-IOV properties of network adapters, such as the number of virtual functions (VFs), virtual ports (VPorts), and queue pairs for default and non-default VPorts.

Note that SR-IOV supports only 64-bit guest operating systems (specifically, Windows Server 2012 and 64-bit versions of Windows 8). In addition, SR-IOV requires both hardware and firmware support in the host system and network adapter. If you try to configure a guest operating system to use SR-IOV when either the hardware or firmware does not support it, the Network tab in Hyper-V Manager will display "Degraded (SR-IOV not operational)."
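The switch-creation requirement described above can be sketched in PowerShell as follows. The NIC name "Ethernet 3", the switch name, and the VM name "SQLVM" are assumptions for illustration.

```powershell
# Create an external virtual switch with SR-IOV enabled; this choice
# can be made only at switch-creation time
New-VMSwitch -Name "SriovSwitch" -NetAdapterName "Ethernet 3" -EnableIov $true

# Attach the VM to the switch and request a virtual function for its
# network adapter (a nonzero IovWeight enables SR-IOV for the adapter)
Connect-VMNetworkAdapter -VMName "SQLVM" -SwitchName "SriovSwitch"
Set-VMNetworkAdapter -VMName "SQLVM" -IovWeight 100

# Verify virtual function allocation on the physical adapter
Get-NetAdapterSriov -Name "Ethernet 3"
Get-NetAdapterSriovVf -Name "Ethernet 3"
```

If the hardware or firmware support is missing, the steps above will fall back to the software switch path, and the VM's Network tab reports the degraded state described in the text.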
FIGURE 2-11  SR-IOV must be configured on the virtual switch before it can be configured for the VM.

Learn more

For more information about SR-IOV in Windows Server 2012, see the topic "Overview of Single Root I/O Virtualization (SR-IOV)" in the Windows Hardware Development Center on MSDN.

Another good source of information about SR-IOV support in Windows Server 2012 is the series of blog posts by John Howard on this topic. You can find links to all of these posts on the following page of the TechNet Wiki: articles/9296.hyper-v-sr-iov-overview.aspx.

Also, be sure to see the post by Yigal Edery titled "Increased Network Performance using SR-IOV in Windows Server 2012" on the Private Cloud Blog.

For a list of Windows PowerShell cmdlets included in Windows Server 2012 that can be used to manage network adapters, see the topic "Network Adapter Cmdlets in Windows PowerShell" in the TechNet Library.
Offloaded Data Transfer (ODX)

Another performance and scalability improvement in Windows Server 2012 revolves around storage, in particular when storing VMs on storage arrays. Offloaded Data Transfer (ODX) is a feature of high-end storage arrays that uses a token-based mechanism to read and write data within and between such arrays. Using ODX, a small token is copied between the source and destination servers instead of routing the data through the host (see Figure 2-12). So when you migrate a VM within or between storage arrays that support ODX, the only thing copied through the servers is the token representing the VM file, not the underlying data in the file.

FIGURE 2-12  How Offloaded Data Transfer works in a Hyper-V environment.

The performance improvement when using ODX-capable storage arrays in cloud environments can be astounding. For example, instead of taking about three minutes to create a new 10-GB fixed VHD, the entire operation can be completed in less than a second. Other VM operations that can benefit just as much from ODX-capable storage hardware include:

■ Expansion of dynamic VHDs
■ Merging of VHDs
■ Live Storage Migration

ODX can also provide benefit in nonvirtualized environments, such as when transferring large database files or video files between servers.
Learn more

For more information about ODX support in Windows Server 2012, see the article titled "Windows Offloaded Data Transfers Overview" in the TechNet Library.

For additional information, see the topic "Offloaded Data Transfers" in the Windows Dev Center on MSDN.

Deprecated and removed networking and Hyper-V features

Certain networking and Hyper-V features have been deprecated in Windows Server 2012, which means that these features likely will not be included in future versions of Windows Server. The features that are now deprecated include:

■ The Network Driver Interface Specification (NDIS) version 5.0, 5.1, and 5.2 application programming interfaces (APIs)
■ The WMI root\virtualization namespace (replaced by the new namespace root\virtualization\v2)
■ Windows Authorization Manager (AzMan)

In addition, support for the following networking and Hyper-V features has been removed from Windows Server 2012:

■ SMB.sys (SMB functionality is now provided by the Winsock Kernel)
■ VM Chimney (also called TCP Offload)
■ Static VMQ
■ NetDMA

For more information, see the topic "Features Removed or Deprecated in Windows Server 2012" in the TechNet Library.

Support for 4 KB sector disks

Windows Server 2012 now includes support for large-sector disks. These disks represent the newest trend in the storage industry, whereby the old 512-byte sector format is being replaced by the new 4,096-byte (4 KB) format to meet demand for increased disk capacity. Hyper-V in Windows Server 2012 now supports hosting VHD files on disks that have either the native 4 KB format or the transitional 512-byte emulation (512e) mode.
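You can check which sector format a disk reports by comparing its logical and physical sector sizes. The following is a minimal sketch using the Get-Disk cmdlet from the Storage module included in Windows Server 2012; the drive letter in the fsutil comment is an example.

```powershell
# Report logical and physical sector sizes so you can distinguish
# native 512-byte disks (512/512), 512e disks (512/4096), and
# native 4K disks (4096/4096)
Get-Disk | Format-Table Number, FriendlyName, LogicalSectorSize, PhysicalSectorSize

# The same information is available for an NTFS volume via:
#   fsutil fsinfo ntfsinfo C:
# ("Bytes Per Sector" is the logical size; "Bytes Per Physical Sector"
#  is the underlying physical size.)
```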
4K sector support and the real user

With the introduction of Advanced Format storage devices, vendors found a way to increase the effectiveness of error-correction schemas for large hard drives. The change of format, however, brought certain difficulties.

All versions of Windows up to Windows 7 SP1 support native 512-byte sector reads and writes and, via a special emulation method called 512e, can work with bigger-sector drives by hiding the physical sector size behind logically presented 512-byte values.

However, some file formats are hard-coded to work with physical sectors and won't accept values other than 512 bytes. The VHD specification version 1.0 is an example of such a format. You can connect a brand-new 4 TB disk to Windows 7 or Windows Server 2008 R2, and you can put your media or data on it, but you'll fail to create a VHD on it for Hyper-V or iSCSI. Even if you copy a VHD to the drive, you will fail to use it. Windows 8 and Windows Server 2012 bring native support for the Advanced Format, as well as the updated VHD and VHDX specifications.

Finally, you can always check the physical sector size via fsutil fsinfo ntfsinfo <driveletter>.

Alex A. Kibkalo
Architect, Microsoft MEA HQ

Learn more

For more information about 4K sector support in Windows Server 2012, see the article titled "Hyper-V Support for Large Sector Disks Overview" in the TechNet Library.

Dynamic Memory improvements

Dynamic Memory was introduced for Hyper-V in Windows Server 2008 R2 as a way of enabling virtualization hosts to make more effective use of physical memory allocated to VMs running on the host. Dynamic Memory works by adjusting the amount of memory available to the VM in real time.
These adjustments in memory allocation are based on how much memory the VM needs and on how Dynamic Memory has been configured on the VM.

Dynamic Memory provides important scalability and performance benefits, especially for virtual desktop infrastructure (VDI) environments, where at any given time, a subset of the VMs running on the host tend either to be idle or to have a relatively low load. By using Dynamic Memory in such scenarios, you can consolidate greater numbers of VMs on your Hyper-V hosts. The result is that you'll need fewer hosts for provisioning virtual desktops
to your user population, which means you won't need to procure as much high-end server hardware. In other words, Dynamic Memory can help you save money.

Configuring Dynamic Memory

Dynamic Memory is enabled on a per-VM basis. You can enable and configure Dynamic Memory in the Memory section of the VM's settings in Hyper-V Manager, as shown in Figure 2-13. You also can enable and configure Dynamic Memory in Windows PowerShell by using the Set-VM cmdlet, which can be used to configure the various properties of a VM. Note that you can enable or disable Dynamic Memory only when the VM is in a stopped state.

FIGURE 2-13  Configuring Dynamic Memory for a VM.

Configuration options for Dynamic Memory for VMs on Hyper-V hosts running Windows Server 2008 R2 were as follows:

■ Startup RAM  The amount of memory needed for starting the VM
■ Maximum RAM  The maximum amount of memory that the VM can use
■ Memory buffer  An amount of memory (as a percentage of the amount that the VM actually needs to perform its workload) that can be allocated to the VM when there is sufficient memory available on the host
■ Memory weight  A parameter that determines how available memory on the host is allocated among the different VMs running on the host

Configuration options for Dynamic Memory for VMs on Hyper-V hosts running Windows Server 2012 have been enhanced in several ways.

A new configuration setting called Minimum Memory allows you to specify the minimum amount of memory that the VM can use when it is running. The reason for introducing this new setting is that Windows generally needs more memory when starting than it does when idle and running. As a result of this change, you now can specify sufficient startup memory to enable the VM to start quickly and then a lesser amount of memory (the minimum memory) for when the VM is running. That way, a VM can get some extra memory so it can start properly, and then once it's started, Windows reclaims the unneeded memory so other VMs on the host can use it if needed.

Another change in the way that Dynamic Memory can be configured in Windows Server 2012 is that now you can modify the maximum and minimum memory settings while the VM is running. In Windows Server 2008 R2, the maximum memory setting could be modified only when the VM was in a stopped state. This change gives you a new way of quickly provisioning more memory to a critical VM when needed.

Smart Paging

Specifying a minimum memory value for a VM enables Windows to reclaim some unneeded memory once the VM has started. This reclaimed memory can then be reallocated to other VMs on the host. But this raises a question: What if you start as many VMs as you can on a host, allow Windows to reclaim unneeded memory once the VMs are running, then start more VMs using the reclaimed memory, then again allow Windows to reclaim any additional unneeded memory, then try to start more VMs on the host . . . and so on?
Eventually, you reach the point where almost all the host's memory is in use and you're unable to start any more VMs. But then you find that one of your running VMs needs to be restarted immediately (for example, to apply a software update). So you try to restart the VM, and it shuts down successfully but won't start again. Why not? Because there's not enough free memory on the host to meet the Startup RAM requirement for that VM.

To prevent this kind of scenario from happening while enabling Dynamic Memory to work its scalability magic, Hyper-V in Windows Server 2012 introduces a new feature called Smart Paging (see Figure 2-14). Smart Paging allows a VM that's being restarted to temporarily use disk resources on the host as a source for any additional memory needed to restart the VM successfully. Then, once the VM has started successfully and its memory requirements lessen, Smart Paging releases the previously used disk resources because of the performance hit that such use can create.
FIGURE 2-14  Smart Paging works with Dynamic Memory to enable reliable VM restart operations.

Smart Paging is used only when a VM is being restarted and there is no free physical memory on the host and no memory can be reclaimed from other running VMs. Smart Paging is not used if you simply start a VM that's in a stopped state, or if a VM is failing over in a cluster.

Viewing Dynamic Memory at work

Sometimes small changes make a big difference in the usability of a user interface. In the Hyper-V Manager of Windows Server 2008 R2, you could monitor in real time how much physical memory was allocated to each VM that had Dynamic Memory enabled. This real-time allocation amount is called the assigned memory. In addition, you could monitor the memory demand (the total committed memory) and the memory status (whether the current amount of memory assigned to the VM as a buffer is sufficient) for the VM. The problem, though, was that these real-time measurements were displayed as columns in the Virtual Machines pane of Hyper-V Manager, which meant that you had to scroll horizontally to see them.
Hyper-V in Windows Server 2012 adds a series of tabs to the bottom center pane; by selecting the Memory tab, you can quickly view the assigned memory, memory demand, and memory status for the selected VM (see Figure 2-15).

FIGURE 2-15  Using Hyper-V Manager to display real-time changes in memory usage by a VM with Dynamic Memory enabled.

You also can use the Get-VM cmdlet in Windows PowerShell to display these same real-time measurements, as shown in Figure 2-16.

FIGURE 2-16  Using Windows PowerShell to display real-time changes in memory usage by a VM with Dynamic Memory enabled.
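The configuration and monitoring steps described above can be sketched with the Set-VM and Get-VM cmdlets. The VM name "VDI-01" and the memory values are assumptions for illustration; Dynamic Memory itself can be enabled only while the VM is stopped.

```powershell
# Enable Dynamic Memory with startup, minimum, and maximum values
Stop-VM -Name "VDI-01"
Set-VM -Name "VDI-01" -DynamicMemory `
    -MemoryStartupBytes 1GB -MemoryMinimumBytes 512MB -MemoryMaximumBytes 4GB
Start-VM -Name "VDI-01"

# Watch assigned memory, demand, and status in real time
Get-VM -Name "VDI-01" |
    Format-Table Name, MemoryAssigned, MemoryDemand, MemoryStatus
```

Because Windows Server 2012 allows the minimum and maximum values to be changed while the VM runs, you can rerun Set-VM with a larger -MemoryMaximumBytes on a live, critical VM without downtime.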
Learn more

For more information about Dynamic Memory in Windows Server 2012, see the article titled "Hyper-V Dynamic Memory Overview" in the TechNet Library.

For a list of Windows PowerShell cmdlets that can be used for managing Hyper-V in Windows Server 2012, see the topic "Hyper-V Cmdlets in Windows PowerShell" in the TechNet Library.

Virtual Fibre Channel

Existing technologies often present obstacles when you are considering the migration of server workloads into the cloud. One example: you have an AlwaysOn failover cluster instance running on SQL Server 2012 that's configured to use a Fibre Channel SAN for high performance. You'd like to migrate this workload into the cloud, but Hyper-V in Windows Server 2008 R2 does not support connecting directly to Fibre Channel from within VMs. As a result, you've postponed performing such a migration because you want to protect your existing investment in expensive Fibre Channel technology.

Virtual Fibre Channel removes this blocking issue by providing Fibre Channel ports within the guest operating system of VMs on Hyper-V hosts running Windows Server 2012. This now allows a server application like SQL Server running within the guest operating system of a VM to connect directly to LUNs on a Fibre Channel SAN.

Implementing this kind of solution requires that the drivers for your host bus adapters (HBAs) support Virtual Fibre Channel. Some HBAs from Brocade and QLogic already include such updated drivers, and more vendors are expected to follow. Virtual Fibre Channel also requires that you connect only to LUNs; you can't use a LUN as boot media for your VMs.

Virtual Fibre Channel also provides the benefit of allowing you to use any advanced storage functionality of your existing SAN directly from your VMs. You can even use it to cluster guest operating systems over Fibre Channel to provide high availability for VMs.
See Chapter 3 for more information about high-availability solutions in Windows Server 2012.

Note that VMs must use Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system if they are configured with a virtual Fibre Channel adapter.

Learn more

For more information about Virtual Fibre Channel in Windows Server 2012, see the article titled "Hyper-V Virtual Fibre Channel Overview" in the TechNet Library.
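Provisioning a virtual Fibre Channel adapter can be sketched in PowerShell as follows. The virtual SAN name "Production", the VM name "SQLVM", and the WWNN/WWPN values are assumptions for illustration; the name values would come from a physical HBA port on the host.

```powershell
# Define a virtual SAN backed by a physical HBA port on the host
New-VMSan -Name "Production" `
    -WorldWideNodeName C003FF0000FFFF00 -WorldWidePortName C003FF5778E50002

# Add a virtual Fibre Channel adapter to the VM (the VM must be stopped)
Stop-VM -Name "SQLVM"
Add-VMFibreChannelHba -VMName "SQLVM" -SanName "Production"
Start-VM -Name "SQLVM"
```

Once the VM starts, the guest operating system (with Virtual Fibre Channel-capable HBA drivers installed) sees a Fibre Channel port and can be zoned and masked to SAN LUNs like any physical server.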
SMB 3

Windows Server 2012 introduces SMB 3, version 3 of the Server Message Block (SMB) protocol, to provide powerful new features for continuously available file servers. SMB is a network file-sharing protocol that allows applications to read and write to files and to request services from server programs over a network. (Note that some documentation on TechNet and MSDN may still refer to this version as SMB 2.2.)

The improvements in SMB 3 are designed to provide increased performance, reliability, and availability in scenarios where data is stored on file shares. Some of the new features and enhancements in SMB 3 include:

■ SMB Direct  Enables the use of network adapters capable of Remote Direct Memory Access (RDMA), such as iWARP, InfiniBand, or RoCE (RDMA over Converged Ethernet), which can function at full speed and low latency with very little processor overhead on the host. When such adapters are used on Hyper-V hosts, you can store VM files on a remote file server and achieve performance similar to that of files stored locally on the host.

SMB Direct makes possible a new class of file servers for enterprise environments, and the new File Server role in Windows Server 2012 demonstrates these capabilities in full. Such file servers experience minimal processor utilization for file storage processing and can use high-speed RDMA-capable NICs, including iWARP, InfiniBand, and RoCE. They can provide remote storage solutions equivalent in performance to Fibre Channel, but at a lower cost. They can use converged network fabrics in datacenters and are easy to provision, manage, and migrate.

■ SMB Directory Leasing  Reduces round-trips from client to server because metadata is retrieved from a longer-living directory cache. Cache coherency is maintained because clients are notified when directory information changes on the server.
The result of using SMB Directory Leasing can be improved application response times, especially in branch office scenarios.

■ SMB Encryption  Enables end-to-end encryption of SMB data to protect network traffic from eavesdropping when travelling over untrusted networks. SMB Encryption can be configured either on a per-share basis or for the entire file server. It adds no cost overhead and removes the need for configuring IPsec and using specialized encryption hardware and WAN accelerators.

■ SMB Multichannel  Allows aggregation of network bandwidth and provides network fault tolerance when multiple paths are available between the SMB client and the SMB server. The benefit of this is that it allows server applications to take full advantage of all available network bandwidth and become more resilient to network failure.
SMB Multichannel configures itself automatically by detecting and using multiple network paths when they become available. It can use NIC teaming for failover but doesn't require that capability to work. Possible configurations include:

■ A single NIC using Receive-Side Scaling (RSS), which enables more processors to process the network traffic
■ Multiple NICs with NIC Teaming, which allows SMB to use a single IP address per team
■ Multiple NICs without NIC Teaming, where each NIC must have a unique IP address; this configuration is required for RDMA-capable NICs

■ SMB-specific Windows PowerShell cmdlets  Provides Windows PowerShell cmdlets and WMI objects to manage SMB file servers and SMB file shares.

■ SMB Scale Out  Allows you to create file shares that provide simultaneous access to data files, with direct I/O, through all the nodes in a file server cluster. The result is improved use of network bandwidth, load balancing of the file server clients, and optimized performance for server applications. SMB Scale Out requires using CSV version 2, which is included in Windows Server 2012, and lets you seamlessly increase available bandwidth by adding cluster nodes.

■ SMB 3 Secure Dialect Negotiation  Helps protect against man-in-the-middle attacks in which eavesdroppers attempt to downgrade the initially negotiated dialect and capabilities between an SMB client and an SMB server.

■ SMB Transparent Failover  Allows administrators to perform hardware or software maintenance of nodes in a clustered file server without interruption to server applications storing their data on file shares.
If a hardware or software failure occurs on a cluster node, SMB clients reconnect transparently to another cluster node with no interruption for server applications storing data on these shares.

SMB Transparent Failover supports both planned failovers (such as maintenance operations) and unplanned failovers (for example, due to hardware failure). Implementing this feature requires the use of failover clustering, that both the server running the application and the file server are running Windows Server 2012, and that the file shares on the file server have been shared for continuous availability.

■ VSS for SMB file shares  Allows SMB clients and SMB servers supporting SMB 3 to take advantage of the Volume Shadow Copy Service (VSS) for SMB file shares.

The implementation of SMB 3 in Windows Server 2012 also includes new SMB performance counters that provide detailed, per-share information about throughput, latency, and I/O operations per second (IOPS). These counters are designed for server applications like Hyper-V and SQL Server, which can store files on remote file shares, enabling administrators to analyze the performance of the file shares where server application data is stored.
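Several of the features above come together when you provision a share for server application data. The following is a minimal sketch using the SMB cmdlets in Windows Server 2012; the share name, path, and account are assumptions for illustration.

```powershell
# Confirm that SMB Multichannel is enabled on the server
Get-SmbServerConfiguration | Format-List EnableMultiChannel

# Create an encrypted, continuously available share for application data
New-SmbShare -Name "VMStore" -Path "D:\VMStore" `
    -ContinuouslyAvailable $true -EncryptData $true `
    -FullAccess "CONTOSO\HyperVHosts$"

# From an SMB client, inspect the active multichannel connections
Get-SmbMultichannelConnection
```

Marking the share continuously available is what enables SMB Transparent Failover for clients of a clustered file server, while -EncryptData turns on per-share SMB Encryption without any IPsec configuration.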
Benefits for Hyper-V

These new capabilities of SMB 3 mean that Hyper-V hosts can store VM files, including the configuration, VHDs, and snapshots, on file shares on Windows Server 2012 file servers. You can implement this kind of solution for stand-alone Hyper-V servers, and you also can implement it for clustered Hyper-V servers where file storage is used as shared storage for the cluster. The benefits that enterprises can experience from these scenarios include simplified provisioning, management, and migration of VM workloads; increased flexibility; and reduced cost.

SMB and Windows PowerShell

You can view and manage many SMB 3 capabilities by using Windows PowerShell. To see what cmdlets are available for doing this, you can use the Get-Command cmdlet, as shown in Figure 2-17.

FIGURE 2-17  Windows PowerShell cmdlets for managing SMB features and infrastructure.

For example, Figure 2-18 shows how to use the Get-SmbServerConfiguration cmdlet to determine whether SMB Multichannel is enabled on a file server running Windows Server 2012.

Learn more

For more information on the new SMB features introduced in the Windows Server 2012 file server, see the topic "Server Message Block Overview" in the TechNet Library.

For an analysis of the performance capabilities of the new SMB file-sharing protocol over 10 Gb Ethernet interfaces, see the blog post titled "SMB 2.2 is now SMB 3.0" on the Windows Server Blog.
FIGURE 2-18  Viewing the configuration settings of the SMB server.

For more information about the new Windows PowerShell cmdlets for managing SMB file servers and SMB file shares, see the post titled "The basics of SMB PowerShell, a feature of Windows Server 2012 and SMB 3.0" on Jose Barreto's blog at josebda/archive/2012/06/27/the-basics-of-smb-powershell-a-feature-of-windows-server-2012-and-smb-3-0.aspx.

For more information about SMB Multichannel, see the post titled "The basics of SMB Multichannel, a feature of Windows Server 2012 and SMB 3.0" on Jose Barreto's blog.

For more information about VSS for SMB File Shares, see the post titled "Windows Server 2012 and SMB 3.0—VSS for SMB File Shares" on Jose Barreto's blog.

For more information about SMB Encryption, see the post by Obaid Farooqi titled "Encryption in SMB3" on the Microsoft Open Specifications Support Team Blog.

For more information about SMB 3 Secure Dialect Negotiation, see the post by Edgar Olougouna titled "SMB3 Secure Dialect Negotiation" on the Microsoft Open Specifications Support Team Blog.
Windows Server 2012: Enabling the "Storage LAN"

Everyone is familiar with the concept of a SAN: typically a very expensive disk array attached to some very expensive Fibre Channel switches, with one or more Fibre Channel cables running from the switch to a fairly expensive dual-port HBA. SANs have long been among the most expensive and difficult things to manage in the datacenter. Enterprise organizations invest heavily in storage and invest heavily in storage training. Your average Windows administrator is not equipped with the skills required to manipulate and design enterprise storage, yet every server of consequence is typically directly connected to enterprise storage. Configuring HBAs, LUN mapping, and similar tasks is often per-server, manual, and reserved for the select few who have the extra training and experience.

Virtualization improves this, as long as your servers either use iSCSI (often regarded as a poor man's SAN) or are self-contained in a VHD. Mapping SAN storage directly into VMs is not trivial, quick, or easy.

Windows Server 2012, with the introduction of the continuously available file server and SMB 3, changes this. It allows Windows administrators to disconnect themselves from the traditional SAN and create a new breed of "Storage LAN." Consider this example. In the past, when you deployed a new SQL Server instance, you did one of the following:

■■ Deploy to a physical host. Install an HBA. Create a SAN LUN. Run the fiber to the server. Map the LUN to the host, and then use the storage for SQL Server.
■■ Deploy to a VM. Store the VM in a VHD, which was stored most often on a CSV volume, which was on a LUN previously mapped to the host.
■■ Deploy to a VM. Install an HBA. Create a SAN LUN. Run the fiber to the server. Map the LUN to the host, and then pass the LUN to the VM as a pass-through disk.

Windows Server 2012 changes this by allowing you to replace much of your storage infrastructure with traditional Ethernet. LUNs are replaced with file shares. Here's what this new architecture looks like.

You still have your high-end storage solution; however, instead of running complex storage fabric to every host, you run the storage fabric to a set of high-performance file servers. These file servers present the storage as highly available file shares to be used by any server.

Next, you create an Ethernet segment between your storage file servers and your application servers, leveraging technologies such as 10 Gigabit Ethernet (which is standard on most high-end servers) or, if you need extremely fast performance (and your storage arrays can keep up with it), RDMA.
When new servers are brought online, instead of running fiber, provisioning LUNs, and involving your storage administrators, you can simply provision a share or use an existing one. This change allows a Windows administrator to use the skills and tools they already have, and are familiar with, to present highly available, high-performance storage to any application server. You can deploy application workloads such as SQL Server, and even Hyper-V, which leverage the performance and reliability of enterprise SAN storage without needing to be directly connected to the enterprise SAN fabric.

With technologies such as transparent failover, cluster-aware updating, and Storage Spaces with thin provisioning, you can now plan for what you need tomorrow, but deploy and manage with what you have today.

Corey Hynes
Architect, holSystems

Simplified VM import

The process used for importing VMs onto Hyper-V hosts has been improved in Windows Server 2012. The goal of these improvements is to prevent configuration problems that can stop the import process from completing successfully.

In Hyper-V on Windows Server 2008 R2, when you imported a VM onto a host, the VM and all its files were copied to the host, but they weren't checked for possible configuration problems. Hyper-V on Windows Server 2012 now validates the configuration of VM files when they are imported, to identify potential problems and, if possible, resolve them.

An additional enhancement to the process of importing VMs in Hyper-V on Windows Server 2012 is that you can now import a VM after manually copying the VM's files to the host. In other words, you don't have to export a VM from one host before you can import it into another host; you can simply copy the files from the first host to the second one and then initiate the import process.

Importing of VMs

Windows Server 2012 has improved the VM import process. This new process helps you resolve configuration problems that would otherwise have prevented you from importing the VM. The Windows Server 2012 improvements to importing a VM also have improved the reliability of importing VMs to other Hyper-V host computers.

The new wizard detects and fixes potential problems, such as hardware or file differences, that might exist when a VM is moved to another host. The import
wizard detects and fixes more than 40 types of incompatibilities. This new wizard also creates a temporary copy of the VM configuration file as an added safety step.

With Windows Server 2008 R2, to import a VM to a different host, you first needed to export it; and to export the VM, you first needed to turn it off. This forced administrators to schedule downtime before exporting the VM. Now, with Windows Server 2012, you can simply copy the VM's files manually to the new host, run through the Import Virtual Machine wizard on the new Windows Server 2012 host, point it to the newly copied VM, and voila! You have imported it.

In conclusion, the Windows Server 2012 Import wizard is a simpler, better way to import or copy VMs between Hyper-V hosts.

Keith Hill
Support Escalation Engineer, High Availability, Platforms Core

Learn more

For more information on the improved VM import capabilities in Windows Server 2012, see the topic "Simplified Import Overview" in the TechNet Library.

New virtual disk format

VHDX is the new default format for virtual hard disks in Hyper-V in Windows Server 2012. This new format is designed to replace the older VHD format and has advanced capabilities that make it the ideal virtual disk format going forward for virtualized workloads.
Some of the features of this new format include the following:

■■ It supports virtual disks up to 64 TB in size, so you'll be able to use it to virtualize even the largest database workloads and move them into the cloud.
■■ It aligns to megabyte boundaries to support large-sector disks (4 KB sector disks), so you can take advantage of new low-cost commodity storage options.
■■ It uses large block sizes to provide better performance than the old format could provide.
■■ It includes a new log to protect against corruption due to power failure, which means the new format has much greater resiliency than the old format.
■■ You can embed custom user-defined metadata into VHDX files; for example, information about the service pack level of the guest operating system on the VM.

Learn more

For more information about the new VHDX format in Windows Server 2012, see the article titled "Hyper-V Virtual Hard Disk Format Overview" in the TechNet Library.
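As a hedged sketch of working with the new format (the file paths here are hypothetical), the Hyper-V PowerShell module lets you create a VHDX directly or convert an existing VHD:

```powershell
# Create a 2-TB dynamically expanding VHDX; the old VHD format
# topped out at 2,040 GB, while VHDX supports up to 64 TB
New-VHD -Path D:\VHDs\Data01.vhdx -SizeBytes 2TB -Dynamic

# Convert an existing VHD to VHDX to gain the larger size limit,
# the corruption-protection log, and large-block performance
Convert-VHD -Path D:\VHDs\Legacy.vhd -DestinationPath D:\VHDs\Legacy.vhdx
```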
Business continuity for virtualized workloads CHAPTER 2 73

Business continuity for virtualized workloads

No cloud solution would be workable without a viable disaster recovery solution. Virtualized workloads owned by business units in large enterprises or by customers of cloud hosting providers must be backed up regularly to prevent loss of continuity should a disaster occur on the provider's infrastructure. This chapter ends with a look at Hyper-V Replica, a new feature of Hyper-V in Windows Server 2012 that helps ensure that your cloud solutions can be recovered in the event of a disaster.

Hyper-V Replica

While many third-party backup solutions can be used for backing up and recovering VMs running on Hyper-V hosts, the Hyper-V Replica feature in Windows Server 2012 provides an in-box business continuity solution for cloud environments that can efficiently, periodically, and asynchronously replicate VMs over IP-based networks, including slow WAN links, and across different types of storage subsystems. The Hyper-V Replica feature does not require any shared storage or expensive storage array hardware, so it represents a low-cost solution for organizations looking to increase the availability of their virtualized workloads and ensure that these workloads can be recovered quickly in the event of a disaster.

Hyper-V, together with Failover Clustering, allows VMs to maintain service availability by moving them between nodes within the datacenter. By contrast, Hyper-V Replica allows VMs to maintain availability across datacenters, where the node hosting the replica is located at a physically separate site. Hyper-V Replica provides host-based replication that allows for failover to a secondary datacenter in the event of a disaster. It's an application-agnostic solution because it operates at the VM level, regardless of what guest operating system or applications are installed in the VM. It's a storage-agnostic solution because you can use any combination of SAN, direct-attached storage (DAS), or SMB storage for storing your VMs. It also works in both clustered and nonclustered environments, and you can even replicate from a host on a shared cluster to a remote, stand-alone replica host. And it works with Live Migration and Live Storage Migration.

Typical cases for using Hyper-V Replica might include:

■■ Replicating VMs from head office to branch office, or vice versa, in large and mid-sized business environments
■■ Replication between two datacenters owned by a hosting provider to provide disaster recovery services for customers
■■ Replication from the premises of small and mid-sized businesses to their hosting provider's datacenter
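Hyper-V Replica can be driven entirely from Windows PowerShell, as discussed later in this chapter. A minimal sketch of enabling replication for a single VM might look like the following; the VM and server names are hypothetical, and both hosts must already be configured as replica servers:

```powershell
# Enable replication of a VM to a replica server using Kerberos over HTTP
Enable-VMReplication -VMName "SQL01" `
    -ReplicaServerName "replica.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos

# Kick off the initial copy of the VM to the replica server
Start-VMInitialReplication -VMName "SQL01"

# Later, check replication health and statistics
Measure-VMReplication -VMName "SQL01"
```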
Guidance on configuring the full life cycle of a replicated VM

I've written this particular sidebar because our customers often have not done deep enough planning for their replica scenario. Essentially, my goal is to remind them that one-way replication is easy to set up and works great, but once you have recovered a server on your destination, at some point you may choose to replicate it back to the original location. It is much better to have both ends of the replication relationship enabled as replica servers.

When planning your Hyper-V Replica scenario, you should consider the configuration that properly supports the full life cycle of your replicated VM. Take into consideration that both endpoints of your replication relationship should be configured as replica servers. Replicating VMs from your main office to your branch office is essentially a one-way configuration. If you plan on replicating VMs back to your main office, you need to ensure that those Hyper-V servers are configured as replica servers as well.

Consider the step-by-step process you will need to test a replicated VM as part of a recovery effort. By testing a VM prior to failing it over, you can ensure that you have chosen the appropriate recovery point. The replica server will first verify that the original VM is not available before allowing a failover copy to be brought online.

Once you have recovered all your VMs, you will also need to consider the steps required to bring the services online. In complex environments, you will likely need additional effort to coordinate the order in which you bring VMs and services online. There may be additional Domain Name System (DNS) or firewall changes required to fully return service availability.

Finally, after you have resolved the failure in the primary datacenter that required you to bring replicated VMs online, you likely will want to replicate your VMs back to the primary site. It makes sense to configure both endpoints of your replication to be enabled as replica servers as part of their deployment. For example, if your primary site was taken offline by blizzard-related power loss for several days, you may choose to bring the VMs online on your replica server. When your primary datacenter is back online, you would likely plan to replicate VMs back to it. Of course, you could enable servers as replicas fairly quickly, but proper planning will minimize disaster-related issues and mistakes, especially as you scale out your Hyper-V infrastructure.

Colin Robinson
Program Manager, Enterprise Engineering Center (EEC)
Implementing Hyper-V Replica

Hyper-V Replica can be enabled, configured, and managed either from the GUI or by using Windows PowerShell. Let's briefly look at how to enable replication of a VM by using Hyper-V Manager. Begin by selecting the Replication Configuration section in Hyper-V Settings on the hosts that you plan on replicating VMs to or from. Select the Enable This Computer As A Replica Server check box to enable the host as a replica server, and configure the authentication, authorization, and storage settings that control the replication process.

Once you've performed this step on both the primary and replica servers (the primary server hosts the virtualized production workloads, whereas the replica server hosts the replica VMs for the primary server), you then can enable replication on a per-VM basis. To do this, right-click a VM in Hyper-V Manager and select Enable Replication.
When the Enable Replication wizard launches, specify the name of the replica server that you want to replicate the selected production VM to. Then specify the connection parameters that define the port and authentication method used for performing replication.
Continue through the wizard until you reach the Choose Initial Replication Method page, where you specify how and when the VM first will be copied over to the replica server.
Once you've completed the wizard and clicked Finish, replication will begin. You can view the replication process as it takes place by selecting the Replication tab in the bottom-central pane of Hyper-V Manager.

You also can use the Measure-VMReplication cmdlet in Windows PowerShell to view the success or failure of the replication process. To view all the Windows PowerShell cmdlets for managing the Hyper-V Replica feature, use the Get-Command cmdlet.

Guidance on configuring the Hyper-V Replica Broker cluster resource

Customers who have tested Hyper-V Replica in my Enterprise Engineering Center (EEC) lab at Microsoft have often been confused by the following issue. Basically, they successfully install the Hyper-V Replica Broker to the cluster, but they don't find it obvious that they also have to configure the broker. This sidebar describes the necessary steps for that configuration.

After configuring your Hyper-V cluster as a Hyper-V Replica server, you will have a new cluster resource displayed in your Failover Cluster Manager console. The next step is to configure this new cluster resource. If you look near the bottom of Failover Cluster Manager, you will see that your new cluster resource is listed and is Online.
Now we have to configure the replication settings to be used by the cluster. Within Failover Cluster Manager, highlight the newly created broker, select Resources at the bottom, and choose Replication Settings. These settings are configured once here, and all nodes of the cluster will share this replication configuration.

The options you configure for the broker (and, in turn, the whole cluster) are exactly like setting up a single server. First, select the Enable This Cluster As A Replica Server check box.
The same choices are available for the cluster, such as Authentication And Ports, as well as Authorization And Storage. Make your desired configuration here and click OK. You are now done with the host configuration of settings for a Replica cluster.

Colin Robinson
Program Manager, Enterprise Engineering Center (EEC)

Learn more

For more information about Hyper-V Replica, see the article titled "Hyper-V Replica Feature Overview" in the TechNet Library.

For more information about Hyper-V Replica scenarios, see the article titled "Maintaining Business Continuity of Virtualized Environments with Hyper-V Replica: Scenario Overview" in the TechNet Library.

For a detailed look at how Hyper-V Replica works and how to implement it, see "Hyper-V Replica Overview" in the TechNet Library, and the list of related resources at the end of the article.
For information about Windows PowerShell cmdlets for managing Hyper-V Replica, see the post titled "Hyper-V Replica PowerShell CMDLETS" on the blog.

For information about another kind of business continuity solution for Windows Server 2012, see "Microsoft Online Backup Service Overview" in the TechNet Library.

There's more

In this short book, we can't cover every reason why Windows Server 2012 provides the ideal foundation for building your cloud solutions, and one thing we haven't talked about yet is security. For example, Windows Server 2012 enables identity federation using Active Directory Federation Services (AD FS), which provides a common identity framework between on-premises and cloud environments. Using AD FS like this provides easier access to cloud resources and single sign-on (SSO) for both on-premises and cloud-based applications. Windows Server 2012 also includes support for cross-premises connectivity between on-premises servers and "servers in the cloud" hosted by IaaS providers. It does this by providing virtual private network (VPN) site-to-site functionality using the remote access server role.

More security enhancements in Windows Server 2012 can make it easier for you to build your cloud infrastructure, but we need to move on, so let's end with a sidebar from a couple of our experts at Microsoft.

Embedding security into your private cloud design plan

One of the most common misconceptions seen in the industry today is thinking that the private cloud obviates security concerns, and that security therefore isn't a critical design and planning consideration. There are many security challenges specific to cloud computing, based on the essential cloud characteristics defined by the National Institute of Standards and Technology (NIST). According to the NIST definition of cloud computing, the core characteristics of a cloud are resource pooling, on-demand self-service, rapid elasticity, broad network access, and measured service.

In a private cloud environment, there are important security concerns related to each of these essential cloud characteristics, and you need to address those concerns during the design and planning phase. Otherwise, security won't be embedded into the project from a foundational perspective. If you don't integrate security into every aspect of your private cloud architecture, you'll increase the chances that later on, you will find breaches that were not predicted due to a lack of due diligence in planning.
Some cloud architects may think that these essential characteristics of cloud computing only apply to a public cloud infrastructure; this is not true. Large enterprises already have network segmentation and different levels of authentication and authorization according to organizational structure or business unit. When evolving from a physical datacenter to a private cloud, these core security design points need to be in place: segmentation, isolation, and security across organizational and divisional boundaries.

During the private cloud design and planning phase, you need to be sure to address the following security concerns as they relate to the essential security characteristics of the private cloud.

Resource pooling

When the cloud characteristic under consideration is resource pooling, the security concern may be that the consumer (user/tenant) requires that the application is secure and that the data is safe even in catastrophic situations. Possible strategies for addressing these concerns can include:

■■ Implementing data isolation between tenants
■■ Applying authentication, authorization, and access control (AAA)
■■ Using the role-based access control (RBAC) model

On-demand self-service

When the cloud characteristic under consideration is on-demand self-service capabilities, the security concern may be control of who has the authority to demand, provision, use, and release services from and back to the shared resource pool. Possible strategies for addressing these concerns can include:

■■ Implementing least privilege and RBAC
■■ Implementing a well-documented cleanup process
■■ Explicitly addressing how cleanup is accomplished in the SLA you have with private-cloud tenants

Rapid elasticity

When the cloud characteristic under consideration is rapid elasticity, the security concern may be that rogue applications can execute a denial of service (DoS) attack that destabilizes the datacenter by requesting a large amount of resources. Possible strategies for addressing these concerns can include:

■■ Monitoring resources to alert on and prevent such scenarios
■■ Implementing policy-based quotas
Broad network access

When the cloud characteristic under consideration is broad network access, the security concern may be that users will have access to private cloud applications and data from anywhere, including unprotected devices. Possible strategies for addressing these concerns can include:

■■ Implementing endpoint protection
■■ Making sure, as part of the defense-in-depth approach, that security awareness training covering this scenario is in place
■■ Applying AAA

It is the cloud architect's responsibility to bring these concerns to the table during the planning and design phase of the project.

Note: For comprehensive coverage of Microsoft's private cloud security architecture, please see the article "A Solution for Private Cloud Security" on the TechNet Wiki at a-solution-for-private-cloud-security.aspx.

Yuri Diogenes
Senior Technical Writer, SCD iX Solutions/Foundations Group – Security

Tom Shinder
Principal Knowledge Engineer, SCD iX Solutions Group – Private Cloud Security

Up next

The next chapter will examine how you can use Windows Server 2012 as a highly available, easy-to-manage multi-server platform that provides continuous availability, ensures cost efficiency, and provides management efficiency for your organization's move into the cloud.
CHAPTER 3

Highly available, easy-to-manage multi-server platform

■■ Continuous availability  88
■■ Cost efficiency  130
■■ Management efficiency  140
■■ Up next  157

This chapter introduces some new features and capabilities of Windows Server 2012 that can help make your IT operations more efficient and cost-effective. With enhancements that help ensure continuous availability and improvements that make server management more efficient and help drive down costs, Windows Server 2012 provides a highly available and easy-to-manage multi-server platform that is ideal for building the infrastructure for your organization's private cloud.

Understanding Microsoft's high-availability solutions

Windows Server and other Microsoft products offer a wide range of high-availability options, affecting everything from infrastructure to applications. Here is a brief overview of the different technologies, with guidelines for when to use each of them in order to eliminate every single point of failure, providing the datacenter with continuous availability through both planned and unplanned downtime.

Hardware

Before implementing high availability for servers and services, it is important to ensure that the datacenter and physical infrastructure can also maintain availability when any single component fails or must be taken offline for maintenance. The datacenter itself should have backup power sources, such as generators or batteries, and every server should have redundant power supplies connected to separate power strips on different circuits.
There should be redundancy throughout the network fabric, including switches, routers, and hardware load balancers. Network interface cards (NICs) should be teamed, and there should be duplicate paths for all networks, including any connections to the Internet.

The storage should use redundant array of independent disks (RAID) technologies to recover from the loss of any disk, and the data should be replicated or mirrored to a secondary array. Multipath I/O (MPIO) should be deployed to provide multiple communication routes to the storage. If Internet Small Computer Systems Interface (iSCSI) storage is used, the iSCSI target itself should be clustered to reduce downtime.

Even when every component in the datacenter is highly available, it is important to realize that a natural disaster could take out the entire site, so also consider having a secondary datacenter for disaster recovery using multisite clustering or replication.

Server infrastructure

Once the datacenter is prepared, it is important to ensure that all critical server infrastructure components are highly available. First, make sure that there are multiple instances of each server role to provide redundancy for all services.

Within Active Directory, there are different high-availability options for different roles. Active Directory supports backup and restore, multisite load balancing, and recovery of deleted objects through the Active Directory Recycle Bin. Additionally, read-only domain controllers can be deployed in less secure locations or branch offices. Active Directory Certificate Services (AD CS) supports Failover Clustering, and Active Directory Federation Services (AD FS) supports cross-site replication and SQL mirroring for its database. Active Directory Lightweight Directory Services (AD LDS) also supports cross-site replication, as well as backup and restore. The Active Directory Rights Management Services (AD RMS) servers use SQL high availability for their database (either Failover Clustering or log shipping) and Network Load Balancing (NLB) for the licensing server.

The Domain Name System (DNS) uses a round-robin algorithm to send clients to different DNS servers. This provides simple load balancing by presenting redundant servers.

NLB is a software-based solution that provides high availability and scalability by distributing traffic to multiple redundant servers. It is used for server roles with identical data on each node that does not regularly change, such as a website hosted on Internet Information Services (IIS). If a node becomes unavailable, clients can be redirected automatically to a different server that contains the same information.
Failover Clustering is the high-availability solution for most other server roles. It works by interconnecting multiple servers that monitor each other and maintain the data for the service on shared storage that is accessible by every node. The services and virtual machines (VMs) can move between different servers while seeing the same information on the storage area network (SAN). Automatic failure detection and recovery minimizes unplanned downtime due to crashes, and failover and live migration capabilities reduce or eliminate downtime during planned maintenance. Some of the workloads that Failover Clustering is recommended for include DFS Namespace Server, DHCP Server, Distributed Transaction Coordinator, Exchange, File Server, Hyper-V, Hyper-V Replica Broker, iSCSI Target Server, iSNS Server, Message Queuing, SQL Server, and WINS. Additionally, Failover Clustering is extensible, so it is possible to cluster any generic application, script, or service, and advanced integration is possible for almost any application by writing a custom resource dynamic-link library (DLL).

Microsoft Hyper-V primarily uses Failover Clustering as its high-availability solution, but VMs can also maintain service continuity through NLB, replication, or backup and restore. In Windows Server 2012, Hyper-V Replica provides in-box replication of VMs to other Hyper-V hosts in the environment for disaster recovery. It is even possible to support Failover Clustering within Hyper-V VMs, which is known as "guest clustering." The individual VMs form the different cluster nodes, and applications running inside those VMs can move to different nodes, providing a great high-availability option when doing maintenance of a VM, such as adding memory or updating the guest operating system.

Server applications

Several of the most common enterprise applications have their own high-availability solutions. Some of them use Failover Clustering, while others have their own implementations. For server roles that do not have a native solution, remember that it is always possible to place the application inside a VM that is running on a failover cluster. The Windows Server 2012 Failover Clustering feature of VM Monitoring allows the cluster to monitor the health of any service inside a VM, allowing it to restart the service, restart the VM, or move the VM to a different node in the cluster, while alerting the administrator that there is a problem.

File servers use traditional Failover Clustering and have been enhanced in Windows Server 2012 with the Continuously Available File Server technology, which presents client access points across multiple nodes. Additionally, there is the DFS Replication service, which copies files to different locations, providing redundancy.

Microsoft's IIS web server supports Failover Clustering for the FTP and WWW roles, and NLB for most other roles. Additionally, IIS has the Application Request Routing
(ARR) module, which performs load balancing for Hypertext Transfer Protocol (HTTP) traffic; the ARR component itself can be made highly available by using NLB.

Microsoft Exchange Server, Microsoft Lync Server, Microsoft SQL Server, and Microsoft SharePoint Server also have a variety of high-availability options for both planned and unplanned downtime. Each of the System Center 2012 components also has a rich high-availability story, not only being made highly available itself, but also offering high-availability features and enhancements to Windows Server Failover Clustering and NLB.

Additionally, third-party backup and restore technologies, along with replication solutions, should be considered. Backup and restore provides high availability of data by keeping multiple copies of the information that can be recovered when needed; however, some data loss can occur if a failure happens after the last time the data was backed up. Replication continually pushes copies of the data to other servers or locations, so data can be accessed if the primary location becomes unavailable.

Conclusion

There are many different high-availability solutions to select from, ranging from the hardware to infrastructure roles to server applications to management utilities. Always remember to provide redundancy and eliminate every single point of failure; then it is possible to have continuous availability for your datacenter and its services.

Symon Perriman
Technical Evangelist

Continuous availability

Guaranteeing continuous availability of applications and services is essential in today's business world. If users can't use the applications they need, the productivity of your business will be affected. And if customers can't access the services your organization provides, you'll lose their business.
Although previous versions of Windows Server have included features like Failover Clustering and NLB that help you ensure the availability of business-critical applications and services, Windows Server 2012 adds a number of improvements that can greatly help ensure application uptime and minimize service disruptions.

Key availability improvements include enhancements to Failover Clustering, such as greater scalability, simplified updating of cluster nodes, and improved support for guest clustering. The new SMB 3.0 Transparent Failover capability lets you perform maintenance on your cluster nodes without interrupting access to file shares on your cluster. Storage Migration now allows you to transfer the virtual disks and configuration of VMs to new locations while the VMs are still running. Windows NIC Teaming now provides an in-box solution for implementing fault tolerance for the network adapters of your servers. Improvements to Chkdsk greatly reduce potential downtime caused by file system corruption on mission-critical servers. Easy conversion between installation options provides increased flexibility for how you configure servers in your environment, whereas Features On Demand lets you install Server Core features from a remote repository instead of the local disk. And DHCP failover improves resiliency by allowing you to ensure continuous availability of Dynamic Host Configuration Protocol (DHCP) services to clients on your network.

In the following sections, we'll dig deeper into each of these capabilities and features. And we'll continue to benefit from the insights and tips of insiders working at Microsoft and of select experts who worked with Windows Server 2012 during the early stages of the product release cycle.

Combining host and guest clustering for continuously available workloads

A major investment area in Windows Server 2012 is the notion of continuous availability. This refers to the combination of infrastructure capabilities that enable VMs and workloads to remain online despite failures in compute, network, or storage infrastructure. Designing infrastructure and workloads for continuous availability requires analyzing and providing resiliency for each layer of the supporting architecture. The physical compute, storage, and network architecture providing the private cloud fabric is the first area of interest. Next, guest clustering, or clustering of the VMs providing the workload functionality, is an additional layer of resiliency that can be used.
Together, these technologies can be deployed toprovide continuous availability during both planned and unplanned downtime ofthe host infrastructure or the guest infrastructure.At the fabric level or physical infrastructure layer, a Windows Server 2012­infrastructure provides continuous availability technologies for compute, network,and storage. For storage, Windows Server 2012 introduces Storage Spaces, a newtechnology for providing highly available storage using commodity hardware.Using either Storage Spaces or SAN-based storage, Windows Server 2012 alsointroduces Scale-Out File Server Clusters. With Scale-Out File Server Clusters, two ormore clustered file servers use Cluster Shared Volumes Version 2 (CSV2) to enablea single share to be scaled out across file servers, providing very high-speed accessand high availability of the file share. This file share can be used as the storagelocation for VMs because Windows Server 2012 supports storing VMs on SMB 3file shares. With the combination of Storage Spaces, Scale-Out File Clusters, andSMB 3 ­Multi-Channel access, any component of the Windows Server 2012 storageinfrastructure could fail, but access to the file share or VM will be maintained. Thiscombination provides continuous availability of the storage infrastructure.
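The storage stack just described can be assembled end to end with Windows PowerShell. The following is only a sketch; the pool, disk, volume, and share names are hypothetical, and it assumes a server with poolable physical disks and the File Server role installed:

```powershell
# Create a storage pool from the physical disks that are available for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Carve a mirrored virtual disk out of the pool for resiliency
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VMStore" `
    -ResiliencySettingName Mirror -UseMaximumSize

# After initializing and formatting the disk (volume V: here), share it for
# Hyper-V, marking the share continuously available so that SMB Transparent
# Failover applies
New-SmbShare -Name "VMs" -Path "V:\VMs" -ContinuouslyAvailable $true `
    -FullAccess "CONTOSO\Hyper-V-Hosts$"
```

The resulting \\server\VMs share can then be specified as the storage location when creating VMs in Hyper-V.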
For the network infrastructure, Windows Server 2012 introduces built-in network adapter teaming that supports load balancing and failover (LBFO) for servers with multiple network adapters. Regardless of the brands or speeds of the network adapters in your server, Windows Server 2012 can take those adapters and create a network adapter "team." The team can then be assigned an IP address and will remain connected, provided that at least one of the network adapters has connectivity. When more than one network adapter is available in a team, traffic can be load-balanced across them for higher aggregate throughput. Use of NIC teaming at the host level, combined with redundancy of the switch/routing infrastructure, provides continuous availability of the network infrastructure.

For the compute infrastructure, Windows Server 2012 continues to use Windows Failover Clustering with Hyper-V host clusters. The scalability of Hyper-V hosts and clusters has been increased dramatically, up to 64 nodes per cluster. Host clusters enable the creation of highly available virtual machines (HAVMs). Hyper-V host clusters use the continuously available storage infrastructure to store the HAVMs. For planned downtime, HAVMs (as well as non-HA VMs) can be live-migrated to another host with no downtime for the VMs. For unplanned downtime, a VM is moved to or booted on another node in the cluster automatically. Clusters can be updated automatically using Cluster-Aware Updating, which live-migrates all VMs off the node to be updated so that there is no downtime during host maintenance and updating. Together, these technologies enable continuous availability of the compute and virtualization infrastructure.

Although these technologies provide a robust physical infrastructure and virtualization platform, the key availability requirement is for the workloads being hosted.
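The in-box teaming described above is exposed through the NetLbfo cmdlets. A minimal sketch, with hypothetical team and adapter names:

```powershell
# Create a switch-independent team from two adapters; traffic is balanced
# across both members, and either adapter can fail without dropping connectivity
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Verify the team and its member adapters
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```

Switch-independent mode requires no switch configuration, which is why it works regardless of adapter brand or the switches involved.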
A VM may still be running, but its workload may have an error, stop running, or suffer from some other downtime-causing event. To enable continuous availability for workloads, Windows Server 2012, like Windows Server 2008 R2, also supports guest clustering, or creating a failover cluster consisting of VMs. A common example is creating a guest cluster of SQL VMs so that the advanced error detection and failover of database instances between cluster nodes can be used even when the nodes are virtualized. Previously, the only shared storage supported for guest clusters was iSCSI. In Windows Server 2012, Fibre Channel shared storage for VMs is enabled by the introduction of the virtual Fibre Channel host bus adapter (HBA) for VMs. This feature enables Fibre Channel-based storage to be zoned and presented directly into VMs. The VMs can then use this as shared storage for guest failover clusters.

The combination of host and guest clustering can provide continuous availability of the workload despite the failure of any layer of the architecture. In the case of a SQL guest cluster, if there is a problem in SQL such as a service or other failure, the database instance can fail over to another node in the guest cluster. If one of the network connections of the underlying physical host is lost, NIC teaming enables the SQL VM to remain accessible. Anti-affinity rules can be configured such that the SQL guest cluster VMs will not all be running on the same physical node; therefore, if a physical node fails, the SQL databases will fail over to another SQL node in the guest cluster running on one of the other nodes in the host cluster. If one of the disks where the SQL VM or its data is being stored fails, Storage Spaces and the Scale-Out File Cluster maintain uninterrupted access to the data.

These examples show that, with proper design, the combination of host and guest clustering, in conjunction with other Windows Server 2012 features like NIC teaming, enables continuous availability of VMs and their workloads.

David Ziembicki
Senior Architect, U.S. Public Sector, Microsoft Services

Failover Clustering enhancements
Failover Clustering is a feature of Windows Server that provides high availability for server workloads. File servers, database servers, and application servers are often deployed in failover clusters so that when one node of the cluster fails, the other nodes can continue to provide services. Failover Clustering also helps ensure workloads can be scaled up and out to meet the demands of your business.

Although the Failover Clustering feature of previous versions of Windows Server provided a robust solution for implementing high-availability solutions, this feature has been significantly enhanced in Windows Server 2012 to provide even greater scalability, faster failover, more flexibility in how it can be implemented, and easier management. The sections that follow describe some of the key improvements to Failover Clustering found in Windows Server 2012. Note that some other cluster-aware features, such as concurrent Live Migrations and Hyper-V Replica, were discussed previously in Chapter 2, "Foundation for building your private cloud."

Increased scalability
Failover Clustering in Windows Server 2012 now provides significantly greater scalability compared to Windows Server 2008 R2 by enabling you to do the following:
■ Scale out your environment by creating clusters with up to a maximum of 64 nodes, compared to only 16 nodes in the previous version.
■ Scale up your infrastructure by running up to 4,000 VMs per cluster and up to 1,024 VMs per node.
These scalability enhancements make Windows Server 2012 the platform of choice for meeting the most demanding business needs for high availability.
CSV2 and scale-out file servers
Version 1 of Cluster Shared Volumes (CSV) was introduced in Windows Server 2008 R2 to allow multiple cluster nodes to access the same NTFS-formatted volume simultaneously. A number of improvements have been made to this feature in Windows Server 2012 to make it easier to configure and use a CSV and to provide increased security and performance. For example, a CSV now appears as a single consistent file namespace called the CSV File System (CSVFS), although the underlying file system technology being used remains NTFS. CSVFS also allows direct I/O for file data access and supports sparse files, which enhances performance when creating and copying VMs. From the security standpoint, a significant enhancement is the ability to use BitLocker Drive Encryption to encrypt both traditional failover disks and CSVs. And it's also easier now to back up and restore a CSV, with in-box support for CSV backups provided by Windows Server Backup. Backups of CSV volumes no longer require redirected I/O in version 2. The volume snapshots can be taken on the host that currently owns the volume, unlike version 1, where they were taken on the node requesting the backup. Configuring a CSV can now be performed with a single right-click in the Storage pane of Failover Cluster Manager.

CSV2 also supports the SMB 3.0 features described in the previous chapter, making possible scale-out file servers that can host continuously available and scalable storage. Scale-out file servers are built on top of the Failover Clustering feature of Windows Server 2012 and the SMB 3.0 protocol enhancements. Scale-out file servers allow you to scale the capacity of your file servers upward or downward dynamically as the needs of your business change. This means you can start with a low-cost solution such as a two-node file server, and then later add additional nodes (to a maximum of four) without affecting the operation of your file server.

Scale-out file servers can be configured by starting the High Availability Wizard from Failover Cluster Manager. Begin by selecting File Server from the list of cluster roles (formerly called clustered services and applications). Then, on the next page of the wizard, select the File Server For Scale-Out Application Data option and continue through the wizard. When the wizard executes, a series of steps is performed to create the scale-out file server. These steps are summarized in a report that the wizard generates.

Scale-out file servers have a few limitations that general-use file servers don't have. Specifically, scale-out file servers don't support:
■ File Server Resource Management (FSRM) features like Folder Quotas, File Screening, and File Classification
■ Distributed File System Replication (DFS-R)
■ NFS
■ Data deduplication

Easier cluster migration
The Migrate A Cluster Wizard makes it easy to migrate services and applications from a cluster running Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012. The wizard helps you migrate the configuration settings for clustered roles, but it doesn't migrate the settings of the cluster, network, or storage, so you need to make sure that your new cluster is configured before you use the wizard to initiate the migration process. In addition, if you want to use new storage for the clustered roles you're migrating, you need to make sure that this storage is available to the destination cluster before running the wizard. Cluster migration also now supports Hyper-V and allows you to export and re-import VMs as part of the migration process.

Support is also now included for copying the configuration information of multiple VMs from one failover cluster to another, making it easier to migrate settings between clusters. And you can migrate configuration information for applications and services on clusters running Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.

Improved Cluster Validation
Cluster validation has been improved in Windows Server 2012 and is much faster than in the previous version of Failover Clustering. The Validate A Configuration Wizard, shown in Figure 3-1, simplifies the process of validating hardware and software for the servers that you want to use in your failover cluster.
New validation tests have been added to this wizard for the Hyper-V role and VMs (when the Hyper-V role is installed) and for verification of CSV requirements. And more detailed control is now provided so that you can validate an explicitly targeted logical unit number (LUN).

Simplified cluster management
The Failover Clustering feature is now fully integrated with the new Server Manager of Windows Server 2012, making it easier to discover and manage the nodes of a cluster. For example, you can update a cluster by right-clicking the cluster name, which in Figure 3-2 has been added to the server group named Group 1.

FIGURE 3-1  Validating a failover cluster using the Validate A Configuration Wizard.

FIGURE 3-2  You can perform cluster-related tasks from the new Server Manager.
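The wizard-driven steps in the preceding sections can also be scripted with the Failover Clustering cmdlets. A sketch, with hypothetical node, cluster, and role names:

```powershell
# Run the same tests as the Validate A Configuration Wizard (Figure 3-1)
Test-Cluster -Node "Node1","Node2"

# If validation passes, create the cluster with a static administration address,
# then add the scale-out file server role to it
New-Cluster -Name "Cluster1" -Node "Node1","Node2" -StaticAddress 192.168.1.100
Add-ClusterScaleOutFileServerRole -Name "SOFS1" -Cluster "Cluster1"
```

Test-Cluster produces the same validation report that the wizard generates, which is useful for documenting a supported configuration before going into production.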
Server groups simplify the job of managing sets of machines such as the nodes in a cluster. A single-click action can add all the nodes in a cluster to a server group to facilitate remote multi-server management. The capabilities of the new Server Manager of Windows Server 2012 are described in more detail later in this chapter.

Active Directory integration
Failover Clustering in Windows Server 2012 is more integrated with Active Directory than in previous versions. For example, support for delegated domain administration is now provided to enable intelligent placement of cluster computer objects in Active Directory. This means, for example, that you can now create cluster computer objects in targeted organizational units (OUs) by specifying the distinguished name (DN) of the target OU. And as a second example, you can create cluster computer objects by default in the same OUs as the cluster nodes. For more information on Failover Clustering integration with Active Directory, see the sidebar "Clustering and Active Directory improvements."

Clustering and Active Directory improvements
In Windows Server 2012, Failover Clustering is more integrated with Active Directory. Improvements have been made based on the problems that administrators were running into.

One of the big call generators to Microsoft Support is the creation of the cluster, or of names within the cluster. When the cluster or its names were created, the Active Directory object would be created only in the default Computers container. In many domain environments, the default Computers container is locked down because domain administrators did not want objects created in this container. When this is the case, you had to pull in a domain administrator to pre-create objects in the OU where the object needed to be, set permissions on the object, and do a few other tasks. This tended to be a long, drawn-out process if there were issues, because you had to wait for someone else to fix your problems before you could continue. Now, Clustering is smarter about where it places objects. When creating a cluster, it looks in the OU where the cluster node names are located and creates the Cluster Name in the same OU. So you no longer need to pre-create the objects in a separate OU; Clustering does it for you.

Let's take this a step further. Say that in your domain environment, you want to separate the physical machines (an OU called Physical) from the clustered names (an OU called Clusters). This is not a problem, because you can pass the OU information during the creation of the cluster. When doing this through the Failover Cluster Manager interface, you would input the name in distinguished name format. If you wanted to do this in Windows PowerShell, the command would be:

New-Cluster -Name "CN=MyCluster,OU=Clusters,DC=Contoso,DC=Com"

Another call generator is the accidental deletion of the Virtual Computer Object from Active Directory. When a name comes online, Failover Clustering checks that the objectGUID it holds matches the one in Active Directory. If the account had been deleted, the name would fail to come online. You had to go through a utility such as ADRESTORE.EXE, restore the object from the Recycle Bin (if enabled), do an Active Directory restore, or simply delete the resource and create it again. This is no longer the case in Windows Server 2012 Failover Clustering, because there is built-in "repair" functionality for just these instances. If the name has been removed from Active Directory, the resource will still come online. It will still log an event about the resource so that you are notified, but it gives you time to repair the object without experiencing downtime. There is a "repair" option you can select that will go into Active Directory and re-create the object for you.

Failover Clustering is no longer dependent on a writeable domain controller. In some environments where perimeter networks are in place, the perimeter network will usually contain a Read-Only Domain Controller (RODC). Failover Clustering will now work in those environments because the requirement has been removed.

Along those same lines, we can talk about virtualized environments. For many companies, moving to virtualized environments is proving to be cost effective. However, there were "gotchas." In some cases, cluster design was not planned to consider the need for a writable domain controller. So, let's say you want to virtualize all your domain controllers and make them highly available by placing them all in a cluster and storing them on CSVs. In the event that all nodes of the cluster are down, you are placed in a catch-22 situation: cluster services and CSVs depend on a writable domain controller for domain authentication in the beginning, but your virtualized domain controllers need the cluster services running in order to start. The Cluster Service would not start because it could not get to the domain controller, and the domain controller would not start because the cluster was down!

In Windows Server 2012 Failover Clustering, this has changed. The Cluster Service will now start using a special internal local account. All other nodes in the cluster will start and join, as each is using this special account. The CSVs would also
come online. It is almost like we have our own hidden domain just for ourselves to use. Because the Cluster Service is started and the CSVs are online, the domain controllers can start.

We have made big strides in the way we integrate with Active Directory, and all of it is for the better. Cluster administrators spoke, and Microsoft listened.

John Marlin
Senior Support Escalation Engineer

Task Scheduler integration
Failover Clustering in Windows Server 2012 is also integrated into the Task Scheduler, which allows you to configure tasks you want to run on clusters in three ways:
■ ClusterWide tasks are scheduled to run on all nodes in the cluster.
■ AnyNode tasks are scheduled to run on a single, randomly selected cluster node.
■ ResourceSpecific tasks are scheduled to run only on the cluster node that currently owns the specified resource.
You can configure clustered tasks by using Windows PowerShell. Table 3-1 lists the cmdlets available for this purpose. For more information on any of these cmdlets, use Get-Help <cmdlet>.

TABLE 3-1  Windows PowerShell Cmdlets for Configuring Clustered Tasks

Windows PowerShell Cmdlet          | Description
Register-ClusteredScheduledTask    | Create a new clustered scheduled task
Unregister-ClusteredScheduledTask  | Delete a clustered scheduled task
Set-ClusteredScheduledTask         | Update an existing clustered task
Get-ClusteredScheduledTask         | Enumerate existing clustered tasks

VM priority
Efficient automatic management of clustered VMs and other clustered roles is now possible in Windows Server 2012 by assigning a relative priority to each VM in the cluster. Once this has been configured, the cluster will then automatically manage the VM or other clustered role based on its assigned priority.
Four possible priorities can be assigned to a clustered VM or clustered role:
■ High
■ Medium (the default)
■ Low
■ No Auto Start
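As an example of the cmdlets listed in Table 3-1, the following sketch registers a cluster-wide task; the task name and the script it runs are hypothetical:

```powershell
# Define what to run and when; these cmdlets come from the ScheduledTasks module
$action = New-ScheduledTaskAction -Execute "C:\Scripts\CollectLogs.cmd"
$trigger = New-ScheduledTaskTrigger -Daily -At 3am

# Register the task so that it runs on every node in the cluster
Register-ClusteredScheduledTask -TaskName "NightlyLogCollection" `
    -TaskType ClusterWide -Action $action -Trigger $trigger
```

Substituting AnyNode or ResourceSpecific for the TaskType value produces the other two behaviors described above (ResourceSpecific also requires the target resource to be specified).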
Assigning priorities to clustered VMs or other clustered roles lets you control both the start order and placement of each VM or other role in the cluster. For example, VMs that have higher priority are started before those having lower priority. The benefit of this is that it allows you to ensure that the most important VMs are started first and are running before other VMs are started. In addition, support for preemption is included so that low-priority VMs can be automatically shut down in order to free up resources so that higher-priority VMs can successfully start. And although Hyper-V in Windows Server 2012 now supports concurrent Live Migrations, the order in which VMs that are queued for Live Migration but not yet migrated are processed can also be determined on the basis of priority.

VMs that have higher priority are also placed on appropriate nodes before VMs with lower priority. This means, for example, that VMs can be placed on the nodes that have the best available memory resources, with memory requirements being evaluated on a per-VM basis. The result is enhanced failover placement, and this capability is also Non-Uniform Memory Access (NUMA)-aware.

Figure 3-3 shows Failover Cluster Manager being used to manage a two-node cluster that has two cluster roles running on it: a scale-out file server and a VM. Right-clicking the clustered VM and selecting Change Startup Priority allows you to change the priority of the VM from its default Medium setting to High.

FIGURE 3-3  Using Failover Cluster Manager to configure the priority of a clustered VM.
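Priorities can also be set from Windows PowerShell through the Priority property of the cluster group, where 3000 is High, 2000 is Medium, 1000 is Low, and 0 is No Auto Start. A sketch, with a hypothetical VM name:

```powershell
# Raise the clustered VM's startup priority from Medium (2000) to High (3000)
(Get-ClusterGroup -Name "SRV-A").Priority = 3000

# Confirm the change
Get-ClusterGroup -Name "SRV-A" | Format-Table Name, Priority
```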
Failover Clustering placement policies for Hyper-V
Windows Server Failover Clustering provides a critical piece of Hyper-V infrastructure, not just for high availability but also for mobility. A key concept of a virtualized or private cloud environment is to abstract workloads from their underlying physical resources, and Failover Clustering enables this by allowing the movement and placement of VMs between different physical hosts using live migration with no perceived downtime. There are a few placement best practices that can allow you to optimize the cluster for different Hyper-V scenarios.

Default failover policy
When there is a failure of a node, VMs are distributed across the remaining cluster nodes. In previous versions of Windows Server, any resource would be distributed to the nodes hosting the fewest number of VMs. In Windows Server 2012, enhancements in this logic have been made to redistribute the VMs based on the most commonly constrained resource, host memory. Each VM is placed on the node with the freest memory resources, and the memory requirements are evaluated on a per-VM basis, including checks to see if the VM is NUMA-aware.
If a cluster node hosting several VMs crashes, the Cluster Service will find the highest-priority VM, then look across the remaining nodes to determine which node currently has the freest memory. The VM is then started on that node. This process repeats for all the VMs, from the highest priority to the lowest priority, until all VMs are placed.

VM Priority
In Windows Server 2012, each VM running on a cluster can be assigned a priority: High, Medium, or Low. This can be used to ensure that high-priority VMs are given preferential treatment for cluster operations, such as ensuring that the organization's most critical services or key infrastructure roles come online before less important workloads.
If a cluster node hosting several VMs crashes, the high-priority VMs will start first, then the medium-priority VMs, and finally the low-priority ones. This same logic is applied for other cluster operations, such as multiple live migrations or Node Maintenance Mode, where the high-priority VMs are always moved first.

Preferred Owners
Since earlier versions of Windows Server, it has been possible to configure the preference for node failover order for each VM. This can be helpful in an environment where it is important for certain VMs to stay on certain nodes, such as when there is a primary datacenter where the VMs should usually run (the Preferred Owners) and a backup datacenter available for disaster recovery of the VMs if the primary site is unavailable.
If a cluster node hosting several VMs crashes, a high-priority VM will attempt to move to the first node in the list of Preferred Owners. If that node is not available, then the VM will attempt to move to the second node in the Preferred Owners list. If none of the Preferred Owners are available, then it will move to the first node that is on the Possible Owners list.

Possible Owners
The Possible Owners setting for each VM also existed in earlier versions of Windows Server. It enables VMs to move to and start on a cluster node when none of the Preferred Owners are available. This can be used in an environment where VMs should still run on a host even when none of the Preferred Owners are available. In a multisite cluster, the nodes at the backup site would be assigned as Possible Owners, but not as Preferred Owners. In this scenario, the VMs would fail over to the secondary site only when none of the nodes at the primary site (the Preferred Owners) are available.
If a cluster node hosting several VMs crashes, a high-priority VM will attempt to move to the first node in the list of Preferred Owners. If none of the Preferred Owners are available, then it will move to the first node that is on the Possible Owners list. If the first node in the Possible Owners list is not available, then it will move to the next node on the list. If none of the nodes in either the Preferred Owners or Possible Owners lists are available, then the VM will move to any other node, but remain offline. Depending on failback policies, the VM can move back to a Preferred Owner or Possible Owner and start as soon as one of those nodes becomes available.

Failback
Another setting for each VM that continues to be important in Windows Server 2012 is the option to move the VM back to Preferred Owners or Possible Owners, starting from the most Preferred Owner. This feature is helpful if you wish to keep certain VMs on the same hosts and return those VMs to a host once it recovers from a crash.
If a cluster node recovers from a crash and rejoins the cluster membership, any VMs that are not running on a Preferred Owner will be notified that this node is now available for placement. Starting with the high-priority VMs that are running on a Possible Owner (or are offline on another node), each VM will determine whether this node is a better host, then live-migrate (or start) the VM on that Preferred Owner.
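The Preferred Owners and failback settings described above can also be managed from Windows PowerShell. A sketch, with hypothetical group and node names (the AutoFailbackType property value of 1 enables failback):

```powershell
# List the VM's preferred owners in failover order
Get-ClusterOwnerNode -Group "SRV-A"

# Make Node1 the most preferred owner, with Node2 as the fallback
Set-ClusterOwnerNode -Group "SRV-A" -Owners "Node1","Node2"

# Allow the group to fail back to a preferred owner once it recovers
(Get-ClusterGroup -Name "SRV-A").AutoFailbackType = 1
```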
Persistent Mode
One problem that is often seen in highly virtualized environments is a "boot storm," which happens when a large number of VMs are started simultaneously. Starting a VM requires more host resources than standard running operations, so starting a lot of VMs can sometimes overload the host, affecting its performance or even causing it to crash (if certain host reserves are not set). As a safety precaution, during failover or when a node is restarted, the number of VMs that will start simultaneously is limited (High priority first), and the rest are queued up to start on that node. Even when these VMs are simultaneously starting, they are slightly staggered to help spread out the demands on the host. There are still some settings that can be configured to avoid these "boot storms."
Persistent Mode was introduced in Windows Server 2008 R2 and provides the ability to keep a VM on the last host it was deliberately placed on (either by an administrator or a System Center Virtual Machine Manager placement policy). If an entire cluster crashes, each VM will wait for the node it was previously hosted on to come online before starting up, still honoring high-priority VMs first. This prevents all of the VMs across the cluster from trying to start up on the first node(s) that come online, helping to avoid a "boot storm." There is a default amount of time the Cluster Service will wait for the original node to rejoin the cluster. If the node does not join within this period, the VM will be placed on the most Preferred Owner, ensuring that the VM will still come online while having given that new host an opportunity to start its own VMs.

Auto-Start
There may be cases when there are unimportant VMs that should not be started after a cluster failover or a crash, giving the other VMs an opportunity to fail over and come online quickly. The Auto-Start property has also existed in previous versions of Windows Server, and if it is disabled, the VM will not be automatically started when it is placed on a node. This can be useful in highly virtualized environments when it is important to keep hosts and critical infrastructure VMs running, while not worrying about constraining resources or "boot storms" caused by VMs that do not need to be continually available, yet are still hosted on the cluster. These VMs can be started later by the administrator or automatically using a script.
  • 113. Continuous availability Chapter 3 103Anti-AffinityThe final placement policy has also existed before Windows Server 2012, but looksat other VMs, rather than the hosts. The cluster property, AntiAffinityClassName(AACN), enables custom tagging of a VM so that different VMs may share or havedifferent AACNs. VMs that share the same AACN will distribute themselves acrossdifferent hosts automatically. This can be useful to separate tenets or VMs with thesame infrastructure roles across different nodes in the cluster. For example, havingall the virtualized DNS servers or guest cluster nodes on the same host would bea single point of failure if that node crashes, so spreading these VMs out across­different hosts helps maintain continual service availability.If there is a cluster with four nodes and four VMs that have the AntiAffinityClass-Name of “blue,” then by default, each node would host one of the “blue” VMs. Ifthere are more “blue” VMs with the same AACN than there are nodes in the cluster,then there will be more than 1 “blue” VM on each node, but they will still distributethemselves as evenly as possible.ConclusionUsing these policies, it is possible to optimize the placement of VMs on a WindowsServer 2012 Failover Cluster. Always remember to configure the priority to the VMsso that high-priority VMs are placed first, and consider how VM placement will lookwhen any one of the nodes becomes unavailable.Symon PerrimanTechnical EvangelistVirtual machine monitoringEnsuring high availability of services running in clustered VMs is important because ­serviceinterruptions can lead to loss of user productivity and customer dissatisfaction. A new­capability of Failover Cluster Manager in Windows Server 2012 is the ability to monitor thehealth of clustered VMs by determining whether business-critical services are running withinVMs running in clustered environments. 
By enabling the host to recover from service failures in the guest, the cluster service in the host can take remedial action when necessary in order to ensure greater uptime for services your users or customers need. You enable this functionality by right-clicking the clustered VM and selecting Configure Monitoring from the More Actions menu item, as shown here:
You then select the service or services you want to monitor on the VM. If the selected service fails, the VM can either be restarted or moved to a different cluster node, depending on how the service restart settings and cluster failover settings have been configured:
You can also use Windows PowerShell to configure VM monitoring. For example, to configure monitoring of the Print Spooler service on the VM named SRV-A, you could use this command:

Add-ClusterVMMonitoredItem -vm SRV-A -service spooler

For VM monitoring to work, the guest and host must belong to the same domain or to domains that have a trust relationship. In addition, you need to enable the Virtual Machine Monitoring exception in Windows Firewall on the guest:

If Windows PowerShell Remoting is enabled in the guest, then you don't need to enable the Virtual Machine Monitoring exception in Windows Firewall when you configure VM monitoring using Windows PowerShell. You can enable Windows PowerShell Remoting by connecting to the guest, opening the Windows PowerShell console, and running this command:

Enable-PSRemoting

Then, to configure monitoring of the Print Spooler service on the guest, you would open the Windows PowerShell console on the host and run these commands:

Enter-PSSession
Add-ClusterVMMonitoredItem -service spooler
Exit-PSSession
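After configuration, the monitored services can be listed or removed with the companion cmdlets. A brief hedged sketch, reusing the example VM name SRV-A from above:

```powershell
# List the services currently monitored on the clustered VM
Get-ClusterVMMonitoredItem -VirtualMachine SRV-A

# Stop monitoring the Print Spooler service if it is no longer needed
Remove-ClusterVMMonitoredItem -VirtualMachine SRV-A -Service spooler
```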
VM monitoring can monitor the health of any NT service, such as the Print Spooler or IIS, or even a server application like SQL Server. VM monitoring also requires the use of Windows Server 2012 for both the host and guest operating systems.

Node vote weights
The quorum for a failover cluster is the number of elements that need to be online in order for the cluster to be running. Each element has a "vote," and the votes of all elements determine whether the cluster should run or cease operations. In the previous version of Failover Clustering in Windows Server 2008 R2, the quorum could include nodes, but each node was treated equally and assigned one vote. In Windows Server 2012, however, the quorum settings can be configured so that some nodes in the cluster have votes (their vote has a weight of 1, which is the default), whereas others do not have votes (their vote has a weight of 0).

Node vote weights provide flexibility that is particularly useful in multisite clustering scenarios. By appropriately assigning a weight of 1 or 0 as the vote for each node, you can ensure that the primary site has the majority of votes at all times. Note also that a hotfix has been released that allows you to backport this feature to Windows Server 2008 R2 SP1 failover clusters.

Dynamic quorum
Another new feature of Failover Clustering in Windows Server 2012 is the ability to change the quorum dynamically based on the number of nodes currently in active membership in the cluster. This means that as nodes in a cluster are shut down, the number of votes needed to reach quorum changes instead of remaining the same, as in previous versions of Failover Clustering. Dynamic quorum allows a failover cluster to remain running even when more than half of the nodes in the cluster fail.
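Node vote weights and dynamic quorum are both exposed as simple cluster properties you can inspect and set from Windows PowerShell. A hedged sketch; the node name "DR-Node1" is a placeholder for a node at your secondary site:

```powershell
# Remove the vote from a node at the disaster-recovery site
(Get-ClusterNode -Name "DR-Node1").NodeWeight = 0

# Review the current vote assignments across the cluster
Get-ClusterNode | Format-Table Name, NodeWeight, State

# Dynamic quorum is controlled by a cluster-level property (1 = enabled)
(Get-Cluster).DynamicQuorum
```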
The feature works with the following quorum models:
■■ Node Majority
■■ Node and Disk Majority
■■ Node and File Share Majority
It does not work, however, with the Disk Only quorum model.

Node drain
When a failover cluster node needs to be taken down for maintenance, the clustered roles hosted on that node first need to be moved to another node in the cluster. Examples of the kind of maintenance you might need to perform on a cluster node include upgrading the hardware on the node or applying a service pack.

In the previous version of Failover Clustering in Windows Server 2008 R2, taking down a node for maintenance was a manual process that required placing the node into a Paused state and then manually moving the applications and services running on the node to another node in the cluster.
However, Failover Clustering in Windows Server 2012 now makes performing maintenance on cluster nodes much easier. A new feature called node drain lets you automate moving the clustered roles off the node scheduled for maintenance and onto other nodes in the cluster. Draining a node can be done manually with a single click in the Failover Cluster Manager console (as shown in Figure 3-4), or you can script it with Windows PowerShell for automation purposes by using the Suspend-ClusterNode cmdlet.

FIGURE 3-4  Initiating a node drain to take down a node for maintenance.

Initiating the node drain process does the following:
1. Puts the node into the Paused state to prevent roles hosted on other nodes from being moved to this node
2. Sorts the roles on the node according to the priority you've assigned them (assigning priorities to roles is another new feature of Failover Clustering in Windows Server 2012)
3. Moves the roles from the node to other nodes in the cluster in order of priority (VMs are live-migrated to other hosts)
Once the process is completed, the node is down and is ready for maintenance.

Cluster-Aware Updating
Cluster-Aware Updating (CAU) is a new feature of Windows Server 2012 that lets you automatically apply software updates to the host operating system in clustered servers with little or no downtime. CAU thus both simplifies update management of cluster nodes and helps ensure your cluster remains available at all times.
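Both node drain and a CAU updating run can be driven entirely from Windows PowerShell. A hedged sketch, assuming a node named "Node1" and a cluster named "Cluster1":

```powershell
# Drain the node: pause it and move its roles to other nodes by priority
Suspend-ClusterNode -Name "Node1" -Drain

# ...perform the maintenance, then bring the node back into service
Resume-ClusterNode -Name "Node1" -Failback Immediate

# Kick off a Cluster-Aware Updating run against the whole cluster
Invoke-CauRun -ClusterName "Cluster1" -CauPluginName Microsoft.WindowsUpdatePlugin -Force
```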
CAU functionality works seamlessly with your Windows Server Update Services (WSUS) infrastructure and is installed automatically on each cluster node. CAU can be managed from any server that has the Failover Clustering feature installed but does not belong to the cluster whose nodes you wish to update.

As shown previously in Figure 3-2, you can use Server Manager to initiate the process of updating a cluster. Selecting the Update Cluster menu item opens the Cluster-Aware Updating dialog box and connects to the cluster you selected in Server Manager. You can also open the Cluster-Aware Updating dialog box from Failover Cluster Manager.

Clicking the Preview Updates For This Cluster option opens the Preview Updates dialog box, and clicking Generate Update Preview List in this dialog box downloads a list of the updates available for nodes in the cluster:
Closing the Preview Updates dialog box returns you to the Cluster-Aware Updating dialog box, where clicking the Apply Updates To This Cluster option starts the Cluster-Aware Updating Wizard:

Once you've walked through the steps of this wizard and clicked Next, the update process begins. Cluster nodes are then scanned to determine which updates they require, in the following way:
1. Nodes are prioritized according to the number of workloads they have running on them.
2. The node with the fewest workloads is then drained to place it into maintenance mode. This causes the workloads running on the node to be moved automatically to other active nodes in the cluster (see the section "Node drain," earlier in this chapter).
3. The Windows Update Agent on this node downloads the necessary updates from either Windows Update or from your WSUS server, if you have one deployed in your environment.
4. Once the node has been successfully updated, the node is resumed and becomes an active node in the cluster again.
5. The process is then repeated on each remaining node in the cluster in turn, according to priority.

CAU employs an updating run profile to store the settings for how exceptions are handled, time boundaries for the update process, and other aspects of the node updating process. You
can configure these settings by clicking the Create Or Modify Updating Run Profile option in the Cluster-Aware Updating dialog box shown previously. Doing this opens the Updating Run Profile Editor, as shown here:

Why CAU?
Since Failover Clustering was first introduced back in Microsoft Windows NT 4.0 Service Pack 3, there has been an issue with updating the nodes of the cluster. With Windows NT, because we could have only 2 nodes, the problem was relatively easy to solve. You could put the individual nodes into separate update groups, or create a custom batch file or script to move everything off a single node, update it, and then repeat on the other side at a later time. As clustering has improved, and we have increased the number of nodes you can have in a cluster, updating gets more and more complex. With Windows Server 2008 R2 allowing up to 16 nodes in a cluster, maintaining an update methodology that keeps all resources online as much as possible in large clusters is cumbersome and replete with possible errors. This contributes to the most common issue I see at customer sites when I am brought in to review clusters or troubleshoot what went wrong in a failure: the hotfixes or drivers installed on the nodes of a cluster are at different versions.
The answer to this in Windows Server 2012 is CAU, which allows all nodes in the cluster to be updated, one at a time, while maintaining the availability of applications. By having an update process that is aware of all nodes in the cluster and can move the resources around, we are able to maintain availability and still update all nodes of the cluster. This also helps reduce the human error element when relying on someone to follow the best practice of moving resources off and pausing a node; this action is automated in CAU. With CAU, we can coordinate and install updates and hotfixes on all nodes, moving the groups around to maintain availability and still get everything up to date. Because CAU also integrates with normal Windows updating, you can control which updates are applied by using WSUS and approving only the updates that are appropriate for your environment.

Matthew Walker
Premier Field Engineer

Guest clustering
Failover Clustering of Hyper-V can be implemented in two ways:
■■ Host clustering, in which the Failover Clustering feature runs in the parent partition of the Hyper-V host machines. In this scenario, the VMs running on the hosts are managed as cluster resources, and they can be moved from one host to another to ensure availability of the applications and services provided by the VMs.
■■ Guest clustering, in which the Failover Clustering feature runs in the guest operating system within VMs. Guest clustering provides high availability for applications and services hosted within VMs, and it can be implemented either on a single physical server (Hyper-V host machine) or across multiple physical servers.
Host clustering helps ensure continued availability in the case of hardware failure or when you need to apply software updates to the parent partition.
Guest clustering, by contrast, helps maintain availability when a VM needs to be taken down for maintenance. Implementing guest clustering on top of host clustering can provide the best of both worlds.

Guest clustering requires that the guest operating systems running in VMs have direct access to common shared storage. In previous versions of Windows Server, the only way to provision such shared storage in a guest clustering scenario was to have iSCSI initiators running in the guest operating systems so they could connect directly with iSCSI-based storage. Guest clustering in previous versions of Windows Server did not support using Fibre Channel SANs for shared storage. VMs running Windows Server 2008 R2 in a guest clustering scenario can use Microsoft iSCSI Software Target 3.3, which can be downloaded from the Microsoft Download Center. Figure 3-5 illustrates the typical way guest clustering was implemented in Windows Server 2008 R2.
FIGURE 3-5  Implementing guest clustering with Failover Clustering in Windows Server 2008 R2 using iSCSI Software Target.

In Windows Server 2012, iSCSI Software Target is now an in-box feature integrated into Failover Clustering, making it easier to implement guest clustering using shared iSCSI storage. By starting the High Availability Wizard from the Failover Cluster Manager console, you can quickly add the iSCSI Target Server as a role to your cluster. You can also do this with Windows PowerShell by using the Add-ClusteriSCSITargetServerRole cmdlet.

But iSCSI is now no longer your only option as far as shared storage for guest clustering goes. That's because Windows Server 2012 now includes an in-box Hyper-V Virtual Fibre Channel adapter that allows you to connect directly from within the guest operating system of a VM to LUNs on your Fibre Channel SAN (see Figure 3-6). The new virtual Fibre Channel adapter supports up to four virtual HBAs assigned to each guest, with separate worldwide names (WWNs) assigned to each virtual HBA and N_Port ID Virtualization (NPIV) used to register guest ports on the host.

FIGURE 3-6  Failover Clustering in Windows Server 2012 now allows VMs to connect directly to a Fibre Channel SAN.
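The clustered iSCSI Target Server role mentioned above can be added with a single line of Windows PowerShell. A hedged sketch; the role name and the cluster disk name are placeholders for resources in your own cluster:

```powershell
# Create a clustered iSCSI Target Server backed by an available cluster disk
Add-ClusteriSCSITargetServerRole -Name "iSCSI-Tgt1" -Storage "Cluster Disk 1"
```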
Configuring Fibre Channel from the guest
Before you configure Fibre Channel as the shared storage for VMs in a guest cluster, make sure that you have HBAs installed in your host machines and connected to your SAN. Then, open the Virtual SAN Manager from the Hyper-V Manager console and click Create to add a new virtual Fibre Channel SAN to each host:

Provide a name for your new virtual Fibre Channel SAN and configure it as needed. Then open the settings for each VM in your guest cluster and select the Add Hardware option to add the virtual Fibre Channel adapter to the guest operating system of the VM:
Then simply select the virtual SAN you created earlier, and once you're done, each VM in your guest cluster can use your SAN for shared storage:
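With the virtual SAN created above (here assumed to be named "VSAN1"), the adapter can also be attached from Windows PowerShell; a hedged sketch using the example VM SRV-A:

```powershell
# Add a synthetic Fibre Channel adapter to the VM, attached to the virtual SAN
Add-VMFibreChannelAdapter -VMName "SRV-A" -SanName "VSAN1"

# Confirm the adapter and its assigned worldwide names (WWNs)
Get-VMFibreChannelAdapter -VMName "SRV-A"
```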
Guest clustering in Windows Server 2012 also supports other new failover cluster features, such as CAU, node drain, Storage Live Migration, and much more.

Guest clustering vs. VM monitoring
Guest clustering in Windows Server 2012 is intended for server applications that you currently have clustered on physical servers. For example, if you currently have Exchange Server or SQL Server deployed on host clusters, you will have the additional option of deploying them on guest clusters (which can themselves be deployed on host clusters) for enhanced availability when you migrate your infrastructure to Windows Server 2012. VM monitoring, by contrast, can enhance availability for other server roles in your environment, such as your print servers. You can also combine VM monitoring with guest clustering for even greater availability.

Guest Clustering: key differences between the Windows Server 2008 R2, Windows Server 2012, and VMware approaches
When we speak about clusters, we usually draw a picture of a few servers and a shared disk resource, required to build a cluster. Although for certain applications, like Exchange Server 2010, SQL Server 2012, or System Center 2012, the clustering architecture may not require a shared disk resource, there are still plenty of scenarios where shared disks are essential to build a cluster.

In Windows Server 2008 R2, Hyper-V doesn't provide a way to share a single virtual hard disk (VHD) or pass-through disk between VMs. It also doesn't provide native access to Fibre Channel, so you can't share a LUN. The only way to build guest clusters in Windows Server 2008 R2 is to use an iSCSI initiator. You can build a cluster with up to 16 nodes, and you can freely live-migrate guest clusters and use dynamic memory in those machines.

In VMware vSphere, you can add the emulated LSI Logic SAS and Parallel controllers to provide a shared VMDK or a LUN to two VMs.
However, you can't create a cluster of more than two nodes on vSphere with built-in disk-sharing support. Note that the use of vSphere advanced techniques like vMotion or FT is not supported for guest clusters in a VMware environment. The same applies to hosts with overcommitted memory.

Windows Server 2012 Hyper-V brings a synthetic Fibre Channel interface to VMs, building clusters without limitation on the number of nodes. Here, 16-node guest clusters of Windows Server 2008 R2 and 64-node guest clusters of Windows Server 2012 come to reality.

Alex A. Kibkalo
Architect, Microsoft MEA HQ
Enhanced Windows PowerShell support
Failover Clustering in Windows Server 2012 also includes enhanced Windows PowerShell support, with the introduction of a number of new cmdlets for managing cluster registry checkpoints, creating scale-out file servers, monitoring the health of services running in VMs, and other capabilities. Table 3-2 lists some of the new Windows PowerShell cmdlets for Failover Clustering.

TABLE 3-2  New Windows PowerShell Cmdlets for Failover Clustering

Add-ClusterCheckpoint, Get-ClusterCheckpoint, Remove-ClusterCheckpoint: Manage cluster registry checkpoints, including cryptographic checkpoints
Add-ClusterScaleOutFileServerRole: Creates a file server for scale-out application data
Add-ClusterVMMonitoredItem, Get-ClusterVMMonitoredItem, Remove-ClusterVMMonitoredItem, Reset-ClusterVMMonitoredState: Monitor the health of services running inside a VM
Update-ClusterNetworkNameResource: Updates the private properties of a Network Name resource and sends DNS updates
Test-ClusterResourceFailure: Replaces the Fail-ClusterResource cmdlet

Learn more
For more information about the various Failover Clustering improvements in Windows Server 2012, see the following topics in the TechNet Library:
■■ "What's New in Failover Clustering"
■■ "Failover Clustering Overview"
■■ "Cluster-Aware Updating Overview"
■■ "High-Performance, Continuously Available File Share Storage for Server Applications Technical Preview"
■■ "iSCSI High-Availability Block Storage Technical Preview"
For more information on CAU, download the "Understand and Troubleshoot Cluster-Aware Updating (CAU) in Windows Server '8' Beta" topic. For additional information concerning Failover Clustering improvements in Windows Server 2012, see the Failover Clustering and Network Load Balancing Team Blog.

SMB Transparent Failover
Windows Server 2012 includes the updated version 3.0 of the Server Message Block (SMB) file-sharing protocol. Some of the features of SMB 3.0 were described in the previous chapter. SMB Transparent Failover is a new feature that facilitates performing maintenance of nodes in a clustered file server without interrupting server applications that store data on Windows Server 2012 file servers. SMB Transparent Failover can also help ensure continuous availability by transparently reconnecting to a different cluster node when a failure occurs on one node. For information about other SMB 3.0 features that can help increase reliability, availability, manageability, and high performance for your business-critical applications, see Chapter 2.

Learn more
For more information about SMB Transparent Failover, see the topic "High-Performance, Continuously Available File Share Storage for Server Applications Technical Preview" in the TechNet Library. For additional information, see the blog post "SMB 2.2 is now SMB 3.0" on the Windows Server Blog.

Storage migration
Storage migration is a new feature of Hyper-V in Windows Server 2012 that lets you move all of the files for a VM to a different location while the VM continues running. This means that with Hyper-V hosts running Windows Server 2012, it's no longer necessary to take a VM offline when you need to upgrade or replace the underlying physical storage.
We briefly looked at storage migration in Chapter 2 in the context of performing a live migration without shared storage, so here we'll dig a bit deeper and look at how storage migration actually works.

When you initiate a storage migration for a VM, the following takes place:
1. A new VHD or VHDX file is created in the specified destination location (storage migration works with both VHD and VHDX).
2. The VM continues to both read and write to the source VHD, but new write operations are now mirrored to the destination disk.
3. All data is copied from the source disk to the destination disk in a single-pass copy operation. Writes continue to be mirrored to both disks during this copy operation, and uncopied blocks on the source disk that have been updated through a mirrored write are not recopied.
4. When the copy operation is finished, the VM switches to using the destination disk.
5. Once the VM is successfully using the destination disk, the source disk is deleted and the storage migration is finished. If any errors occur, the VM can fail back to using the source disk.

Moving a VM from test to production without downtime
A VM that is in a test environment typically lives on a Hyper-V server, usually nonclustered, and usually not in the best location or on the best hardware. A VM that is in production typically lives on a cluster, on good hardware, and in a highly managed and monitored datacenter.

Moving from one to the other has always involved downtime—until now. Hyper-V on Windows Server 2012 enables some simple tasks that greatly increase the flexibility of the administrator when it comes to movement and placement of running VMs. Consider this course of events.

I create a VM on a testing server, configure it, get signoff, and make the VM ready for production. With Hyper-V shared-nothing live migration, I can migrate that VM to a production cluster node without taking the VM offline. The process will copy the VHDs using storage migration, and then, once storage is copied, perform a traditional live migration between the two computers. The only thing the computers need is Ethernet connectivity. In the past, this would have required an import/export operation.

Now that the VM is running on my node, I need to cluster it. This is a two-step process. First, using storage migration, I can move the VHD of the VM onto my CSV volume for the cluster.
I could also move it to the file share that is providing storage for the cluster, if I'm using Hyper-V over SMB. Regardless of the configuration, the VHD can be moved to a new location without any downtime in the VM. In the past, this would have taken an import/export of the VM or, at minimum, a shutdown and manual movement of the VHD file.

Finally, I can fire up my Failover Cluster Manager and add the VM as a clustered object. Windows Server 2012 lets you add running VMs to a failover cluster without needing to take the VMs offline to do this.
There you have it: start the VM on the stand-alone test server, move the VM to the cluster and cluster storage, and finally create the cluster entry for the VM, all without any downtime required.

Corey Hynes
Architect, holSystems

Storage migration of unclustered VMs can be initiated from the Hyper-V Manager console by selecting the VM and clicking the Move option. Storage migration of clustered VMs cannot be initiated from the Hyper-V Manager console; the Failover Cluster Manager console must be used instead. You can also perform storage migrations with Windows PowerShell by using the Move-VMStorage cmdlet.

Storage Migration: real-world scenarios
Storage migration simply adds greater flexibility regarding when your VMs can be moved from one storage volume to another. This becomes critical as we move from high-availability clusters to continuously available clusters. This, of course, adds tremendous agility, allowing IT to better respond to changing business requirements.

Let's consider two kinds of scenarios: out of space and mission-critical workloads.

Out of space
You just ran out of space on the beautiful shiny storage enclosure you bought about 12 months ago. This can happen for many reasons, but the common ones include the following:
• Unclear business requirements when the enclosure was acquired
• Server sprawl or proliferation, which is a very common problem in most established virtualization environments
That storage enclosure probably has hundreds or thousands of VMs, and performing the move during the shrinking IT maintenance windows is simply not realistic. With live storage migration, IT organizations can essentially move the VMs to other storage units outside of typical maintenance windows.

Mission-critical workloads
The workload associated with your most mission-critical VMs is skyrocketing. You bought a new high-performance SAN to host this workload, but you can't take the VMs down to move them to the new SAN.
This is a common problem in organizations with very high uptime requirements or organizations with very large databases, where the move to the new storage volume would simply take too long.

Adiy Qasrawi
Consultant, Microsoft Consulting Services (MCS)

Learn more
For more information about storage migration in Windows Server 2012, see the topic "High-Performance, Continuously Available File Share Storage for Server Applications Technical Preview" in the TechNet Library. For additional information, see the blog post "Windows Server 8 – Truly Live Storage Migration" on the team blog of MCS @ Middle East and Africa.

Also see the following posts by Ben Armstrong on his Virtual PC Guy blog:
■■ "Doing a Simple Storage Migration with Windows Server 8"
■■ "Using PowerShell to Storage Migrate with Windows Server 8"
■■ "How does Storage Migration actually work?"
■■ "Storage Migration + PowerShell + Windows 8 = Magic"
■■ "Doing an Advanced Storage Migration with Windows 8"
■■ "Doing an Advanced Storage Migration with Windows 8 in PowerShell"

Windows NIC Teaming
Windows NIC Teaming is the name for the new network adapter teaming functionality included in Windows Server 2012. Network adapter teaming is also known as load balancing and failover (LBFO) and enables multiple network adapters on a server to be grouped together into a team. This has two purposes:
■■ To help ensure availability by providing traffic failover in the event of a network component failure
■■ To enable aggregation of network bandwidth across multiple network adapters
Previously, implementing network adapter teaming required using third-party solutions from independent hardware vendors (IHVs). Beginning with Windows Server 2012, however, network adapter teaming is now an in-box solution that works across different NIC hardware types and manufacturers.

Windows NIC Teaming supports up to 32 network adapters in a team, in three modes:
■■ Static Teaming  Also called generic teaming and based on IEEE 802.3ad draft v1, this mode is typically supported by server-class Ethernet switches and requires manual configuration of the switch and the server to identify which links form the team.
■■ Switch Independent  This mode doesn't require that the team members connect to different switches; it merely makes that possible.
■■ LACP  Also called dynamic teaming and based on IEEE 802.1ax, this mode is supported by most enterprise-class switches and allows automatic creation of a team using the Link Aggregation Control Protocol (LACP), which dynamically identifies links between the server and a specific switch. To use this mode, you generally need to enable LACP manually on the port of the switch.

Configuring NIC teaming
NIC teaming can be enabled from Server Manager or by using Windows PowerShell. For example, to use Server Manager to enable NIC teaming, you can begin by right-clicking the server you want to configure and selecting Configure NIC Teaming:
In the NIC Teaming dialog box that opens, select the network adapters you want to team. Then right-click and select Add To New Team:

In the New Team dialog box, configure the teaming mode and other settings as desired:
Clicking OK completes the process and, if successful, the new team will be displayed in the Teams tile of the NIC Teaming dialog box:

To configure and manage NIC teaming using Windows PowerShell, use cmdlets such as New-NetLbfoTeam to add a new team or Get-NetLbfoTeam to display the properties of a team. The cmdlets for managing NIC teaming are defined in the Windows PowerShell module named NetLbfo, and as Figure 3-7 shows, you can use the Get-Command cmdlet to display all the cmdlets defined in this module.

FIGURE 3-7  Obtaining a list of cmdlets for configuring and managing NIC teaming.
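As a hedged example of the NetLbfo cmdlets in use, where the adapter names "NIC1" and "NIC2" are placeholders for the names reported by Get-NetAdapter on your server:

```powershell
# Create a two-member, switch-independent team
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Inspect the team and its member adapters
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```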
Learn more
For more information about NIC teaming in Windows Server 2012, see the following topics in the TechNet Library:
■■ "Network Adapter Teaming Technical Preview"
■■ "NIC Teaming Overview"
For additional information, download the white paper titled "Windows Server 2012 NIC Teaming (LBFO) Deployment and Management."

Chkdsk improvements
Today's businesses must be able to manage larger and larger amounts of data. At the same time, the capacity of hard disk drives has grown significantly, whereas the price of very large drives has continued to decline. This has posed problems for organizations that have tried to deploy multi-terabyte disk volumes in their environments because of the amount of time Chkdsk takes to analyze and recover from file system corruption when it occurs.

In earlier versions of Windows Server, the time taken to analyze a disk volume for potential corruption was proportional to the number of files on the volume. The result was that for server volumes containing hundreds of millions of files, it sometimes took many hours (or even days) for Chkdsk to complete its operations. The volume also had to be taken offline for Chkdsk to be run against it.

In Windows Server 2012, however, Chkdsk has been redesigned so that the analysis phase, which consumes most of the time it takes Chkdsk to run, now runs online as a background task. This means that a volume whose file system indicates there may be file corruption can remain online instead of needing to be taken offline for analysis. If analysis by Chkdsk determines that the file system corruption was only a transient event, no further action need be taken. If Chkdsk finds actual corruption of the file system, the administrator is notified in the management consoles and via events that the volume needs repair.
The suggested repair process may require that the volume be remounted, and the server may need to be rebooted to complete the repair process.

The result of this redesign of Chkdsk is that the time it takes to analyze and repair a corrupt large disk volume is reduced from hours (or days) to minutes or even seconds. Additional improvements to NTFS in Windows Server 2012 include enhanced self-healing, which automatically repairs many issues without the need to run Chkdsk. The overall result of such improvements is to ensure continuous availability even for servers having very large disk volumes with hundreds of millions of files stored on them.
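The online analysis and targeted repair described above are exposed in Windows PowerShell through the Repair-Volume cmdlet. A hedged sketch, assuming a data volume mounted as drive D:

```powershell
# Scan the volume online; any corruption found is logged for later repair
Repair-Volume -DriveLetter D -Scan

# Spot-fix only the logged corruption, taking the volume offline briefly
Repair-Volume -DriveLetter D -SpotFix
```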
Learn more
For more information about Chkdsk improvements in Windows Server 2012, see the topic "Multiterabyte Volumes" in the TechNet Library.

Easy conversion between installation options
Windows Server 2008 and Windows Server 2008 R2 offered an alternative installation option called Server Core that included only a subset of the server roles, features, and capabilities found in the full installation option. Server Core included only those services and features needed to support common infrastructure roles such as domain controllers, DNS servers, and DHCP servers. By eliminating unnecessary roles and features, and also most of the graphical user interface (GUI; the Server Core user interface presents only a command-line interface), the result is a minimal Windows Server installation that has a smaller disk footprint, has a smaller attack surface, and requires less servicing (fewer software updates) than the full installation.

A limitation of how installation options were implemented in Windows Server 2008 and Windows Server 2008 R2 is that you cannot switch an installation between the full and Server Core options. So if you have a DNS server with a full installation of Windows Server 2008 R2, the only way to change this into a DNS server with a Server Core installation is to reinstall the operating system on the machine.

Starting with Windows Server 2012, however, you can now switch between Server Core and GUI installations.
For example, if you have deployed a GUI installation of Windows Server 2012 and you want to remove the GUI management tools and desktop shell to convert it into a Server Core installation, you can do this easily by running the following Windows PowerShell command:

Uninstall-WindowsFeature Server-Gui-Mgmt-Infra -Restart

When you run this command, it first collects data for the system and then starts the removal process. Once the GUI management tools and desktop shell have been removed, the server restarts, and when you log on, you are presented with the bare-bones Server Core user interface.
The process can be reversed by running the following command to convert the Server Core installation back into a GUI installation:

Install-WindowsFeature Server-Gui-Mgmt-Infra,Server-Gui-Shell -Restart

Minimal Server Interface
In addition to the Server Core and GUI installation options, Windows Server 2012 can be configured in a third form called the Minimal Server Interface. This form is not available when you install Windows Server 2012, but you can configure it by using Server Manager or Windows PowerShell.
There are several reasons you may want to configure the Minimal Server Interface. First, it can function as a compatibility option for applications that do not yet support Microsoft's recommended application model but that can still gain some of the benefits of running Server Core. Second, administrators who are not yet ready to use remote command-line-based management can install the graphical management tools (the same ones they would install on a Windows client) alongside the Minimal Server Interface or the Server Graphical Shell.
The Minimal Server Interface is similar to the GUI installation, except that the following are not installed:
■ Desktop
■ Start screen
■ Windows Explorer
■ Windows Internet Explorer
However, the following management tools are available on the Minimal Server Interface:
■ Server Manager
■ Microsoft Management Console (MMC) and snap-ins
■ A subset of Control Panel
Benefits for organizations
A key benefit of the easy conversion between installation options in Windows Server 2012 is the added flexibility of being able to convert between the GUI and Server Core installation options. For example, you could deploy your servers with the GUI option to make them easier to configure. Then you could convert some of them to Server Core to reduce their footprint, enable greater consolidation ratios of VMs, and reduce your servicing overhead. You can also select the Minimal Server Interface for application compatibility needs, or as a compromise for administrators who are not yet ready to administer without a GUI.
Learn more
For more information about easy conversion between installation options in Windows Server 2012, see the following topics in the TechNet Library:
■ "Server Core and Full Server Integration Overview"
■ "Windows Server Installation Options"
■ "Windows Server 8: Server Applications and the Minimal Server Interface"
Also see the Server Core Blog on TechNet.
Managing servers without the Start menu
So you miss the Start menu, the good old Start menu? Well, if that's the case, you're doing it wrong. If you miss the Start menu, it's probably because you've been running a full Windows desktop on your server and logging on to the console of the server to do work. That's wrong on a few levels. You should not use the console, and unless there is a very compelling reason, you should not have a full Windows desktop on the server.
Sounds easy, and Microsoft has been telling us this for years. In reality, though, it's not that simple. Server Core does not run everything, and there are a lot of custom and third-party software packages that need a GUI to be configured. They may not
even support remote management. So this idea of running Server Core everywhere to reduce updates, decrease attack surface, and increase performance is great; it's just not always achievable.
Enter Windows Server 2012, which has gone a long way toward bringing us close to this ideal scenario. Windows Server 2012 introduces a new level of user interface that bridges the gap between Server Core and a full desktop, and it allows you to migrate from Server Core to a full desktop and back again.
Annoyed by not having a traditional Start menu? Guess what: you don't need it, and you will never use it. Here is what you should do instead.
First of all, install with a full server desktop and configure your drivers, hardware, etc. using the full GUI you are used to. When you are done, remove the GUI by running the Windows PowerShell command Remove-WindowsFeature User-Interfaces-Infra. This will take you to a Server Core configuration. You can now use your remote administration tools as you did in the past, as well as remote Windows PowerShell.
If you find that you need access to an MMC snap-in, or access to the entire set of Control Panel apps, you can raise the server one level by running Install-WindowsFeature Server-Gui-Mgmt-Infra. This gives you full GUI access, accessible from a command line. You can run MMC.exe and use any snap-in. You can run any Control Panel app. You just don't have Explorer.exe. This should be more than enough to do any advanced driver configuration (you have Device Manager) or to configure any third-party application. If you do need a full desktop, you can always add the User-Interfaces-Infra feature that you removed earlier.
Finally, Server Manager has gotten a complete overhaul.
There's more to this than can be discussed here, but it further reduces the need for the Start menu. Personally, once I learned how to navigate the new Server Manager, I found myself configuring all my servers with Server-Gui-Mgmt-Infra only, starting Server Manager, and doing all my traditional server management from that location only. The Tools menu gives you one-click access to all installed administrative tools.
This is not about getting "around" the lack of the traditional Start menu. It's all about learning to use the rich new tools that are there. Once you do, you'll forget that the Start menu ever existed.
Corey Hynes
Architect, holSystems
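A quick way to see which of the interface levels described above a server currently has is to query the installation state of the user-interface features from Windows PowerShell. This is a hedged sketch using the same feature names as the text:

```powershell
# Show the install state of the GUI management tools, the desktop shell,
# and the parent user-interface feature group
Get-WindowsFeature -Name Server-Gui-Mgmt-Infra, Server-Gui-Shell, User-Interfaces-Infra |
    Format-Table Name, InstallState -AutoSize
```

On a full GUI installation, all three report as installed; on Server Core, none do; the Minimal Server Interface shows only Server-Gui-Mgmt-Infra installed.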
Features On Demand
Installations of previous versions of Windows Server included binaries for all server roles and features, even if some of those roles and features were not installed on the server. For example, even if the DNS Server role was not installed on a Windows Server 2008 R2 installation, the system drive of the server still included the binaries needed to install that role, should it be needed later.
In Windows Server 2012, you can remove the binaries for roles or features that aren't needed for your installation. For example, if you won't be installing the DNS Server role on a particular server, you can remove the binaries for this role from the server's system drive. Being able to remove the binaries used to install roles and features allows you to reduce the footprint of your servers significantly.
Completely removing features
Binaries of features can be removed by using Windows PowerShell. For example, to completely remove a feature, including its binaries, from a Windows Server 2012 installation, use the Uninstall-WindowsFeature cmdlet.
If you later decide that you want to install a feature whose binaries you have removed from the installation, you can do so by using the Install-WindowsFeature cmdlet. When you use this cmdlet, you must specify a source where Windows Server 2012 installation files are located.
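The round trip can be sketched as follows, using the DNS Server role from the example above; the -Remove switch is what deletes the binaries from disk, and the WIM path and image index shown are assumptions to adjust for your own media:

```powershell
# Remove the DNS Server role and delete its binaries from the system drive
Uninstall-WindowsFeature -Name DNS -Remove

# Reinstall later, pointing at installation media as the source
# (path and image index 4 are illustrative)
Install-WindowsFeature -Name DNS -Source wim:D:\sources\install.wim:4
```
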
To do this, you can either include the -Source option to specify a path to a Windows Imaging (WIM) mount point, or you can leave out this option and let Windows use Windows Update as the source location.
Learn more
For more information about Features On Demand in Windows Server 2012, see the topic "Windows Server Installation Options" in the TechNet Library.
DHCP Server Failover
DHCP servers are a critical part of the network infrastructure of most organizations. Therefore, ensuring that a DHCP server is always available to assign IP addresses to hosts on every subnet is essential.
In previous versions of Windows Server, two approaches could be used to ensure the availability of DHCP servers. First, two DHCP servers could be clustered together using Failover Clustering so that the second server could take over the load should the first one fail. The problem with this approach, however, is that clusters often use shared storage, which can be a single point of failure for the cluster. Providing redundant storage is a solution, but it can add significant cost to this approach. Configuring a failover cluster is also not a trivial task.
The other approach is the split-scope approach, in which 70 to 80 percent of the addresses in each scope are assigned to the primary DHCP server, while the remaining 20 to 30 percent are assigned to the secondary DHCP server. This way, if a client can't reach the primary server to acquire an address, it can get one from the secondary server. This approach also has problems, however, because it does not provide for continuity of IP addressing, is prone to possible overlap of scopes due to incorrect manual configuration, and is unusable when the scope is already highly utilized.
The DHCP Server role in Windows Server 2012 solves these problems by providing a third approach to ensuring DHCP server availability. This approach is called DHCP failover, and it enables two DHCP servers to replicate lease information between them. That way, one of the DHCP servers can assume responsibility for providing addresses to all the clients on a subnet when the other DHCP server becomes unavailable.
Learn more
For more information about DHCP failover in Windows Server 2012, see the following topics in the TechNet Library:
■ "Dynamic Host Configuration Protocol (DHCP) overview"
■ "Step-by-Step: Configure DHCP for Failover"
You can also download the "Understand and Troubleshoot DHCP Failover in Windows Server '8' Beta" white paper.
Cost efficiency
Keeping costs in line is an important consideration for many organizations, and Windows Server 2012 includes new features and enhancements that can help relieve the pressure faced by IT budgets. Features like Hyper-V virtualization, discussed in the previous chapter, already enable businesses to reduce costs by creating private clouds and by virtualizing workloads, applications, and services.
And features such as in-box NIC teaming, described earlier in this chapter, can help reduce costs by eliminating the need to purchase costly, vendor-specific solutions.
The following sections highlight other features of the new platform that can help your organization. For example, Storage Spaces lets you store application data on inexpensive file servers with performance similar to what you've come to expect from expensive SAN solutions. Thin provisioning and trim allow just-in-time allocation of storage and let you reclaim storage when it is no longer needed, which enables organizations to use storage infrastructure in a more cost-efficient fashion. And the enhanced Network File System (NFS) functionality included in Windows Server 2012 lets you save money by running VMware ESX VMs that use Server for NFS as a data store instead of more expensive SAN technologies.
Storage Spaces
SANs are a traditional "heavy iron" technology often used for storing large amounts of data, but they tend to be very expensive to acquire and fairly complex to manage. A new feature of Windows Server 2012 called Storage Spaces is designed to change the storage task for enterprises by providing in-box storage virtualization that can use low-cost commodity storage devices.
Storage Spaces is designed to address a simple question: How can you pool commodity storage devices together so that you can provision storage as you need it? By combining this feature with the new scale-out file server and other capabilities of Windows Server 2012, the result is a highly available storage solution that has much of the power and flexibility of a SAN but is considerably cheaper and easier to manage.
Storage Spaces terminology
Storage Spaces virtualizes storage to create what are called storage pools. A storage pool is an aggregation of unallocated space on physical disks installed in, or connected to, servers. Storage pools are flexible and elastic, allowing you to add or remove disks from the pool as your demand for storage grows or shrinks.
Once you've created a storage pool using Storage Spaces, you can provision storage from the pool by creating virtual disks. A virtual disk behaves exactly like a physical disk except that it can span multiple physical disks within the storage pool. Virtual disks can host simple volumes or volumes with resiliency (mirroring or parity) to increase the reliability or performance of the disk. A virtual disk is sometimes called a LUN.
Configuring a storage pool
Configuring a storage pool using Storage Spaces requires that you have at least one unallocated physical disk available (a disk with no volumes on it). If you want to create a mirrored volume, you'll need at least two physical disks; a parity volume requires at least three physical disks.
Pools can consist of a mixture of disks of different types and sizes. Table 3-3 shows the different types of disks supported by Storage Spaces. These disks can be installed inside servers on your network or within just-a-bunch-of-disks (JBOD) enclosures.

TABLE 3-3  Types of Disks Supported by Storage Spaces

Type of drive    Stand-alone file servers    Clustered file servers
SATA             Supported
SCSI             Supported
iSCSI            Supported                   Supported
SAS              Supported                   Supported
USB              Supported
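Assuming a server with several poolable disks of the types listed above attached, creating a pool from Windows PowerShell might look like the following sketch; the pool name and the storage subsystem wildcard are illustrative:

```powershell
# Find physical disks that are eligible for pooling
# (unallocated disks with no volumes on them)
$disks = Get-PhysicalDisk -CanPool $true

# Aggregate them into a new pool on the Storage Spaces subsystem
New-StoragePool -FriendlyName Pool01 `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks
```

Disks added this way move out of the primordial pool described below and become available for provisioning virtual disks.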
You can use Server Manager or Windows PowerShell to configure your storage pools, virtual disks, and volumes. To create a new storage pool using Server Manager, select Storage Pools under File And Storage Services. The primordial pool contains the unallocated physical disks on the servers you are managing.
To create a new storage pool, click Tasks in the Storage Pools tile and select New Storage Pool. The New Storage Pool Wizard starts, and after specifying a name for your new pool, you can select which physical disks you want to include in it. In this example, we'll select both of the available Serial Attached SCSI (SAS) disks for the pool, with the first disk being used for storage and the second designated as a "hot spare" disk that Storage Spaces can bring online automatically if it needs to (for example, if another disk in the pool fails).
On completing the wizard, you have the option of creating a new virtual disk when the wizard closes.
The New Virtual Disk Wizard lets you provision storage from your new pool to create virtual disks that span one or more physical disks within the pool. After you have selected a pool and specified a name for your new virtual disk, you can choose whether to create a simple virtual disk or one with resiliency. Next, you select either fixed or thin as the provisioning type (thin provisioning is discussed later in this chapter).
You'll also need to specify the size of your new virtual disk. Once you've finished provisioning your new virtual disk, you can create volumes on it using the New Volume Wizard by selecting a server and a virtual disk and specifying the size, drive letter, and file system settings for the volume.
Once you've finished creating your storage pools, virtual disks, and volumes, you can manage them using the Storage Pools page of Server Manager.
Provisioning and managing storage using Windows PowerShell
Although the new Server Manager user interface in Windows Server 2012 provides a very convenient and intuitive workflow to provision and manage storage, interaction with Windows PowerShell is required to access many of the advanced features afforded by the new Storage Management application programming interface (API). For example, you can easily create a virtual disk in the user interface; however, the wizard only allows setting the following parameters:
■ Underlying storage pool name
■ Virtual disk name
■ Resiliency setting (Simple, Mirror, or Parity)
■ Provisioning type (Thin or Fixed)
■ Virtual disk size
In contrast, when creating a virtual disk via Windows PowerShell, you can specify additional parameters to tune both resiliency and performance:
■ Number of columns: The number of columns the virtual disk contains
■ Number of data copies: The number of complete copies of data that can be maintained
■ Disk interleave: The number of bytes forming a stripe
■ Physical disks to use: The specific disks to use in the virtual disk
For example, assume that I have an existing pool with the following attributes:
■ Friendly name: Pool01
■ Disks: nine 450-GB disks (each allocated as Data Store)
■ Pool capacity: 3.68 TB
If I then create a simple 200-GB virtual disk via the user interface named VDiskSimpleUI, the resulting virtual disk uses eight columns and maintains one copy of the data. But when creating the virtual disk via Windows PowerShell, I can force striping across all nine of the disks and optimize performance as follows:

New-VirtualDisk -StoragePoolFriendlyName Pool01 -ResiliencySettingName Simple -Size 200GB -FriendlyName VDiskSimplePS -ProvisioningType Fixed -NumberOfDataCopies 1 -NumberOfColumns 9

And creating a mirrored 200-GB virtual disk via the user interface named VDiskMirrorUI produces a virtual disk with four columns and two data copies.
But with Windows PowerShell, I can create a slightly different configuration, increasing the data protection (and also the disk footprint):

New-VirtualDisk -StoragePoolFriendlyName Pool01 -ResiliencySettingName Mirror -Size 200GB -FriendlyName VDiskMirrorPS -ProvisioningType Fixed -NumberOfDataCopies 3 -NumberOfColumns 3
The results and differences of these various permutations can easily be displayed via Windows PowerShell:

Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, NumberOfDataCopies, @{Expression={$_.Size / 1GB}; Label="Size(GB)"}, @{Expression={$_.FootprintOnPool / 1GB}; Label="PoolFootprint(GB)"} -AutoSize

Here is some output from running this command:

FriendlyName   ResiliencySettingName  NumberOfColumns  NumberOfDataCopies  Size(GB)  PoolFootprint(GB)
------------   ---------------------  ---------------  ------------------  --------  -----------------
VDiskSimpleUI  Simple                 8                1                   200       200
VDiskMirrorUI  Mirror                 4                2                   200       400
VDiskSimplePS  Simple                 9                1                   200.25    200.25
VDiskMirrorPS  Mirror                 3                3                   200.25    600.75

Some additional tips:
■ The number of columns multiplied by the number of data copies cannot exceed the number of disks in the underlying pool.
■ 256 MB of each physical disk is consumed when it is added to a pool.
■ Default resiliency settings:
  • Simple: Striping with no redundancy, using a default stripe size of 64 K
  • Mirror: Two-way mirroring with a 64-K default stripe size
  • Parity: Striping with parity, using a default column width of 3 (i.e., three disks per row, with two containing data and the other containing parity) and a default stripe size of 64 K
■ Although it is not enforced, it is recommended that pools with more than 24 disks use Manual allocation (as opposed to the auto-allocation default of Data Store).
■ Clustering tips:
  • Clustering virtual disks requires the underlying hardware to support persistent reservations.
  • Clustered Storage Spaces require fixed provisioning.
  • Removing a clustered storage pool from Failover Clustering will cause the underlying pool to be marked read-only.
■ Windows PowerShell links:
  • New-VirtualDisk
  • New-StoragePool
Adams
Senior Program Manager, Enterprise Engineering Center (EEC)
Learn more
For more information about Storage Spaces in Windows Server 2012, see the following topics in the TechNet Library:
■ "Storage Spaces Overview"
■ "Storage Management Overview"
■ "File and Storage Services overview"
You can also download the "Understand and Troubleshoot Storage Spaces in Windows Server '8' Beta" white paper.
Thin provisioning and trim
Thin provisioning is a new capability in Windows Server 2012 that integrates with supported storage technologies, including the built-in Storage Spaces feature, to allow just-in-time allocation of storage. Trim capability complements thin provisioning by enabling the reclaiming of provisioned storage that is no longer needed.
Thin provisioning is designed to address several issues with traditional models for provisioning storage used by enterprises:
■ The challenges associated with forecasting your organization's future storage needs make it hard to pre-allocate storage capacity to meet changing demand for storage.
■ Pre-allocated storage is often underused, which leads to inefficiencies and unnecessary expenditures.
■ Managing an enterprise storage system can often add considerable overhead to the overall cost of managing your IT infrastructure.
The goals of thin provisioning technologies are to address these different needs and deliver the following business benefits:
■ Maximizing how the organization's storage assets are used
■ Optimizing capital and operational expenditures for managing storage assets
■ Provisioning storage with high availability, scalability, performance, and resilience
Learn more
For more information about thin provisioning and trim in Windows Server 2012, see the topic "Thin Provisioning and Trim Storage Technical Overview" in the TechNet Library. You can also download a white paper titled "Thin Provisioning in Windows Server 8: Features and Management of LUN Provisioning," which is available from the Windows Hardware Development site on MSDN.
Server for NFS data store
Server for NFS has been enhanced in Windows Server 2012 to support continuous availability. This makes new scenarios possible, such as running VMware ESX VMs from file-based storage over the NFS protocol instead of using more expensive SAN storage. This improvement enables Windows Server 2012 to provide continuous availability for VMware VMs, making it easier for organizations to integrate their VMware infrastructure with the Windows platform.
Using Server for NFS as a data store for VMware VMs requires VMware ESX 4.1. You also need a management server with VMware vSphere Client version 4.1 installed. You can use Windows PowerShell to provision and configure shared files on your Server for NFS data store.
Learn more
For more information about the Server for NFS data store in Windows Server 2012, see the topic "Server for NFS Data Store" in the TechNet Library.
The most robust virtualization solution on the market
Competitive product analysis is a process that architects and engineers are often expected to participate in.
Whether it is for internal strategic decision making or external solution design, the objective remains consistent: to determine which products accomplish certain goals and include specific features for the least possible cost. Having certain features or overcoming certain business challenges can often make or break a product's chances of being part of the solution design win, and the outcome can therefore be critical to the survival of the product itself.
One of the most common competitive product analyses occurring in the last five years or so has been around server virtualization and the competing software vendors (mainly because this discussion scales from small businesses to very large organizations). As an architect, I can't tell you how many times I was pulled into customer discussions to talk about the comparisons between Hyper-V and VMware vSphere (and ultimately to tell them which one was better for their company). The result of the discussion, prior to today, has often leaned away from the Hyper-V solution. Customers wanted a more feature-rich solution that could scale easily into large-enterprise environments.
Clearly noticing the need to stay in the game (relating to my first point about being critical to product survival), Microsoft has equalized the game (and even greatly surpassed its competition in some cases) with the release of Windows Server 2012 and the included version of Hyper-V clustering. Features such as Live Migration and failover placement have been greatly enhanced, while components such as VM priorities (which allow granular control of VM importance in the environment), Storage Migration, and Hyper-V Replica have been added as game-changers in the virtualization world. All of these features, when used together, help to complete the "continuous availability" puzzle for your VMs.
Another important point to note is the significant rework that Microsoft has done on the management of the clustered environment. With clusters now capable of scaling to 64 nodes encompassing 4,000 VMs, a more streamlined management solution was needed. Improvements to the cluster manager include features such as Cluster-Aware Updating (CAU). CAU allows online and automatic updating of your Hyper-V host machines while automatically relocating your VMs back and forth.
This allows the administrator to fully update their Hyper-V environment without impacting any running services. (Note: this does not include guest operating systems.)
As you can clearly see from the content of the surrounding chapters, Hyper-V has become the most robust virtualization solution on the market. With integration already in place in the vast majority of IT organizations, there will be little reason (technical or financial) to consider any other virtualization solution in the near future.
Ted Archer
Consultant, Virtualization and Core Infrastructure
Management efficiency
Provisioning and managing servers efficiently is an essential ingredient of cloud computing. Whether you are a mid-sized organization implementing a dedicated private cloud, a large enterprise deploying a shared private cloud, or a hoster managing a multitenant public cloud, Windows Server 2012 provides both the platform and the tools for managing your environment.
The new Server Manager in Windows Server 2012 can simplify the job of managing multiple remote servers across your organization. Enhancements to Active Directory make your Active Directory environment much easier to deploy and manage than with previous versions of Windows Server. Domain controllers can now be safely cloned in order to save time when you need to deploy additional capacity, and restoring domain controller snapshots no longer disrupts your Active Directory environment. Automation is foundational to successful cloud computing, and version 3.0 of Windows PowerShell in Windows Server 2012 includes numerous enhancements that extend its capabilities and improve its usefulness in server administration.
The new Server Manager
Server Manager has been redesigned in Windows Server 2012 to facilitate managing multiple remote servers from a single administration console. Server Manager uses the remote management capabilities of Windows Management Instrumentation (WMI), Windows PowerShell, and the Distributed Component Object Model (DCOM) to connect to remote servers and manage them. By default, servers running Windows Server 2012 are enabled for remote management, making it easy to provision and configure remote servers using Server Manager or Windows PowerShell. For example, in previous versions of Windows Server, you needed either physical access to a server or a Remote Desktop connection to the server if you wanted to add or remove a role or feature on it. With Windows Server 2012, however, you can provision roles and features quickly and easily on remote servers from a central location by using Server Manager.
Server Manager is also included in the Remote Server Administration Tools (RSAT) for Windows 8, which enables administrators to manage their organization's server infrastructure from a client workstation running Windows 8.
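The same remote provisioning is available from Windows PowerShell. As a sketch under the assumption of a remote server named SRV02 (both the server name and the role chosen are illustrative), a role can be installed from a central console without ever logging on to the target:

```powershell
# Install the DNS Server role on a remote Windows Server 2012 machine
Install-WindowsFeature -Name DNS -ComputerName SRV02 -IncludeManagementTools

# Verify the role's install state on the remote server
Get-WindowsFeature -Name DNS -ComputerName SRV02
```
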
Server Manager can also be used to manage servers running Windows Server 2008 R2, Windows Server 2008, or Windows Server 2003, provided that remote management has been suitably configured on those systems.
Using Server Manager
The Dashboard section of Server Manager shows you the state of your servers at a glance. The dashboard uses a 10-minute polling cycle, so it's not a live monitoring solution like System Center Operations Manager, but it does give you a general picture of what's happening with each server role in your environment. For example, in the following screenshot, the tile for the DNS role indicates an alert in the Best Practices Analyzer results for the DNS Server role.
Clicking the alert brings up the details of the alert, indicating a possible problem with the configuration of one of the DNS servers in the environment.
The Local Server section of Server Manager lets you view and configure various settings on your local server. You can also perform various actions on the local server, or on other servers in the available pool, by using the Manage and Tools menus. For example, you can add new roles or features to a server by selecting Add Roles And Features from the Manage menu.
The Select Destination Server page of the new Add Roles And Features Wizard lets you select either a server from the server pool or an offline VHD as your destination server. The ability to provision roles and features directly to offline VHDs is a new feature of Windows Server 2012 that helps administrators deploy server workloads in virtualized data centers.
The All Servers section of Server Manager displays the pool of servers available for management. Right-clicking a server lets you perform different administrative tasks on that server.
To populate the server pool, right-click All Servers in Server Manager and select Add Servers from the shortcut menu. Doing this opens the Add Servers dialog box, which lets you search for servers in Active Directory, either by computer name or IP address, or by importing a text file containing a list of computer names or IP addresses. Once you've found the servers you want to add to the pool, you can double-click them to add them to the Selected list on the right.
Servers are often better managed if they are grouped together according to their function, location, or other characteristics. Server Manager lets you create custom groups of servers from your server pool so that you can manage them as a group instead of individually. To do this, select Create Server Group from the Manage menu at the top of Server Manager. Doing this opens the Create Server Group dialog box, which lets you specify a name for the new server group and select multiple servers from your server pool to add to the group.
Once you've added servers to your new group, you can select multiple servers in the group and perform actions on them, such as restarting them.
The Tools menu at the top of Server Manager can be used to start other management tools, such as MMC consoles. However, as the new Server Manager of the Windows Server platform evolves toward a true multiserver management experience, such single-server MMC consoles will likely become tightly integrated into Server Manager. With Windows Server 2012, such integration is already present for two roles: Remote Desktop Services and file and storage management. For example, by selecting File And Storage Services, you can manage the file servers, storage pools, volumes, shares, and iSCSI virtual disks in your environment.
Learn more
For more information about the new Server Manager, see the following topics in the TechNet Library:
■ "Manage multiple, remote servers with Server Manager"
■ "Remote, Multiserver Management: scenario overview"
■ "File and Storage Services Overview"
Simplified Active Directory administration

Active Directory is foundational to the IT infrastructure of most organizations today, and Windows Server 2012 includes new capabilities and enhancements that help you deploy and manage your Active Directory environment. Whether you have a traditional datacenter or are migrating to the cloud, the new features and functionality of Active Directory in Windows Server 2012 will make your job easier.

Deploying domain controllers
The process for deploying domain controllers is faster and more flexible in Windows Server 2012. The Dcpromo.exe wizard of previous versions of Windows Server has been replaced with a new Active Directory Domain Services Configuration Wizard that is built upon Windows PowerShell (see Figure 3-8). This redesign provides a number of benefits. For example, you can now install the AD DS server role binaries remotely using Server Manager or with the new AD DS Windows PowerShell cmdlets. You can also install the binaries on multiple servers at the same time. Adprep.exe has now been integrated into the Active Directory installation process to make it easier to prepare your existing Active Directory environment for upgrading to Windows Server 2012. And the Active Directory Domain Services Configuration Wizard performs validation to ensure that the necessary prerequisites have been met before promoting a server to a domain controller.

FIGURE 3-8  The Active Directory Domain Services Configuration Wizard replaces Dcpromo.exe and is built upon Windows PowerShell.
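Everything the wizard does can also be expressed as a short script using the ADDSDeployment cmdlets. A minimal sketch for standing up the first domain controller in a new forest (the domain name is an example, and Install-ADDSForest prompts for the Safe Mode administrator password):

```powershell
# Install the AD DS role binaries, then promote the server
# (run from an elevated Windows PowerShell session)
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName "contoso.com" -InstallDns
```

For an additional domain controller in an existing domain, Install-ADDSDomainController is the corresponding cmdlet.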
Of course, everything you can do using the Configuration Wizard can also be done directly using Windows PowerShell. Figure 3-9 lists the Windows PowerShell cmdlets available in the ADDSDeployment module. These cmdlets can be scripted to automate the deployment and configuration of domain controllers within your datacenter or across your private cloud.

FIGURE 3-9  The Windows PowerShell cmdlets available in the ADDSDeployment module.

Virtualizing domain controllers
In previous versions of Windows Server, virtualizing a domain controller by running it in a VM was risky. Because of how Active Directory replication works, reverting a virtualized domain controller to an earlier state by applying a snapshot could cause Active Directory replication to fail. Because snapshots are commonly used in Hyper-V environments for performing quick-and-dirty backups of VMs, accidentally applying a snapshot to a virtualized domain controller could easily wreck your Active Directory environment.

Windows Server 2012 prevents such situations from happening by including a mechanism that safeguards your Active Directory environment if a virtualized domain controller is rolled back in time by using a snapshot. Note that although this now means that snapshots can be taken and used with virtualized domain controllers, Microsoft still recommends that snapshots not be used for this purpose.

Cloning domain controllers
When your business grows, you may need to deploy additional domain controllers to meet the expanding needs of your organization. Being able to rapidly provision new domain controllers is important, particularly in cloud environments where elasticity is essential.
In Windows Server 2012, you can now safely deploy cloned virtual domain controllers instead of having to go through the time-consuming process of deploying a sysprepped server image, adding the AD DS role, and promoting and configuring the server as a domain controller. All you need to do is export the VM of an existing virtual domain controller or make a copy of its VHD/VHDX file, authorize the exported VM or copied virtual disk for cloning in Active Directory, and create an XML configuration file named DCCloneConfig.xml. Then, once the destination VM is deployed and has started, the cloned domain controller provisions itself as a new domain controller.
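The authorization and DCCloneConfig.xml steps can be sketched with the Active Directory module cmdlets. The computer names and IP addresses below are examples, and the last two commands run on the source domain controller:

```powershell
# Allow the source DC to be cloned
Add-ADGroupMember -Identity "Cloneable Domain Controllers" `
    -Members (Get-ADComputer "DC2")

# List installed applications that don't support cloning (should be empty)
Get-ADDCCloningExcludedApplicationList

# Generate DCCloneConfig.xml for the new clone
New-ADDCCloneConfigFile -CloneComputerName "DC3" -Static `
    -IPv4Address "" -IPv4SubnetMask "" `
    -IPv4DefaultGateway "" -IPv4DNSResolver ""
```

Once the configuration file is in place, you export (or copy) the source VM and deploy it; the clone promotes itself on first boot.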
  • 159. Management efficiency Chapter 3 149Cloning virtualized domain controllers like this can make it much easier for you to scaleout your Active Directory environment. For example, if you have a branch office that is rapidlygrowing and has an existing virtualized domain controller on site, you can simply clone thatdomain controller to support the growing needs of your branch office infrastructure.Another scenario where cloning virtualized domain controllers can be useful is helpingensure business continuity. For example, if a disaster happens and you lose some domain­controllers in your organization, you can restore the level of capacity needed quickly by­cloning more domain controllers.Other improvementsThe Active Directory Administrative Center (ADAC) was first introduced in Windows Server2008 R2 as a central management console for Active Directory administrators. ADAC is builton Windows PowerShell and has been enhanced in Windows Server 2012 to provide a richgraphical user interface for managing all aspects of your Active Directory environment(see Figure 3-10).FIGURE 3-10  The Active Directory Administrative Center in Windows Server 2012.
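Because ADAC sits on top of Windows PowerShell, every action taken in its user interface corresponds to an Active Directory module cmdlet. For example, creating an organizational unit boils down to a one-liner (the OU and domain names here are illustrative):

```powershell
# Create a protected OU at the domain root
New-ADOrganizationalUnit -Name "Marketing" -Path "DC=contoso,DC=com" `
    -ProtectedFromAccidentalDeletion $true
```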
  • 160. 150 Chapter 3 Highly available, easy-to-manage multi-server ­platformA number of improvements have been made to ADAC in Windows Server 2012 to make iteasier to manage your Active Directory infrastructure. For example:■■ The Active Directory Recycle Bin, first introduced in Windows Server 2008 R2, has beenenhanced in Windows Server 2012 with a new GUI to make it easier for you to find andrestore deleted objects.■■ Fine-grained password policies, also first introduced in Windows Server 2008 R2, havebeen enhanced in Windows Server 2012 with a new GUI as well, making it possible toview, sort, and manage all password policies in a given domain.■■ Windows PowerShell History Viewer helps you quickly create Windows PowerShellscripts to automate Active Directory administration tasks by viewing and utilizingthe Windows PowerShell commands underlying any actions performed using theuser ­interface of ADAC. For example, Figure 3-11 shows the Windows PowerShell­commands that were run when ADAC was used to create a new organizational unit forthe marketing department of Contoso.FIGURE 3-11  The Windows PowerShell History Viewer can provide you with commands you can use tocreate your own Windows PowerShell scripts for managing Active Directory.Learn moreFor more information about the new features and enhanced capabilities of Active Directory inWindows Server 2012, see the following topics in the TechNet Library:■■ “What’s new in Active Directory Domain Services (AD DS)” at
■■ “Easier to Manage and Deploy Active Directory: scenario overview”
■■ “Deploy Active Directory Domain Services (AD DS) in your Enterprise”
■■ “Active Directory Domain Services (AD DS) Virtualization”
■■ “Active Directory Administrative Center Enhancements”

You can also download the following guides from the Microsoft Download Center:
■■ “Understand and Troubleshoot AD DS Simplified Administration in Windows Server ‘8’ Beta”
■■ “Understand and Troubleshoot Virtualized Domain Controller (VDC) in Windows Server ‘8’ Beta”

For additional information on Active Directory improvements in Windows Server 2012, see the Ask the Directory Services Team blog.

Windows PowerShell 3.0

PowerShell has become the de facto platform for automating the administration of Windows-based environments. Built on top of the common language runtime (CLR) and the Microsoft .NET Framework, Windows PowerShell has brought a whole new paradigm to how computers running Windows are configured and managed in enterprise environments.

A new version 3.0 of Windows PowerShell is now included in Windows Server 2012. Windows PowerShell 3.0 is built upon the Windows Management Framework 3.0, which includes a new WMI provider model that reduces dependency on COM, a new API for performing standard Common Information Model (CIM) operations, and the capability of writing new Windows PowerShell cmdlets in native code. Windows Management Framework 3.0 also includes improvements that make WinRM connections more robust so they can support long-running tasks and be more resilient against transient network failures.

Windows PowerShell 3.0 includes many new features that bring added flexibility and power for managing cloud and multiserver environments. Many of these key new capabilities are discussed next.

New cmdlets
Windows Server 2012 includes hundreds of new Windows PowerShell cmdlets that help you manage almost every aspect of your private cloud environment.
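One quick way to get a feel for what is available on a given server is to group the installed cmdlets by module; a sketch:

```powershell
# Show the ten largest cmdlet modules on this server
Get-Command -CommandType Cmdlet |
    Group-Object ModuleName |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name
```

The output will differ from server to server depending on which roles and features are installed.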
Note that many cmdlets are only available when the appropriate server role or feature is installed. For a complete list of Windows PowerShell modules included with Windows Server 2012, see the TechNet Library.
Show-Command
Windows PowerShell 3.0 includes a new cmdlet called Show-Command that displays a GUI providing a simpler overview of any Windows PowerShell cmdlet. This capability can make it much easier to understand the syntax of a cmdlet, as opposed to using the Get-Help cmdlet. For example, if you want to understand the syntax of the Install-ADDSDomain cmdlet used to promote a server to a domain controller, you can type Show-Command Install-ADDSDomain in the Windows PowerShell console to open the dialog box shown in Figure 3-12.

FIGURE 3-12  Example of using the Show-Command cmdlet.

For more information on the capabilities of the Show-Command cmdlet, see the blog post titled “Running show-command for a cmdlet” on the Windows PowerShell blog.

Disconnected sessions
Windows PowerShell 3.0 now supports persistent user-managed sessions (PSSessions) that are not dependent upon the session in which they were created. By using the New-PSSession cmdlet, you can create and save a session on a remote server and then disconnect from the session. The Windows PowerShell commands in the session on the remote server will then continue to execute, even though you are no longer connected to the session. If desired, you can reconnect later to the session from the same or a different computer.

To work with disconnected sessions, you simply do the following:
1. Enable remoting.
2. Create a PSSession to the remote computer.
3. Invoke some Windows PowerShell commands on the remote computer.
4. Verify the completion of the commands on the remote computer.

Windows PowerShell workflows
Windows PowerShell workflows let you write workflows in Windows PowerShell or in Extensible Application Markup Language (XAML) and then run your workflows as if they were Windows PowerShell cmdlets. This enables Windows PowerShell to use the capabilities of the Windows Workflow Foundation to create long-running management activities that can be interrupted, suspended, restarted, repeated, and executed in parallel.

Windows PowerShell workflows are especially valuable in cloud computing environments because they help you automate administrative operations by building in repeatability and by increasing robustness and reliability. They also help increase your servers-to-administrators ratio by enabling a single administrator to execute a Windows PowerShell workflow that runs simultaneously on hundreds of servers.

For a detailed discussion of how to construct Windows PowerShell workflows using both the new Windows PowerShell 3.0 syntax and XAML, see the blog post titled “When Windows PowerShell Met Workflow” on the Windows PowerShell blog at archive/2012/03/17/when-windows-powershell-met-workflow.aspx.

Scheduled jobs
Windows PowerShell 2.0 introduced the concept of background jobs, which can be scheduled to run asynchronously in the background. Windows PowerShell 3.0 now includes cmdlets like Start-Job and Get-Job that can be used to manage these jobs. You can also easily schedule jobs using the Windows Task Scheduler.
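For example, the PSScheduledJob module in Windows PowerShell 3.0 lets you register a job together with a trigger; the job name, schedule, and script block below are illustrative:

```powershell
# Register a job that runs every night at 3:00 AM
$trigger = New-JobTrigger -Daily -At "3:00 AM"
Register-ScheduledJob -Name "NightlyProcessReport" -Trigger $trigger -ScriptBlock {
    Get-Process | Export-Csv "C:\Reports\processes.csv" -NoTypeInformation
}
# Completed instances later appear in Get-Job once the
# PSScheduledJob module is loaded in the session
```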
This means that you, as the administrator, can now have full control over when Windows PowerShell scripts execute in your environment.

For a detailed look at how you can create and manage background jobs in Windows PowerShell 3.0, see the blog post titled “Scheduling Background Jobs in Windows PowerShell 3.0” on the Windows PowerShell blog at archive/2012/03/19/scheduling-background-jobs-in-windows-powershell-3-0.aspx.

Windows PowerShell Web Access
Windows PowerShell Web Access lets you manage the servers in your private cloud from anywhere, at any time, by running Windows PowerShell commands within a web-based console. Windows PowerShell Web Access acts as a gateway to provide a web-based Windows PowerShell console that you can use to manage remote computers. This lets you run Windows PowerShell scripts and commands even on computers that don’t have Windows PowerShell installed. All your computer needs is an Internet connection and a web browser that supports JavaScript and accepts cookies.
To use PowerShell Web Access, begin by installing it using the Add Roles And Features Wizard, which you can start from Server Manager.

Installing Windows PowerShell Web Access also installs the .NET Framework 4.5 features and the Web Server (IIS) server role, if these are not already installed on the server. You can also install Windows PowerShell Web Access with Windows PowerShell by using the Install-WindowsFeature cmdlet.

Next, configure Windows PowerShell Web Access on your server. You can do this by running the Install-PswaWebApplication cmdlet. You’ll need to have already installed a server certificate on your server. If you are trying this in a test environment, however, you can use a self-signed test certificate, as shown here:
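A sketch of both steps from an elevated Windows PowerShell session; the -UseTestCertificate switch installs a self-signed certificate and is intended for test environments only:

```powershell
# Install the feature (pulls in IIS and .NET Framework 4.5 as needed)
Install-WindowsFeature WindowsPowerShellWebAccess -IncludeManagementTools

# Configure the web application, using a self-signed test certificate
Install-PswaWebApplication -UseTestCertificate
```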
Once you’ve configured Windows PowerShell Web Access, you need to grant users access explicitly by adding authorization rules. You can use the Add-PswaAuthorizationRule cmdlet to do this.

Administrators can then use Windows PowerShell Web Access to run Windows PowerShell scripts and commands remotely on servers that they have been authorized to manage by accessing the gateway from a remote computer. They can do this by opening the URL https://<server_name>/pswa in a web browser.

For more information on setting up and using PowerShell Web Access, see the topic titled “Deploy Windows PowerShell Web Access” in the TechNet Library.
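The Add-PswaAuthorizationRule step described above can look like this; the user, computer, and session configuration names are examples:

```powershell
# Allow one user to manage one server through the default session configuration
Add-PswaAuthorizationRule -UserName "CONTOSO\HelpdeskUser" `
    -ComputerName "SRV01.contoso.com" `
    -ConfigurationName "Microsoft.PowerShell"
```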
Managing non-Windows systems and devices
You can now use Windows PowerShell cmdlets to manage any standards-compliant, CIM-capable systems, which means you can manage non-Windows servers and even hardware devices using Windows PowerShell just as you manage Windows. For a detailed overview of this capability, see the blog post titled “Standards-based Management in Windows Server ‘8’” on the Windows Server Blog.

Other improvements
Some other improvements in Windows PowerShell 3.0 include the following:
■■ Delegated administration using RunAs allows commands to be executed using a delegated set of credentials so that users having limited permissions can run critical jobs.
■■ Improved cmdlet discovery and automatic module loading make it easier to find and run any cmdlets installed on your computer.
■■ Show-Command, a cmdlet and ISE add-on, helps you quickly find the right cmdlet, view its parameters in a dialog box, and run the command.
■■ Simplified language syntax makes Windows PowerShell commands and scripts seem a lot less like code and feel more like natural language. For example, the construct $_. is no longer necessary.
■■ The Get-ChildItem cmdlet has new parameters, making it easier to search for files with particular attributes.
■■ Windows PowerShell now automatically loads a module when a cmdlet is run from that module.
■■ The Windows PowerShell 3.0 Integrated Scripting Environment (ISE) includes new features that make it easier to code in Windows PowerShell. Examples of these features include IntelliSense, brace matching, syntax coloring, a Most Recently Used list, snippets, and the ISE Script Explorer.
■■ With Windows PowerShell 3.0, you are no longer restricted to the help content that shipped with Windows Server 2012.
Help is now published on the web as downloadable CAB files.

Learn more
For more information about Windows PowerShell 3.0 in Windows Server 2012, see the following topics in the TechNet Library:
■■ “What’s New in Windows PowerShell 3.0” at library/hh857339.aspx.
  • 167. Up next Chapter 3 157■■ “Deploy Windows PowerShell Web Access” at■■ “Windows PowerShell Support for Windows Server ‘8’ Beta” athttp://technet.­ additional information on Windows PowerShell 3.0, see the Windows PowerShell Blogat nextThe next chapter will examine how you can use Windows Server 2012 to deploy web­applications on premises and in the cloud so that they are flexible, scalable, and elastic.
  • 168. 159C H A P T E R 4Deploy web applications onpremises and in the cloud■ Scalable and elastic web platform  159■ Support for open standards  186■ Up next  190This chapter examines some of the new features and capabilities of version 8 of theMicrosoft ­Internet Information Services (IIS) web platform in Windows Server 2012.IIS 8 ­provides the foundation for hosting web applications, both on premises and incloud environments, and provides a scalable and elastic platform that fully supportsopen industry standards.Scalable and elastic web platformWeb hosting platforms like IIS are the foundation for cloud computing, and they needboth scalability and elasticity to be effective. A platform has scalability if it allows­additional resources such as processing power, memory, or storage to be provisionedto meet increasing demand. For example, if users of applications running on your webserver farm are complaining about delays and slow performance, you may need to addmore servers to your farm to scale outward. Or you might upgrade your existing serversby adding more memory to scale them upward. Elasticity, on the other hand, means­allowing such additional resources to be provisioned automatically on demand.Whether you are an enterprise hosting line of business (LoB) applications or a cloudhosting provider managing a multi-tenant public cloud, IIS 8 in Windows Server 2012can enhance both the scalability and elasticity of your hosting environment. IIS 8­provides increased scale through improved Secure Sockets Layer (SSL) scalability, ­better­manageability via centralized SSL certificate support, Non-Uniform Memory Access(NUMA)–aware scalability to provide greater performance on cutting-edge hardware,and other new features and enhancements.
NUMA-aware scalability
High-end server hardware is rapidly evolving. Powerful servers that are too expensive today for many smaller businesses to acquire will soon be commonplace.

NUMA, which until recently was available only on high-end server hardware, will probably be a standard feature of commodity servers within the next two years. NUMA was designed to overcome the scalability limits of the traditional symmetric multi-processing (SMP) architecture, where all memory access happens on the same shared memory bus. SMP works well when you have a small number of CPUs, but it doesn’t when you have dozens of them competing for access to the shared bus. NUMA alleviates such bottlenecks by limiting how many CPUs can be on any one memory bus and connecting them with a high-speed interconnection.

Previous chapters of this book have discussed two other ways that Windows Server 2012 can take advantage of the increased scalability possible for NUMA-capable hardware: the NUMA-aware placement of virtual machines (VMs) on a failover cluster, and Virtual NUMA, by which the guest operating system of VMs can take advantage of the performance optimizations of an underlying NUMA-capable host machine. The NUMA-aware scalability of IIS 8 means that web application servers running on Windows Server 2012 can now experience near-optimal out-of-the-box performance on NUMA hardware.

Understanding NUMA-aware scalability
A significant percentage of recent server hardware has a NUMA architecture. These machines use multiple bus systems, one for each socket. Each socket has multiple CPUs and its own memory. A socket with the attached memory and I/O system comprises a NUMA node. Accessing data that is located in a different NUMA node is more expensive than accessing memory on the local node.
When we testedIIS 7.5 on NUMA hardware, we noticed that an increasing number of CPU cores didnot result in increased performance beyond a certain number of cores. In fact, theperformance actually degraded for certain scenarios. This was happening ­becausethe process scheduling is not NUMA-aware, and because of that, the cost of­memory synchronization on NUMA hardware outweighed the benefits of additionalcores. The goal behind the NUMA-Aware Scalability feature is to ensure that IIS 8can take advantage of modern NUMA hardware and provide optimal performanceon servers with a high number of CPU cores.To get the best performance on NUMA hardware for a web workload, a ­HypertextTransfer Protocol (HTTP) request packet should traverse through the fastestI/O path to the CPU. This also means that the packet should be served by a CPUsocket, which is the same I/O hub as the network interface card (NIC) receiving thepacket. This configuration is very specific to hardware architecture, and there is no­programmatic way to know which NIC and sockets are on the same I/O hub.
  • 170. Scalable and elastic web platform Chapter 4 161One of the design goals of this feature is to provide near-optimal settings outof the box without much user configuration. Understanding the finer details ofNUMA ­hardware (for example, the hardware schematic, NIC, and CPU layout) and­configuring it correctly can be pretty difficult and time consuming for average­users. So IIS 8 tries its best to configure all these settings automatically.Automatic configuration is convenient, but it can’t beat optimally tuned ­hardwareperformance. To enable best performance, advanced users can affinitize an IISworker process to most optimal NUMA core(s). This can be done by manuallyconfiguring the smpProcessorAffinityMask attribute in the IIS configuration. Thisprovides something called “hard affinity.” When this configuration is used, theapplication pools are hard-affinitized, meaning that there is no spillover to otherNUMA nodes. More explicitly, the threads cannot be executed by other cores on thesystem, regardless of whether other cores have extra CPU cycles or not.For average users, Windows and IIS make the best attempt at offering­automatic configurations that should yield the best performance. For automatic­configuration, IIS uses something called “soft affinity.” In soft affinity, when a­process is affinitized to a core, the affinitized core is identified as the “preferredcore.” When a thread is about to be scheduled to be executed, the preferred coreis considered first. ­However, depending on the load and the availability of othercores on the system, the thread may be scheduled on other cores on the ­system.In lab tests, it was observed that soft affinity is more forgiving in the case of­misconfiguration compared to hard affinity.When a system has multiple NUMA nodes, Windows uses a simple round-robinalgorithm to assign processes between NUMA nodes to make sure that loads getdistributed equally across nodes. 
This does not work best for IIS workloads because they are usually memory-constrained. IIS is aware of the memory consumption by each NUMA node, so IIS 8.0 enables another scheduling algorithm for worker processes started by the Windows Process Activation Service (WAS), which schedules the processes on the node with the most available memory. This helps minimize access to memory on a remote NUMA node. This capability is called Most Available Memory, and it is the default process scheduling algorithm on NUMA hardware for automatically picking the optimal NUMA node for a process.

Process scheduling and performance also depend on how the IIS workload has been partitioned. As explained next, IIS supports two ways of partitioning the workload.

Run multiple worker processes in one application pool (that is, a web garden)
If you are using this mode, by default, the application pool is configured to run one worker process. For maximum performance, you should consider running the same number of worker processes as there are NUMA nodes, so that there is 1:1 affinity
between the worker processes and NUMA nodes. This can be done by setting the Maximum Worker Processes application pool setting to 0. With this setting, IIS determines how many NUMA nodes are available on the hardware and starts the same number of worker processes.

Run multiple application pools in a single workload/site
In this configuration, the workload/site is divided into multiple application pools. For example, the site may contain several applications that are configured to run in separate application pools. This configuration effectively results in running multiple IIS worker processes for the workload/site, and IIS intelligently distributes and affinitizes the processes for maximum performance.

Harsh Mittal, Senior Program Manager
Eok Kim, Software Design Engineer
Aniello Scotto Di Marco, Software Design Engineer in Test
Microsoft Internet Information Services Team

How NUMA-aware scalability works
NUMA-aware scalability works by intelligently affinitizing worker processes to NUMA nodes. For example, let’s say that you have a large enterprise web application that you want to deploy on an IIS 8 web garden. A web garden is an application pool that uses more than one worker process. The number of worker processes used by an application pool can be configured in the Advanced Settings dialog box of an application pool, and as Figure 4-1 shows, the out-of-the-box configuration for IIS is to assign one worker process to each application pool.

FIGURE 4-1  Configuring a web garden on IIS 8.
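The Maximum Worker Processes setting shown in Figure 4-1 can also be changed from Windows PowerShell; a sketch using the WebAdministration module (the pool name is an example):

```powershell
Import-Module WebAdministration

# 0 = start one worker process per NUMA node on NUMA-capable hardware
Set-ItemProperty "IIS:\AppPools\ContosoPool" `
    -Name processModel.maxProcesses -Value 0
```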
  • 172. Scalable and elastic web platform Chapter 4 163By increasing the Maximum Worker Processes setting over its default value of 1, youchange the website associated with your application into a web garden. On ­NUMA-awarehardware, the result is that IIS will try to assign each worker process in the web ­garden to adifferent NUMA node. This manual affinity approach allows IIS 8 to support ­NUMA-capablesystems with more than 64 logical cores. You can also use this approach on NUMA-capablesystems with fewer than 64 logical cores if you want to try and ­custom-tune your workload.On NUMA-capable systems with fewer than 64 logical cores, however, you can simplyset Maximum Worker Processes to 0, in which case IIS will start as many worker processes asthere are NUMA nodes on the system to achieve optimal performance. You might use thisapproach, for example, if you are a multi-tenant cloud hosting provider.Benefits of NUMA-aware scalabilityInternal testing by Microsoft has demonstrated the benefits that enterprises and cloud ­hostingproviders can gain from implementing IIS 8 in their datacenters. For example, in a series of testsusing the default IIS configuration of one worker process per application pool, the numberof requests per second that could be handled by a web application ­actually ­decreased byabout 20 percent as one goes from 32 to 64 cores on systems that are not NUMA-capable­because of increased contention for the shared memory bus on such systems. In similar tests on­NUMA-capable systems, however, the number of requests per second that could be handledincreased by more than 50 percent as one goes from 32 to 64 cores. 
Such testing confirms the increased scalability that IIS 8 provides through its NUMA-aware capabilities.

Learn more
For more information on NUMA-aware scalability in IIS 8 on Windows Server 2012, see the topic “Web Server (IIS) overview” in the TechNet Library. For instructions on how to implement NUMA-aware scalability on IIS 8, see the article titled “IIS 8.0 Multicore Scaling on NUMA Hardware” on IIS.NET.

Server Name Indication
In previous versions of IIS, you could use host headers to support hosting multiple HTTP websites using only a single shared IP address. But if you wanted these websites to use Hypertext Transfer Protocol Secure (HTTPS), then you had a problem, because you couldn’t use host headers. The reason is that host headers are defined at the application level of the networking stack, so when an incoming HTTPS request containing a host header comes to a web server hosting multiple SSL-encrypted websites, the server can’t read the host header unless it decrypts the request header first. To decrypt the request header, the server needs to use one of the SSL certificates assigned to the server. Now, typically you have one certificate for each HTTPS site on the server, but which certificate should the server use to decrypt the
header? The one specified by the host header in the incoming request. But the request is encrypted, so you basically have a chicken-and-egg problem.

The recommended solution in previous versions of IIS was to assign multiple IP addresses to your web server and bind a different IP address to each HTTPS site. By doing this, host headers are no longer needed, and IIS can determine which SSL certificate to use to decrypt an incoming HTTPS request. If your web server hosts hundreds (or even thousands) of different HTTPS websites, however, this means that you’ll need hundreds or thousands of different IP addresses assigned to the network adapter of your server. That’s a lot of management overhead, plus you may not have that many IP addresses available.

IIS 8 in Windows Server 2012 solves this problem by providing support for Server Name Indication (SNI), which allows a virtual domain name (another name for a host name) to be used to identify the network endpoint of an SSL/TLS connection. The result is that IIS can now host multiple HTTPS websites, each with its own SSL certificate, bound to the same shared IP address. SNI therefore provides the key benefit of increased scalability for web servers hosting multiple SSL sites, and it can help cloud hosting providers better conserve the dwindling resources of their pool of available IP addresses.

Both the server and client need to support SNI, and most newer browsers do. Note, however, that Microsoft Internet Explorer 6 doesn’t support it.

Configuring SNI
SNI can be configured on a per-site basis by editing the bindings for each HTTPS site from the IIS Manager console. Simply select the Require Server Name Indication check box as shown in Figure 4-2 and type a host name for the site, while leaving the IP Address setting as All Unassigned to use the single shared IP address on the server.

FIGURE 4-2  Configuring SNI on an SSL site.
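The binding shown in Figure 4-2 can also be created from Windows PowerShell; the site and host names are examples, and an SslFlags value of 1 requests SNI:

```powershell
Import-Module WebAdministration

# Add an HTTPS binding that uses SNI instead of a dedicated IP address
New-WebBinding -Name "ContosoSite" -Protocol https -Port 443 `
    -HostHeader "www.contoso.com" -SslFlags 1
```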
SSL configuration and its order of applicability
SSL configuration and IIS network binding configuration are actually two separate and completely disconnected configurations on Windows. So when working on SNI, as well as Centralized SSL Certificate Support, new SSL configurations have been introduced.

At a high level, there are four SSL binding types, and they are applied in the following order:

Order 1 — IP:Port
■■ An exact IP:port SSL configuration is found.
■■ MY/LM or MY/Web Hosting certificate stores are used.

Order 2 — Hostname:Port
■■ An exact hostname:port SSL configuration is found.
■■ This is the SNI configuration and is applied only if the SSL connection is initiated by an SNI-capable client.
■■ MY/LM or MY/Web Hosting certificate stores are used.

Order 3 — CCS:Port
■■ This is the Centralized SSL Certificate Support (CCS) configuration.
■■ In this configuration, a CCS provider is used to locate the SSL certificate. By default, IIS provides a file-based CCS provider.

Order 4 — [::]:Port
■■ IPv6 wildcard match; the connection must be IPv6.

Order 5
■■ IPv4 wildcard match; the connection can be either IPv4 or IPv6.

For example, consider the following configuration in IIS:

<site name="mySNIsite" id="1" serverAutoStart="true">
  <application path="/" applicationPool="snidemocert0">
    <virtualDirectory path="/" physicalPath="C:\inetpub\wwwroot" />
  </application>
  <bindings>
    <binding protocol="https"
      bindingInformation="" />
  </bindings>
</site>
166  Chapter 4  Deploy web applications on premises and in the cloud

With the following SSL configuration, this code is used:

IP:port        Hash : 2114e944c1e63dcdcd033e5d3fdb832ba423a52e
Hostname:port  Hash : 0e62ac0f4deb8d6d78ac93a3088157e624ee540b

In this example, the first SSL certificate (as referenced by 2114e944c1e63dcdcd033e5d3fdb832ba423a52e) would be used because the IP:Port configuration precedes the Hostname:Port configuration.

Won Yoo, Principal Program Manager
Jenny Lawrance, Software Design Engineer II
Eok Kim, Software Design Engineer II
Aniello Scotto Di Marco, Software Design Engineer in Test II
Microsoft Internet Information Services Team

Learn more

For more information on SNI in IIS 8 on Windows Server 2012, see the following topics in the TechNet library:

■ "Web Server (IIS) Overview"
■ "What's new in TLS/SSL (Schannel SSP)"

For instructions on how to configure SNI in IIS 8, see the article titled "IIS 8.0 Server Name Indication (SNI): SSL Scalability" on IIS.NET.

Centralized SSL certificate support

Cloud hosting providers that need to host multiple HTTPS websites on each server in their web farms can also benefit from other SSL-related improvements in IIS 8. These improvements help make the IIS platform more scalable and manageable for hosting secure websites.

Managing SSL certificates on servers in web farms running earlier versions of IIS was time-consuming because the certificates had to be imported into every server in the farm. This made scaling out your farm by deploying additional servers a difficult chore. In addition, replicating certificates across servers in a farm was complicated by the need to manually ensure that certificate versions were in sync.

IIS 8 now makes managing SSL certificates on servers in web farms much easier by introducing a new central certificate store that lets you store all the certificates for your web servers in a file share on the network instead of in the certificate store of each server.
In addition to enhanced SSL manageability, IIS 8 includes significant improvements in the area of SSL scalability. For example, in previous versions of IIS, the certificate for an HTTPS website is loaded into memory (a process that could take considerable time) when the first client accesses the site, and the certificate then remains in memory indefinitely. Hosting only a few SSL sites on an IIS server, therefore, could lead to large amounts of memory being wasted for secure sites that were rarely accessed.

In IIS 8, however, once a certificate is loaded into memory, it can now be unloaded automatically after the secure site has been idle for a configurable amount of time. In addition, certificates now load into memory almost instantaneously, which eliminates the delay often experienced by clients accessing secure sites for the first time in earlier versions of IIS. (Only the certificates for HTTPS requests are loaded, instead of all the certificates.) This change means that fewer certificates are kept in memory, which means that more memory is available on the server for other uses, such as running worker processes.

These scalability and manageability improvements mean that instead of hosting fewer than 500 secure sites on a single server, you can now host more than 10,000 SSL sites on one IIS 8 server. And as the next section discusses, configuring a central store for SSL certificates also increases the elasticity of your web farms.

Configuring a central store

To configure IIS to use a central store for storing SSL certificates, you first need to add the Centralized SSL Certificate Support feature. You can do this by starting the Add Roles And Features Wizard from Server Manager:
Once this feature has been enabled on your server, opening IIS Manager will show a Centralized Certificates node in the Management section of your server's configuration settings:

Selecting the Centralized Certificates node and clicking the Open Feature item in the Actions pane displays a message saying that a central certificates location has not yet been set:
Clicking the Edit Feature Settings item in the Actions pane opens a dialog box that lets you enable this feature and configure the path and credentials for the shared folder on the network where SSL certificates should be stored:

Note that the certificate password is necessary when you have created PFX files with a password that protects the private key. In addition, all your PFX files in the shared certificate store must use the same password; you cannot have a different password for each PFX file.

You can then group your SSL certificates in the Centralized Certificates pane by Expiration Date or Issued By, to manage them more easily:
Once you've copied your SSL certificates to the central store, you can configure SSL websites to use the central store when you add them in IIS Manager:

Note that you don't need to select your certificate by name when you add a new SSL site in IIS Manager. If you had to do this for each new secure site and you had hundreds or thousands of certificates in your store, configuring SSL sites would become too difficult. Instead, you simply make sure that the name of the certificate matches the host header name for the secure site that uses it. This dynamic configuration of certificates for SSL sites means that adding an SSL central store to your web farms makes your farms more elastic.

CCS and private key file naming convention

CCS is based on a provider model, so it is definitely possible to use this feature with other CCS providers. Out of the box, IIS ships a file-server-based provider with a specific naming convention to locate the corresponding SSL certificate on a file system.

The naming convention, loosely, is "<subject name of a certificate>.pfx," but how does the IIS provider deal with wildcard certificates and certificates with multiple subject names? Let's consider the following three cases.
Case 1: Certificate with one subject name

This is simple. If the subject name is <domain>, then the IIS provider will simply look for <domain>.pfx.

Case 2: Wildcard certificate

The IIS provider uses the underscore character (_) as a special character to indicate that it is a wildcard certificate. So, if the subject name in the SSL certificate is *.<domain>, the administrator should name the file _.<domain>.pfx.

It should be noted that the IIS provider will first try to look for an SSL certificate with a file name that exactly matches the domain name of the destination site. For example, if the destination site is <host>.<domain>, the IIS provider first tries to locate <host>.<domain>.pfx. If that is unsuccessful, then it tries to locate _.<domain>.pfx.

Case 3: Certificate with multiple subject names

In this case, the administrator should name the file as many times as there are subject names. For example, a certificate may have been issued for two different host names. Although the files are exactly the same, there should be two .pfx files, one named for each subject name.

Finally, it is easy enough to see the relationship between SNI and CCS, especially when it comes to how CCS uses the naming convention based on the host name. However, it is important to note that CCS does not have a hard dependency on SNI. If the administrator wishes to use CCS without relying on SNI, the secure site must be configured using a dedicated IP address, but the same naming convention can be used.

For example, consider the following configuration in IIS:

<site name="mySNIsite" id="1" serverAutoStart="true">
  <application path="/" applicationPool="snidemocert0">
    <virtualDirectory path="/" physicalPath="C:\inetpub\wwwroot" />
  </application>
  <bindings>
    <binding protocol="https" bindingInformation="" />
  </bindings>
</site>

With the following SSL configuration, this code is used:

Central Certificate Store : 443
Certificate Hash          : (null)
In this case, if the client is SNI-capable, then the host name comes from the client as part of the SSL connection initiation. If the client is not SNI-capable, then IIS will look up the corresponding host name based on the IP address that the client used to connect to the server. This is why the IIS configuration has both the IP address and the host name in this example.

Won Yoo, Principal Program Manager
Eok Kim, Software Design Engineer II
Aniello Scotto Di Marco, Software Design Engineer in Test II
Microsoft Internet Information Services Team

Learn more

For more information on centralized SSL certificate support in IIS 8 on Windows Server 2012, see the topic "Centralized Certificates" in the TechNet library. For instructions on how to configure centralized SSL certificate support in IIS 8, see the article titled "IIS 8.0 Centralized SSL Certificate Support: SSL Scalability and Manageability" on IIS.NET.

IIS CPU throttling

Managing CPU resources on farms of web servers in a multi-tenant shared hosting environment can be challenging. When you are hosting websites and applications from many different customers, each of them wants to get its fair share of resources. It's clearly undesirable when one customer's site consumes so much CPU that other customers' sites are starved of the resources they need to process client requests.

IIS CPU throttling is designed to prevent one website from hogging all the processing resources on the web server. Previous versions of IIS included a rudimentary form of CPU throttling that simply turned off a site once the CPU resources consumed by the site reached a certain threshold, by killing the worker processes associated with the site. Of course, this had the undesirable effect of temporarily preventing clients from accessing the site.
As a result, web administrators sometimes used Windows System Resource Manager (WSRM) with IIS to control the allocation of processor and memory resources among multiple sites based on business priorities.

CPU throttling has been completely redesigned in IIS 8 to provide real CPU throttling instead of just on/off switching. Now you can configure an application pool to throttle CPU usage so that it cannot consume more CPU processing than a user-specified threshold, and the Windows kernel will make sure that the worker process and all child processes stay
below that level. Alternatively, you can configure IIS to throttle an application pool only when the system is under load, which allows your application pool to consume more resources than your specified level when the system is idle, because the Windows kernel will throttle the worker process and all child processes only when the system comes under load.

Configuring CPU throttling

CPU throttling can be configured in IIS 8 at the application pool level. To do this, open the Advanced Settings dialog box for your application pool in IIS Manager and configure the settings in the CPU section (see Figure 4-3).

FIGURE 4-3  Configuring CPU throttling for an application pool.

You can also configure a default CPU throttling value for all application pools on the server by clicking Set Application Pool Defaults in the Actions pane when the Application Pools node is selected in IIS Manager.
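The difference between the two throttling behaviors described above can be modeled as a simple decision: always-on throttling enforces the limit unconditionally, while load-sensitive throttling enforces it only when other processes are competing for CPU. This is a conceptual Python sketch using the option names Throttle and ThrottleUnderLoad; the real enforcement happens inside the Windows kernel, not in code like this:

```python
# Conceptual model of the two CPU throttling behaviors in IIS 8.

def allowed_cpu(action, limit_percent, system_under_load):
    """Return the CPU ceiling (in percent) applied to the worker process."""
    if action == "Throttle":
        return limit_percent                # always capped at the configured limit
    if action == "ThrottleUnderLoad":
        # Capped only when there is contention; otherwise free to use idle CPU.
        return limit_percent if system_under_load else 100
    return 100                              # no throttling configured

print(allowed_cpu("Throttle", 20, system_under_load=False))           # 20
print(allowed_cpu("ThrottleUnderLoad", 20, system_under_load=False))  # 100
print(allowed_cpu("ThrottleUnderLoad", 20, system_under_load=True))   # 20
```

The ThrottleUnderLoad behavior is the more elastic choice for shared hosting: a tenant can burst on an idle server, yet cannot starve other tenants once the server is busy.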
CPU throttling configuration

CPU throttling has been included in prior versions of IIS, but for IIS 8.0, it has received a major reworking under the hood.

In earlier versions of IIS, a polling mechanism was used to check the CPU usage periodically and take action if it was above the configured threshold for a long enough time. The problem with this approach is that CPU usage wasn't truly limited: it could increase far beyond the configured limit and remain high for a period of time before the polling mechanism noticed. When the CPU was determined to be above the threshold, the only "corrective" action available was to kill the IIS worker process (W3wp.exe). When the process was killed, IIS also prevented a new process from being started for the offending application for a period of time so that it would not immediately come back and take over the CPU again. Any requests to the application during that time would fail, resulting in a poor user experience.

For IIS 8.0, we worked with the Windows Kernel team to implement true throttling of CPU usage. In place of the old polling design, the kernel will now ensure that CPU usage stays at the configured level. With this change, we no longer need to kill the W3wp process to halt an offending application, so the application stays active and responsive to user requests even when it is being throttled.

There are two new options for how CPU throttling works in IIS 8.0. The Throttle configuration option will keep the CPU near the configured limit at all times. The ThrottleUnderLoad configuration option will keep the CPU near the configured limit when there is contention for CPU resources, but it will let it consume more CPU if the server would otherwise be idle.
In this model, once other processes need additional CPU resources, the IIS worker process is throttled to ensure that the other processes get the resources they need.

Shaun Eagan, Senior Program Manager
Eok Kim, Software Design Engineer II
Aniello Scotto Di Marco, Software Design Engineer in Test II
Ruslan Yakushev, Software Design Engineer II
Microsoft Internet Information Services Team

Learn more

For more information on CPU throttling in IIS 8, see the topic "CPU Throttling: IIS 7 vs IIS 8" in Shaun Eagan's blog on IIS.NET. For instructions on how to configure CPU throttling in IIS 8, see the article titled "IIS 8.0 CPU Throttling: Sand-boxing Sites and Applications" on IIS.NET.
Application Initialization

Nothing frustrates users more than trying to open a website in their web browser and then waiting for the site to respond. With previous versions of IIS, the delay that occurred when a web application was first accessed happened because the application needed to be loaded into memory before IIS could process the user's request and return a response. With complex Microsoft ASP.NET web applications often needing to perform lengthy startup tasks, such as generating and caching content, such delays could sometimes reach a minute or more.

Such delays are now a thing of the past with the new Application Initialization feature of IIS 8, which lets you configure IIS to spin up web applications so they are ready to respond to the first request received. Application pools can be prestarted instead of waiting for a first request, and applications are initialized when their worker processes start. Administrators can decide which applications should be preloaded on the server.

In addition, IIS 8 can be configured to return a static "splash page" or other static content while an application is being initialized, so the user feels that the website being accessed is responding instead of failing to respond. This functionality can be combined with the URL Rewrite module to create more complex types of pregenerated static content.

Application Initialization can be configured at two levels:

■ Machine-wide, in the ApplicationHost.config file for the server
■ Per application, in the Web.config file for the application

The Application Initialization role service of the Web Server role must also be added to the server to use this feature.
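The splash-page behavior described above amounts to a simple gate in front of the application: while initialization is still running, requests receive static content; once warm-up completes, they reach the application. This is a conceptual Python sketch of that idea, not IIS code, and the handler and page content are hypothetical:

```python
# Conceptual model: serve a static splash page while the application warms up,
# so first visitors see an immediate response instead of a long stall.

def handle_request(path, app_initialized, splash_html="<h1>Starting up...</h1>"):
    """Return a (status, body) pair for an incoming request."""
    if not app_initialized:
        return (200, splash_html)               # static content during warm-up
    return (200, "app response for " + path)    # normal processing afterward

print(handle_request("/", app_initialized=False))  # (200, '<h1>Starting up...</h1>')
print(handle_request("/", app_initialized=True))   # (200, 'app response for /')
```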
For more information on configuring Application Initialization, see the section "Generating Windows PowerShell scripts using IIS Configuration Editor," later in this chapter.

Identifying "fake" requests used by Application Initialization

The Application Initialization feature introduces the concept of a warm-up period to IIS. When this feature is configured, the set of URLs specified by the application developer will be sent a "fake" request as part of warming up the application. Once all the fake requests return, the application is considered initialized, and the warm-up period ends.

Depending on your application, you may decide to handle these fake requests differently than normal requests coming from the wire. If you choose to do this, using the URL Rewrite module allows you to look at the request headers and identify the fake requests.

Identifying fake requests is easy if you know what to look for. A fake request sent to a URL as part of application-level initialization has the following properties:

■ User Agent = IIS Application Initialization Warm-up
■ Server Variables = the WARMUP_REQUEST server variable is set

In addition to application-level initialization, the Application Initialization feature also allows server administrators to "preload" important applications so that they will be initialized as soon as the worker process starts. Preload is also done using a fake request to the root of the application. The preload fake request has the following properties:

■ User Agent = IIS Application Initialization Preload
■ Server Variables = the PRELOAD_REQUEST server variable is set

You may also want to perform special handling for normal requests that are received during the warm-up period. All normal requests received during warm-up have the APP_WARMING_UP server variable set, which you can use to identify these requests and handle them as desired.

Shaun Eagan, Senior Program Manager
Stefan Schackow, Principal Program Manager
Jeong Hwan Kim, Software Design Engineer in Test II
Ahmed ElSayed, Software Design Engineer in Test
Microsoft Internet Information Services Team

Learn more

For instructions on how to configure Application Initialization in IIS 8, see the article titled "IIS 8.0 Application Initialization" on IIS.NET. See also the article titled "(Re)introducing Application Initialization" in Wade Hilmo's blog on IIS.NET.

Dynamic IP Address Restrictions

When a web server receives unwanted activity from malicious clients, it can prevent legitimate users from accessing websites hosted by the server. One way of dealing with such situations in previous versions of IIS was to use static IP filtering to block requests from specific clients.
Static filtering had two limitations, however:

■ It required that you discover the IP address of the offending client and then manually configure IIS to block that address.
■ There was no choice as to what action IIS would take when it blocked the client: an HTTP 403.6 status message was always returned to the offending client.

In IIS 8, however, blocking malicious IP addresses is much simpler. Dynamic IP Address Restrictions now provides three kinds of filtering to deal with undesirable request traffic:
■ Dynamic IP address filtering lets you configure your server to block access for any IP address that exceeds a specified number of concurrent requests or exceeds a specified number of requests within a given period of time.
■ You can now configure how IIS responds when it blocks an IP address; for example, by aborting the request instead of returning HTTP 403.6 responses to the client.
■ IP addresses can be blocked not only by client address, but also by addresses received in the X-Forwarded-For HTTP header used in proxy mode.

Configuring dynamic IP address filtering

To configure dynamic IP address filtering for your server, website, or folder path, select the corresponding IP Address And Domain Restrictions node in IIS Manager and click Edit Dynamic Restriction Settings in the Actions pane. This opens the Dynamic IP Restriction Settings dialog box shown in Figure 4-4, which lets you deny IP addresses based on the number of concurrent requests and/or the number of requests received over a specified period of time.

FIGURE 4-4  Configuring dynamic IP address filtering.

Once dynamic IP address filtering has been configured, you can configure how IIS responds to clients whose requests are dynamically filtered. To do this, select the appropriate IP Address And Domain Restrictions node in IIS Manager and click Edit Feature Settings in the Actions pane. Doing this opens the Edit IP And Domain Restriction Settings dialog box shown in Figure 4-5, which lets you specify the type of response and whether to enforce such responses when the incoming request passes through a proxy, such as a firewall or load balancer, that changes the source IP address of the request.
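The request-rate rule in Figure 4-4 can be modeled as a per-IP sliding window: count each address's recent requests and deny once the count within the window exceeds the configured maximum. The following Python sketch illustrates the idea; it is a simplified model, not IIS's actual implementation, and the parameter names are illustrative:

```python
from collections import defaultdict, deque

class DynamicIpFilter:
    """Deny an IP that sends more than max_requests within window_seconds."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)   # ip -> timestamps of recent requests

    def allow(self, ip, now):
        q = self.history[ip]
        while q and now - q[0] > self.window:
            q.popleft()                     # drop requests outside the window
        q.append(now)
        return len(q) <= self.max_requests

f = DynamicIpFilter(max_requests=5, window_seconds=1.0)
results = [f.allow("", t * 0.1) for t in range(6)]
print(results)  # [True, True, True, True, True, False]
```

A real deployment would also need the configurable response behavior the chapter describes (403.6, another status, or aborting the connection) and an expiry for the block itself.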
FIGURE 4-5  Configuring the response behavior for dynamically filtered requests, including when a proxy is encountered along the request path.

Dynamic IP restrictions

Previous versions of IIS have a Static IP Restrictions feature, which allows server administrators to block IP addresses that are exhibiting undesirable behavior. When an HTTP request is made from an IP address that has been blocked, IIS returns an HTTP 403 Access Forbidden status. That being said, Static IP Restrictions are a manual process: server administrators are required to perform forensic analysis of their IIS logs to discover these behavioral patterns and add the offending IP addresses to their list of static IP restrictions.

The goal behind the Dynamic IP Restrictions feature is to dynamically detect two specific forms of potentially malicious behavior and temporarily block HTTP requests from the IP addresses where those requests originated. The two forms of behavior that IIS detects are having too many simultaneous connections from a specific client IP address, and having too many connections from a specific client IP address within a specific period of time.

In IIS 8, server administrators can configure the behavior that IIS will use when it blocks HTTP requests for both the Static IP Restrictions and Dynamic IP Restrictions features; this is an important change from the behavior in previous versions of IIS, which always returned an HTTP 403 Access Forbidden status message. Server administrators can now configure IIS 8 to return HTTP 401 Access Denied, HTTP 403 Access Forbidden, HTTP 404 Not Found, or abort the request entirely.
For each of these HTTP statuses, IIS will mark the requests with a substatus code that signifies why the request was blocked. IIS can also be configured to simply log the behavior, in which case the requests will succeed or fail based on the nature of the HTTP request, but IIS will still mark these requests with a substatus code that indicates the request would have been blocked. These substatus codes make it easier for server administrators to forensically examine their IIS activity logs to identify potentially malicious activity from specific IP addresses and then add those IP addresses to the list of denied static IP addresses.
The following table lists the substatuses that IIS 8 adds:

Dynamic IP Restrictions
  501  Deny by concurrent requests limit
  502  Deny by requests over time limit

Static IP Restrictions
  503  Deny by IP address match
  504  Deny by hostname match

For example, if you configured IIS to return an HTTP 404 Not Found status for the Dynamic IP Restrictions feature and IIS blocks an HTTP request because of too many concurrent connections, IIS will write an HTTP 404.501 status message in the IIS activity logs. Alternatively, if you configured the Dynamic IP Restrictions feature to only log the activity, IIS would write an HTTP 200.501 status in the IIS activity logs.

When a server that is running IIS is located behind a firewall or load-balancing server, the client IP addresses for all the HTTP requests may appear to be from the firewall or load-balancing server. Because of this scenario, the IP restrictions features in IIS 8 can be configured to operate in proxy mode. In this mode, IIS will examine the values in the X-Forwarded-For HTTP header and determine the client IP from the list of IP addresses through which the HTTP request was forwarded. By way of explanation, the X-Forwarded-For HTTP header is an accepted standard within the Internet community, whereby each server in the chain between an Internet client and server appends its IP address to the end of the header, separated by a comma. For example, if an HTTP request from an Internet client must travel through two firewall servers to reach the server, there should be three IP addresses in the X-Forwarded-For header: the client's IP address, followed by the two IP addresses of the firewall servers, as illustrated in the following example HTTP request:

GET / HTTP/1.1
Accept: */*

When IIS examines the X-Forwarded-For HTTP header in an HTTP request like the preceding example, IIS will block the originating client's IP address ( instead of the IP address of the firewall server (

Robert McMurray, Program Manager
Jenny Lawrance, Software Design Engineer
Wade Hilmo, Principal Development Lead
Ahmed ElSayed, Software Design Engineer in Test
Microsoft Internet Information Services Team
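The X-Forwarded-For handling described above boils down to taking the left-most address in the comma-separated list as the originating client. A minimal Python sketch of that parsing step (a simplified model; production code would also validate the intermediate hops, since the header can be spoofed by untrusted proxies):

```python
# Sketch: extract the originating client from an X-Forwarded-For header.
# Each proxy appends its own address, so the left-most entry is the client.

def originating_client(xff_header):
    """Return the first (left-most) address in an X-Forwarded-For value."""
    return xff_header.split(",")[0].strip()

header = ",,"
print(originating_client(header))  #
```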
Learn more

For more information on Dynamic IP Address Restrictions in IIS 8, see the topic "IP Address and Domain Restrictions" in the TechNet library. For instructions on how to configure Dynamic IP Address Restrictions in IIS 8, see the article titled "IIS 8.0 Dynamic IP Address Restrictions" on IIS.NET.

FTP Logon Attempt Restrictions

Brute-force attacks can create a Denial-of-Service (DoS) condition that can prevent legitimate users from accessing an FTP server. To prevent this from happening, IIS 8 includes a new feature called FTP Logon Attempt Restrictions that lets you block offending users from logging on to an IIS FTP server for a specified period of time. Unlike the Dynamic IP Address Restrictions described in the previous section, which blacklist any client whose IP address violates the configured dynamic IP address filtering settings, FTP Logon Attempt Restrictions uses a "graylisting" approach that denies only the offending user for a certain period of time. However, by configuring this time period to be slightly longer than that specified by your domain account lockout policy, you can prevent malicious users from locking legitimate users out of your FTP server.

Understanding FTP logon attempt restrictions

Running an FTP service on an Internet-facing server has unfortunately yielded an additional surface area for attack for server administrators to manage.
Because hackers can connect to an FTP service with a wide array of publicly available or special-purpose FTP clients, an FTP server offers a way for hackers to continuously send requests that guess a username/password combination and gain access to an account on a server.

This situation has required server administrators to implement additional security measures to counter this behavior; for example, server administrators should always disable or rename well-known accounts like the Administrator or Guest accounts. Administrators should also implement policies that enforce strong passwords, password expiration, and password lockouts. An unfortunate downside to password lockouts is that a valid account can be locked out by a hacker who is attempting to gain access to the account; this may require the server administrator to re-enable accounts that have been locked out as a result of good password management practices.

From an FTP 7 perspective, there are additional measures that server administrators can implement; for example, administrators can deny well-known accounts at the global level for their FTP server. In addition, administrators can use one of the alternate built-in authentication providers instead of FTP's Basic Authentication provider. For example, you can use the ASP.NET Membership Authentication provider;
by using this provider, if an account is successfully hacked, that account will have no access to the actual server because it exists only in the ASP.NET Membership database.

In FTP 8, an extra layer of security was added, called FTP Logon Attempt Restrictions; this feature provides an additional password lockout policy that is specific to the FTP service. Server administrators can use this feature of the FTP server to configure the maximum number of logon attempts that are allowed within a specific time period; once the number of logon attempts has been reached, the FTP service will disconnect the FTP session, and it will block the IP address of the client from connecting until the time period has passed.

Server administrators can configure the FTP Logon Attempt Restrictions feature in combination with their password lockout policies to create a secure environment for their network, which allows uninterrupted functionality for valid users. For example, if you configured your FTP 8 server for a maximum of four failed logon attempts, you could configure your password lockout policy for a maximum of five failed logon attempts. In this way, a malicious FTP client would be blocked once it reached four failed logon attempts, and yet the valid user would still be able to access the account if he or she attempted to log on during the time period when the attacker was blocked.

Robert McMurray, Program Manager
Eok Kim, Software Design Engineer
Aniello Scotto Di Marco, Software Design Engineer in Test
Microsoft Internet Information Services Team

Configuring FTP Logon Attempt Restrictions

To configure FTP Logon Attempt Restrictions for FTP sites on your server, select the FTP Logon Attempt Restrictions node for your server in IIS Manager and click the Open Feature item in the Actions pane.
This displays the settings shown in Figure 4-6, which let you enable the feature and specify a maximum number of failed logon attempts within a given amount of time. Alternatively, you can enable this feature in logging-only mode to collect data concerning possible brute-force password attacks being conducted against your server.

Library Cards and FTP Servers

It's true. I have a library card. I know there are a million other ways to get information: that Internet thingy, book downloads, having Amazon deliver stuff in boxes to my door.

But the library is pretty reliable and generally easy to use, even if it's not cutting-edge. The main drawback is I have to GO there, on THEIR hours.

Think of FTP like a library.
Sure, maybe it's not the most exciting protocol in the world. But FTP has been around since the 1970s, and it is still found in a large number of environments simply because it is reliable and generally easy to use.

However, like the library, FTP has some drawbacks. Like Simple Mail Transfer Protocol (SMTP) and other older protocols, FTP was never designed to be a highly secure protocol. In its default configuration, FTP users authenticate using a username/password combination that is typically sent in clear text. The server can be set up to allow users to connect anonymously as well.

This has often made FTP servers the target of brute-force attacks, where attackers simply try different user name and password combinations over and over until they find a valid combination. To mitigate this, there are several things you might do:

■ Block the "bad guy's" IP address. This generally involves combing through your FTP log files to figure out the bad guy's addresses, which can be very time-consuming and, frankly, a little boring.
■ Create password lockout policies for user accounts. This is less of a manual process to institute, but it creates a different problem. If the bad guy manages to find a valid user name, after a few failed attempts at authentication, the password policy will lock the user account, which then means you have to spend time unlocking user accounts.

Enter Windows Server 2012 and FTP Logon Attempt Restrictions. This feature takes the best of both of these capabilities and combines them into one. The idea is this: You define the maximum number of failed logon attempts that you want to allow, and the time frame within which those attempts can take place. If the user fails to log on correctly during that time frame, you can either tell the FTP server to write an entry to the log file or you can have the FTP server automatically deny access from the requesting IP address.
If you choose to deny access, the FTP server will drop the connection, and the IP address of the bad guy will be blocked.

Two "gotchas" to keep in mind when configuring this feature:

■■ Writing the entry to the log file does not block further logon attempts. It does exactly what it says: it simply writes an entry to the log file.

■■ The FTP Logon Attempt Restriction setting is defined for the server itself. It cannot be defined on a per-site basis.

So, using FTP Logon Attempt Restrictions allows you to add a layer of security to your humble, yet functional, FTP service.

David Branscome
Senior Premier Field Engineer
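Because the setting is server-wide, it is stored in ApplicationHost.config rather than in any site's web.config. The following is a hedged sketch of what the server-level element looks like; the element and attribute names (logonAttemptRestrictions, maxFailure, entryExpiration, loggingOnlyMode) are based on the IIS 8 configuration reference and should be verified against your own server's schema before use:

```xml
<system.ftpServer>
  <security>
    <!-- Sketch: deny further attempts from an IP after 4 failures
         within 30 seconds. Set loggingOnlyMode="true" to audit
         attempts without blocking anyone. Verify names against
         the IIS 8 schema. -->
    <logonAttemptRestrictions enabled="true"
                              maxFailure="4"
                              entryExpiration="00:00:30"
                              loggingOnlyMode="false" />
  </security>
</system.ftpServer>
```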
FIGURE 4-6  Configuring FTP Logon Attempt Restrictions.

Learn more

For instructions on how to configure FTP Logon Attempt Restrictions in IIS 8, see the article titled "IIS 8.0 FTP Logon Attempt Restrictions" on IIS.NET. Also see the article titled "FTP Logon Restrictions in IIS 8" in Robert McMurray's blog on IIS.NET.

Generating Windows PowerShell scripts using IIS Configuration Editor

Although IIS Manager lets you configure many aspects of IIS, there are a number of configuration settings that are not exposed in the user interface. To configure these settings, you need to drill down and edit configuration files like ApplicationHost.config, the root configuration file that includes detailed definitions of all sites, applications, virtual directories, and application pools on the server, as well as global defaults for all web server settings. These configuration files are schematized XML files, and you can either edit them in Notepad (yikes!) or use the Configuration Editor, one of the management features in IIS Manager.
New in IIS 8 is the capability of using the Configuration Editor to generate a Windows PowerShell script for any configuration changes that you make to your server using the Configuration Editor. This capability can be particularly useful for cloud hosting providers who need to automate the configuration of large numbers of web servers, because you can use such a generated script as a template for creating a finished script that performs the task you need to automate.

Let's see how this works. The section "Application Initialization," earlier in this chapter, discussed how you can globally configure application pools on your server so that web applications on the server are initialized before the first request comes in to access them. To enable Application Initialization globally like this, you can edit the ApplicationHost.config file so that the following line in the <applicationPools> section:

<add name=".NET v4.5" managedRuntimeVersion="v4.0" />

changes to this:

<add name=".NET v4.5" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" />

To do this using IIS Manager, open the Configuration Editor and select applicationPools in the system.applicationHost/applicationPools section as shown here:
Then you expand applicationPoolDefaults and change startMode from OnDemand to AlwaysRunning:
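For reference, the same default can be set directly from Windows PowerShell with the WebAdministration module. This is a hedged sketch of an equivalent command, not the Configuration Editor's exact output; the filter path and property name match the ApplicationHost.config section discussed above, but compare them against your own generated script:

```powershell
Import-Module WebAdministration

# Make application pools start automatically rather than on demand
# (the same applicationPoolDefaults/startMode change made in the UI above).
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.applicationHost/applicationPools/applicationPoolDefaults' `
    -Name 'startMode' -Value 'AlwaysRunning'
```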
Once you've applied this change, you can click the Generate Script item in the Actions pane. Doing this opens the Script Dialog dialog box, and on the PowerShell tab is a Windows PowerShell script that you can customize to automate this configuration change on other servers in your farm.

Note that configuration of Application Initialization requires some additional steps. For more information, see the article titled "IIS 8.0 Application Initialization" on IIS.NET.

Learn more

For more information on generating Windows PowerShell scripts using IIS Configuration Editor, see the article titled "PowerShell script generation in IIS Configuration Editor" in Won Yoo's blog on IIS.NET.

Support for open standards

Support for open industry standards is important in a heterogeneous world. Platforms need to interoperate seamlessly so that companies can focus on doing business instead of solving technical problems. Hybrid solutions are becoming the norm, and web hosting platforms need to support a wide variety of different development paradigms and communication protocols so that innovation can continue to drive business forward.

IIS 8 in Windows Server 2012 includes support for all the latest web standards and protocols, such as the WebSocket protocol, HTML 5, and Asynchronous JavaScript and XML (AJAX), as well as for both ASP.NET 3.5 and ASP.NET 4.5. Together with Windows Internet Explorer 10 on the client running Windows Server 2012, and with the next version of the Microsoft Visual Studio development platform, organizations will have everything they need to build tomorrow's web.
WebSocket

Interactive web applications developed using HTML 5 and AJAX need secure, real-time, bidirectional communications between the web browser client and the web server. Support for WebSocket in IIS 8 brings just that. And although WebSocket is designed to be implemented in web browsers and web servers, it can be used by any client or server application.

How WebSocket works

WebSocket is a stable, open, industry-standard protocol, defined by the Internet Engineering Task Force (IETF) in RFC 6455, that lets web servers push messages to the client instead of just letting the client pull messages from the server. It works by establishing a bidirectional, full-duplex Transmission Control Protocol (TCP) socket that is initiated by HTTP, which makes it easy to tunnel through proxies and firewalls. It also works well with Layer 4 TCP load balancers. The protocol has low latency and low bandwidth overhead, and it uses SSL for secure communications. For further details concerning how WebSocket communications are established, see the sidebar entitled "WebSocket handshake."

WebSocket handshake

To establish a WebSocket connection, the client and server perform a "handshake" in which they agree that they both understand the same version of WebSocket and that the requested server resource supports WebSocket. The following client request and server response make up the handshake performed to establish a WebSocket connection.

The following is a sample WebSocket request from a client:

GET /sampleapp HTTP/1.1
Host: contoso.com
Upgrade: websocket
Connection: Upgrade
Origin: http://contoso.com
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

IIS handling of a WebSocket request

When the server evaluates this request, it notices that the Upgrade header is requesting that the connection be upgraded to a WebSocket connection.
The server responds with an HTTP 101 response indicating that the protocol is being changed to use WebSocket.

IIS implements a native WebSocket module on the IIS request pipeline architecture, which applications can use to communicate over WebSocket. The IIS WebSocket module listens on the RQ_SEND_RESPONSE notification of the request pipeline.
On the send response notification (before the response is returned to the client), if the HTTP status code of the response is 101, IIS calls into Websocket.dll, the Win32 library in Windows Server 2012 that implements WebSocket framing. The WebSocket DLL then computes a value for the Sec-WebSocket-Accept header based on the value of the Sec-WebSocket-Key from the request header.

These values are then set into the response headers. On the send call to HTTP, IIS also sets the HTTP_SEND_RESPONSE_FLAG_OPAQUE flag, which indicates that HTTP should go into opaque mode. This flag tells HTTP that the request and response from that point on will not be HTTP-compliant and that all subsequent bytes should be treated as an entity body, and it appends the Sec-WebSocket-Accept header to the response.

The following is a sample response from a server:

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

The server response indicates to the client that it is switching to WebSocket and returns the result of the operation that it performed on the Sec-WebSocket-Key in the Sec-WebSocket-Accept header. The client uses this to confirm that the server properly understands WebSocket. This concludes the handshake.

Using the WebSocket connection

If the handshake is successful, applications can get a pointer to the IWebSocketContext interface from the IHttpContext of the request. The IWebSocketContext interface is stored in the Named Context containers of IHttpContext. Applications can query the named context container with the query key "websockets" to get a pointer to this interface.

Applications can then do WebSocket I/O through the application programming interfaces (APIs) exposed by this interface.
WriteFragment, ReadFragment, SendConnectionClose, GetCloseStatus, and CloseTcpConnection are the APIs implemented for Windows Server 2012.

Shaun Eagan, Senior Program Manager
Jenny Lawrance, Software Design Engineer II
Wade Hilmo, Principal Development Lead
Aspaan Kamboj, Software Design Engineer in Test
Pandian Ramakrishnan, Software Design Engineer in Test
Microsoft Internet Information Services Team
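The Sec-WebSocket-Accept value in the sample response can be reproduced from the client's Sec-WebSocket-Key: RFC 6455 specifies appending a fixed GUID, hashing with SHA-1, and Base64-encoding the result. Here is a short illustrative sketch in Python (the chapter's own scripting examples use Windows PowerShell, but the calculation is the same in any language):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WEBSOCKET_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header value for a given key."""
    digest = hashlib.sha1(
        (sec_websocket_key + WEBSOCKET_GUID).encode("ascii")
    ).digest()
    return base64.b64encode(digest).decode("ascii")

# The sample key from the handshake above yields the sample accept value.
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This is exactly the check the client performs when it validates the server's 101 response.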
Learn more

For more information on WebSocket, see the following resources:

■■ The article "WebSockets in ASP.NET" in the TechNet Wiki
■■ The article "WebSockets" in the Internet Explorer Developer Center on MSDN
■■ RFC 6455 on the RFC Editor site

Support for HTML 5

HTML 5 is an open, industry-standard markup language being developed by the World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG). At present, it consists of more than 100 different specifications that define the next generation of web application technologies. The name "HTML 5" can be thought of as a kind of umbrella term for a collection of different HTML, Cascading Style Sheets (CSS), and JavaScript specifications that allow developers to create rich, interactive web applications using asynchronous script execution, drag-and-drop APIs, sandboxing, channel messaging, and other advanced capabilities.

IIS 8 in Windows Server 2012 includes built-in support for the latest HTML 5 standards. Together with Internet Explorer 10 running on Windows Server 2012 and with the upcoming release of Visual Studio 11, businesses will have all the tools and platforms needed to build the modern, interactive web.

Learn more

For more information on HTML 5 support in upcoming Microsoft products, see the following resources:

■■ The article "Building Apps with HTML5: What You Need to Know" in MSDN Magazine
■■ The article "HTML5" in the Internet Explorer Developer Center on MSDN
■■ HTML5Labs, where Microsoft prototypes early and unstable specifications from web standards bodies such as W3C
■■ Visual Studio 2012, which is available for download from Microsoft
Up next

The next and final chapter describes how Windows Server 2012 helps enable the modern work environment by providing secure access virtually anywhere, from any device, with the full Windows experience.
CHAPTER 5

Enabling the modern workstyle

■ Access virtually anywhere, from any device
■ Full Windows experience
■ Enhanced security and compliance
■ Conclusion

The final chapter of this book deals with how Windows Server 2012 can enhance the modern workplace. Today's business users want things simple. They want to be able to access their desktop, applications, and data virtually anywhere, from any device, and have the full Windows experience. And from an IT perspective, this must be done securely and in ways that can ensure compliance at all times. New features and enhancements in Windows Server 2012 make this possible.

Access virtually anywhere, from any device

If you are an office worker in today's accelerated business world, you need to be able to access your applications and data from any device: your personal computer, mobile computer, tablet computer, or other mobile device. And if you are an IT person involved in supporting such an environment, you want to be able to implement such capabilities easily and without hassles or additional costs.

Improvements in several Windows Server 2012 features now make it simple to deploy, configure, and maintain an IT infrastructure that can meet the needs of the modern workstyle. Remote access is now an integrated solution that you can use to deploy DirectAccess and traditional virtual private network (VPN) solutions quickly. Enhancements to Remote Desktop Services make it easier than ever to deploy both session-based desktops and virtual desktops and to manage your RemoteApp programs centrally. User-Device Affinity makes it possible for you to map roaming users to specific computers and devices. BranchCache has been enhanced to improve performance and make better use of expensive wide area network (WAN) bandwidth. And Branch Office Direct Printing enables branch office users to get their print jobs done faster while putting less strain on the WAN.
Unified remote access

Today's enterprises face an increasingly porous perimeter for their IT infrastructures. With a larger portion of their workforce being mobile and needing access to corporate data, enterprises are presented with new security challenges to address. Cloud computing promises to help resolve some of these issues, but the reality is that most organizations will deploy a hybrid cloud model that combines traditional datacenter computing with hosted cloud services.

Providing remote access to corporate network resources in a secure, efficient, and cost-effective way is essential for today's businesses. The previous version of Windows Server supported a number of different options for implementing remote access, including:

■■ Point-to-Point Tunneling Protocol (PPTP) VPN connections
■■ Layer 2 Tunneling Protocol over IPsec (L2TP/IPsec) VPN connections
■■ Secure Sockets Layer (SSL)–encrypted Hypertext Transfer Protocol (HTTP) VPN connections using the Secure Socket Tunneling Protocol (SSTP)
■■ VPN Reconnect, which uses Internet Protocol Security (IPsec) Tunnel Mode with Internet Key Exchange version 2 (IKEv2)
■■ DirectAccess, which uses a combination of Public Key Infrastructure (PKI), IPsec, SSL, and Internet Protocol version 6 (IPv6)

Implementing remote access could be complex in the previous version of Windows Server because different tools were often needed to deploy and manage these different solutions. For example, the Routing and Remote Access Service (RRAS) component was used for implementing VPN solutions, whereas DirectAccess was configured separately using other tools.

Beginning with Windows Server 2012, however, the process of deploying a remote access solution has been greatly simplified by integrating both DirectAccess and VPN functionality into a single Remote Access server role.
In addition, functionality for managing remote access solutions based on both DirectAccess and VPN has now been unified and integrated into the new Server Manager. The result is that Windows Server 2012 provides you with an integrated remote access solution that is easy to deploy and manage. Note that some advanced RRAS features, such as routing, are configured using the legacy Routing and Remote Access console.

Simplified DirectAccess

If remote client devices can be always connected, users can work more productively. Devices that are always connected are also more easily managed, which helps improve compliance and reduce support costs. DirectAccess, first introduced in Windows Server 2008 R2 and supported by client devices running Windows 7, helps address these needs by giving users the experience of being seamlessly connected to their corporate network whenever they have Internet access. DirectAccess allows users to access corpnet resources such as shared folders, websites, and applications remotely, in a secure manner, without needing to first establish a VPN connection. It does this by automatically establishing bidirectional connectivity between the user's device and the corporate network every time the user's device connects to the Internet.
DirectAccess alleviates the frustration that remote users often experience when using traditional VPNs. For example, connecting to a VPN usually takes several steps, during which the user needs to wait for authentication to occur. And if the corporate network has Network Access Protection (NAP) implemented to check the health of computers before allowing them to connect to the corporate network, establishing a VPN connection could sometimes take several minutes or longer, depending on the remediation required and on how long ago the user last established a VPN connection. VPN connections can also be problematic in environments that filter out VPN traffic, and Internet performance can be slow for the user if both intranet and Internet traffic are routed through the VPN connection. Finally, any time users lose their Internet connection, they have to reestablish the VPN connection from scratch.

DirectAccess solves all these problems. For example, unlike a traditional VPN connection, DirectAccess connectivity is established even before users log on, so they never have to think about connecting to resources on the corporate network or waiting for a health check to complete. DirectAccess can also separate intranet traffic from Internet traffic to reduce unnecessary traffic on the corporate network. Because communications to the Internet do not have to travel to the corporate network and back out to the Internet, as they typically do with a traditional VPN connection, DirectAccess does not slow down Internet access for users. Finally, DirectAccess allows administrators to manage remote computers outside the office even when the computers are not connected via a VPN.
This also means that remote computers are always fully managed by Group Policy, which helps ensure that they are secure at all times.

In Windows Server 2008 R2, implementing DirectAccess was a fairly complex task that required performing a large number of steps, including some command-line tasks that needed to be performed both on the server and on the clients. With Windows Server 2012, however, deploying and configuring DirectAccess servers and clients is greatly simplified. In addition, DirectAccess and traditional VPN remote access can coexist on the same server, making it possible to deploy hybrid remote access solutions that meet any business need. Finally, the Remote Access role can be installed and configured on a Server Core installation.

DirectAccess: Making "easy" easier

DirectAccess with Windows 7 and Windows Server 2008 R2 was a tremendous improvement in remote access technologies. In my role, I work remotely almost 100 percent of the time, either at a customer site or from home, so my laptop is rarely physically connected to Microsoft's internal network.

However, I often need to access internal resources for my work. Now, I could connect over the Microsoft VPN, which in my case requires plugging in a smart-card reader, inserting the smart card, and entering a PIN. Certainly not a terrible experience, but we all prefer "EASY."

DirectAccess is easy. If I have Internet connectivity, the odds are pretty good that I have DirectAccess connectivity. I say "pretty good" because, like many technologies, there
are times when something prevents it from working. The question is, "What is that something?" Troubleshooting DirectAccess connectivity can be difficult in Windows 7.

With Windows 8, the client experience is much better. The properties of your DirectAccess connection are easily accessible through the network user interface. This interface shows you your current DirectAccess status and offers remediation options if you are not currently connected. Additionally, in scenarios where there may be multiple network entry points for DirectAccess users, the interface displays the current site you are connected to and allows you to connect to a different site entry point if necessary.

If all else fails, though, the properties page also allows the client to collect DirectAccess logs (stored in a very readable HTML format) and email them to your support staff to assist in the troubleshooting process.

Of course, it wouldn't qualify as a "cool technology" unless you could shut it off and prevent people from using it! So naturally, the support staff email address, the ability for users to switch to a different entry point, and even the ability to disconnect from DirectAccess temporarily can all be controlled through a Group Policy Object (GPO).

DirectAccess deployment scenarios

When deploying DirectAccess on Windows Server 2012, keep in mind that there are two types of deployment scenarios: Express Setup and Advanced Configuration.
At a high level, the differences between the two are as follows:

Express Setup:
■■ PKI is optional
■■ Uses a single IPsec tunnel configuration
■■ Requires Windows 8 clients

Advanced Configuration:
■■ PKI and a certification authority (CA) are required
■■ Uses a double IPsec tunnel configuration
■■ Can use single-factor, dual-factor, and certificate authentication
■■ Supports clients running both Windows 8 and Windows 7
■■ Required when designing a multisite configuration

David Branscome
Senior Premier Field Engineer

DirectAccess enhancements

Besides simplified deployment and unified management, DirectAccess has been enhanced in other ways in Windows Server 2012. For example:

■■ You can implement DirectAccess on a server that has only one network adapter. If you do this, IP-HTTPS will be used for client connections because it enables
DirectAccess clients to connect to internal IPv4 resources when other IPv4 transition technologies such as Teredo cannot be used. IP-HTTPS is implemented in Windows Server 2012 using NULL encryption, which removes redundant SSL encryption during client communications to improve performance.

■■ You can run a DirectAccess server behind an edge device such as a firewall or network address translation (NAT) router, which eliminates the need to have dedicated public IPv4 addresses for DirectAccess. Note that deploying DirectAccess in an edge configuration still requires two network adapters, one connected directly to the Internet and the other to your internal network. Note also that the NAT device must be configured to allow traffic to and from the Remote Access server.

■■ DirectAccess clients and servers no longer need to belong to the same domain but can belong to any domains that trust each other.

■■ In Windows Server 2008 R2, clients had to be connected to the corporate network in order to join a domain or receive domain settings. With Windows Server 2012, however, clients can join a domain and receive domain settings remotely from the Internet.

■■ In Windows Server 2008 R2, DirectAccess always required establishing two IPsec connections between the client and the server; in Windows Server 2012, only one IPsec connection is required.

■■ In Windows Server 2008 R2, DirectAccess supported both IPsec authentication and two-factor authentication using smart cards; Windows Server 2012 adds support for two-factor authentication using a one-time password (OTP) in order to provide interoperability with OTP solutions from third-party vendors. In addition, DirectAccess can now use the Trusted Platform Module (TPM)–based virtual smart card capabilities available in Windows Server 2012, whereby the TPM of a client functions as a virtual smart card for two-factor authentication.
This new approach eliminates the overhead and costs incurred by smart card deployment.

Deploying remote access

To see unified remote access at work, let's walk through the initial steps of deploying a DirectAccess solution. Although we've used the UI for performing the steps described below, you can also use Windows PowerShell. You can also deploy the Remote Access role on a Server Core installation of Windows Server 2012.

After making sure that all the requirements have been met for deploying a DirectAccess solution (for example, by making sure your server is domain-joined and has at least one network adapter), you can start the Add Roles And Features Wizard from Server Manager. Then, on the Select Installation Type page, begin by selecting the Role-based Or Feature-based Installation option, as shown here:
After choosing the server(s) on which you want to install remote access functionality, select the Remote Access role on the Select Server Roles page:
On the Select Role Services page, select the DirectAccess And VPN (RAS) option, as shown here:

Continue through the wizard to install the Remote Access server role. Once this is finished, click the Open The Getting Started Wizard link on the Installation Progress page shown here to begin configuring remote access:
Windows Server 2012 presents you with three options for configuring remote access:

■■ Deploying both DirectAccess and VPN server functionality, so that DirectAccess can be used by clients running Windows 7 or later while the VPN server allows clients that don't support DirectAccess to connect to your corporate network via VPN

■■ Deploying only DirectAccess, which you might choose if all your clients are running Windows 7 or later

■■ Deploying only a VPN server, which you might use if you've invested heavily in third-party VPN client software and want to continue using those investments

Let's choose the recommended option by selecting the Deploy Both DirectAccess And VPN option.

On the Remote Access Server Setup page of the Configure Remote Access wizard, you now choose the network topology that best describes where your DirectAccess server is located. The three options available are:

■■ Edge, which requires that the server have two network interfaces, one connected to the public Internet and one to the internal network

■■ Behind An Edge Device (With Two Network Adapters), which again requires that the server have two network interfaces, with the DirectAccess server located behind a NAT device

■■ Behind An Edge Device (Single Network Adapter), which requires only that the server (located behind a NAT device) have one network interface connected to the internal network
Because the server used in this walkthrough has only one network adapter and is located behind a NAT device, we'll choose the third option listed here. We'll also specify the Domain Name System (DNS) name to which the DirectAccess clients will connect.

Note that if the server has two network interfaces, with one connected to the Internet, the Configure Remote Access wizard will detect this and configure the two interfaces as needed.

When you finish running the Configure Remote Access wizard, you are presented with a web-based report of the configuration changes that the wizard will make before you apply them to your environment. For example, performing the steps previously described in this walkthrough will result in the following changes:

■■ A new GPO called DirectAccess Server Settings will be created for your DirectAccess server.
■■ A new GPO called DirectAccess Client Settings will be created for your DirectAccess clients.
■■ DirectAccess settings will be applied to all mobile computers in the CONTOSO\Domain Computers security group.
■■ A default web probe will be created to verify internal network connectivity.
■■ A connection name called Workplace Connection will be created on DirectAccess clients.
■■ The remote access server will have DirectAccess configured to use the specified public name to which remote clients connect.
■■ The network adapter connected to the Internet (via the NAT device) will be identified by name.
■■ Configuration settings for your VPN server will also be summarized; for example, how VPN client address assignment will occur (via a DHCP server) and how VPN clients will be authenticated (using Windows authentication).

■■ The certificate used to authenticate the network location server deployed on the Remote Access server is identified.

Configuring and managing remote access

Deploying the Remote Access server role also installs some tools for configuring and managing remote access in your environment. These tools include:

■■ The Remote Access Management Console (see Figure 5-1), which can be started from Server Manager
■■ The Remote Access module for Windows PowerShell

FIGURE 5-1  The Remote Access Management Console is integrated into Server Manager.
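As a rough illustration of the Windows PowerShell route mentioned above, installing the role and kicking off a basic DirectAccess configuration might look like the following. This is a hedged sketch: the cmdlets (Install-WindowsFeature and the RemoteAccess module's Install-RemoteAccess) ship with Windows Server 2012, but the parameter values shown, such as the connect-to address, are placeholders you would replace with your own:

```powershell
# Install the Remote Access role along with its management tools.
Install-WindowsFeature RemoteAccess -IncludeManagementTools

# Configure DirectAccess and VPN together (full install).
# 'da.contoso.com' is a placeholder public name for your Remote Access server.
Install-RemoteAccess -DAInstallType FullInstall -ConnectToAddress 'da.contoso.com'
```

Afterward, cmdlets in the RemoteAccess module (for example, Get-RemoteAccess) can be used to review the resulting configuration.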
In addition to allowing you to monitor the operational status of your remote access servers and clients, the Remote Access Management Console enables you to perform additional configuration of your remote access environment (see Figure 5-2).

FIGURE 5-2  Using the Remote Access Management Console to perform additional configuration of a remote access environment.

The Configuration page of the Remote Access Management Console lets you perform additional configuration if needed (or initial configuration if desired) in four areas:

■■ Step 1: Remote Clients  Lets you select between two DirectAccess scenarios:

■■ Deploying full DirectAccess for client access and remote management, so that remote users can access resources on the internal network and their computers can be managed by policy

■■ Deploying DirectAccess for remote management only, so that the computers of remote users can be managed by policy but the users cannot access resources on the internal network
You can also select which group or groups of computers will be enabled for DirectAccess (by default, the Domain Computers group), choose whether to enable DirectAccess for mobile computers only (enabled by default), and choose whether to use force tunneling so that DirectAccess clients connect to both the internal network and the Internet via the Remote Access server (disabled by default).

■■ Step 2: Remote Access Server  Lets you configure the network topology of the Remote Access server (but only if not previously configured), the public name or IPv4 address used by clients to connect to the server, which network adapter is for the internal network, which certificate to use to authenticate IP-HTTPS connections, how user authentication is performed, whether to enable clients running Windows 7 to connect via DirectAccess, and how your VPN server assigns IP addresses and performs authentication

■■ Step 3: Infrastructure Servers  Lets you configure the name of your network location server for DirectAccess clients, DNS settings for remote access, and other settings

■■ Step 4: Application Servers  Lets you specify whether to extend IPsec authentication and encryption to selected application servers on your internal network

DirectAccess advantages over traditional VPNs

DirectAccess was originally released with Windows 7 and Windows Server 2008 R2. Many people think of it as another VPN solution. However, it is more than just a VPN. A traditional VPN is initiated by the user after he or she logs on to the computer. DirectAccess creates the connection to the corporate network in the operating system before the user even sees the logon screen. A DirectAccess connection is a virtual extension of the corporate network.
No matter where the computer physically resides, so long as it has Internet access, it is a part of the corporate network, and the user has access to available corporate resources.

The fact that a computer is now always a part of the corporate network, even when it is on the Internet, provides advantages to a company, especially a company with many people who travel frequently (that is, the road warriors). Without DirectAccess, once a computer leaves the corporate doors, it becomes increasingly difficult to manage. The only time that IT will "see" the computer is when the user VPNs into the corporate network to access needed resources. For many users, the only resource they need is their email or instant messages. Advances such as Remote Procedure Calls (RPCs) over HTTP in Microsoft Exchange Server, Microsoft Outlook, and Microsoft Lync further limit the number of times a user would need to create a connection to corporate. If this is all the user needs, it's possible that he or she might never VPN back into the corporate network.
Access virtually anywhere, from any device  Chapter 5  203

By introducing DirectAccess to the corporate environment, IT now has the advantage of treating remote computers just as it would if the machines were still inside the corporate walls. The computers can be managed, patched, and inventoried just like every other asset inside the corporation. IT is no longer hoping that the user keeps up to date with operating system patches and antivirus signatures; IT can now push these updates to the remote computers using the tools it already uses in-house, and report on the status of all remote computers.

The ability to report on asset status and maintain an accurate asset inventory can also be a financial advantage for a company. Frequently, IT groups that I work with report that when an asset leaves the corporation, they do not know whether they will ever see it again. Machines can move from person to person, and because the computers are used in a fashion that does not require the user to connect back into the corporate network, no accurate inventory can be kept. (One customer of mine reported that at any given time, they could have over 1,000 machines that they could not track down and that were written off as a loss.) DirectAccess gives IT the ability to keep track of these assets because the machines now "stay" on the corporate network.

DirectAccess is one of the first applications that requires IPv6: the client and the internal corporate resource must both be running IPv6 for the client machine to successfully access the resource. Although IPv6 is beginning to be adopted by Internet sites, corporations have been slower to pick up the technology. This is one reason that a company might not consider adopting DirectAccess.

Lack of IPv6 in the corporate network does not need to be a roadblock to implementing DirectAccess. As part of the Forefront family of products, Microsoft has a product called Unified Access Gateway, or UAG. One of the functions of UAG is that it can act as an IPv6-to-IPv4 gateway. This gateway functionality can allow a DirectAccess client to access internal resources that are not yet on IPv6. UAG can be implemented in two ways: the software can be acquired and implemented on hardware at the customer site, as with most Microsoft products, or the product can be acquired as a hardware appliance from a third-party partner.

Ian S. Lindsay
Sr. Account Technology Strategist, Microsoft Mid-Atlantic District
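The wizard steps described above can also be scripted. The sketch below uses the RemoteAccess PowerShell module that ships with the Remote Access role in Windows Server 2012; the public name and security group are placeholders, and the exact parameter set can vary by build, so treat this as an illustrative outline rather than a tested deployment script.

```powershell
# Assumes the Remote Access role and its PowerShell module are installed.
# Deploy DirectAccess, publishing the server under its public name.
Install-RemoteAccess -DAInstallType FullInstall -ConnectToAddress "da.contoso.com"

# Scope DirectAccess to a specific security group (placeholder name).
Add-DAClient -SecurityGroupNameList "CONTOSO\DA-Clients"

# Review the resulting server and client configuration.
Get-RemoteAccess
Get-DAClient
```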
Learn more

For more information about remote access in Windows Server 2012, see the following topics in the TechNet Library:

■■ "Remote Access (DirectAccess, Routing and Remote Access) Overview"
■■ "Remote Access Technical Preview"
■■ "Simplified Remote Access with DirectAccess: scenario overview"

See also "Understand and Troubleshoot Remote Access in Windows Server '8' Beta," which you can download from the Microsoft Download Center.

Simplified VDI deployment

Virtual desktop infrastructure (VDI) is an emerging alternative to the traditional PC-based desktop computing paradigm. With the VDI approach, users access secure, centrally managed virtual desktops running on virtualization hosts located in the datacenter. Instead of having a standard PC to work with, VDI users typically have less costly thin clients that have no hard drive and only minimal processing power.

A typical environment where the VDI approach can provide benefits might be a call center where users work in shifts using a shared pool of client devices. In such a scenario, VDI can provide greater flexibility, more security, and lower hardware costs than providing each user with his or her own PC. The VDI approach can also benefit organizations that frequently work with contractors because it eliminates the need to provide contractors with PCs and helps ensure that corporate intellectual property remains safely in the datacenter. A help desk also benefits from the VDI approach because it's easier to re-initialize failed virtual machines remotely than to rebuild standard PCs.

Although implementing a VDI solution may be less expensive than provisioning PCs to users, VDI can have some drawbacks. The server hardware for virtualization hosts running virtual desktops must be powerful enough to provide the level of performance that users have come to expect from desktop PCs. Networking hardware must also be fast enough to ensure that it doesn't become a performance bottleneck. And in the past, deploying and managing virtual desktops with previous Windows Server versions was more complex than deploying and managing PCs because it required deploying RDS with Hyper-V in your environment.

Windows Server 2012, however, eliminates the last of these drawbacks by simplifying the process by which virtual desktops are deployed and managed. The result is that VDI can now be a viable option even for smaller companies looking for efficiencies that can lead to cost savings for their organization.
Deployment types and scenarios

Windows Server 2012 introduces a new approach to deploying the Remote Desktop Services server role based on the type of scenario you want to set up in your environment:

■■ Session virtualization  Lets remote users connect to sessions running on a Remote Desktop Session Host to access session-based desktops and RemoteApp programs.

■■ VDI  Lets remote users connect to virtual desktops running on a Remote Desktop Virtualization Host to access applications installed on these virtual desktops (and also RemoteApp programs, if session virtualization is also deployed).

Whichever RDS scenario you choose to deploy, Windows Server 2012 gives you two options for deploying it:

■■ Quick Start  This option deploys all the required RDS role services on a single computer using mostly the default options and is intended mainly for test environments.

■■ Standard deployment  This option provides more flexibility concerning how you deploy different RDS role services to different servers and is intended for production environments.

RDS enhancements

Besides enabling scenario-based deployment of RDS role services like Remote Desktop Session Host, Remote Desktop Virtualization Host, Remote Desktop Connection Broker, and Remote Desktop Web Access, RDS in Windows Server 2012 includes other enhancements such as:

■■ A unified administration experience that allows you to manage your RDS-based infrastructure directly from Server Manager

■■ Centralized resource publishing that makes it easier to deploy and manage RemoteApp programs for both session virtualization and VDI environments

■■ A rich user experience using the latest version of Remote Desktop Protocol (RDP), including support for RemoteFX over WAN

■■ USB redirection, for enhanced device remoting in both session virtualization and VDI environments

■■ User profile disks that let you preserve user personalization settings across collections of sessions or pooled virtual desktops

■■ The ability to automate deployment of pooled virtual desktops by using a virtual desktop template

■■ Support for using network shares to store personal virtual desktops

■■ Support for storage migration between host machines when using pooled virtual desktops
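The same scenario choice can be made from Windows PowerShell with the RemoteDesktop module. The sketch below, with placeholder server names, shows the two deployment types side by side; the cmdlet names follow the Windows Server 2012 module, but verify the parameters against your build before use.

```powershell
Import-Module RemoteDesktop

# Session virtualization deployment (session-based desktops and RemoteApp)
New-RDSessionDeployment -ConnectionBroker "rdcb.contoso.com" `
    -WebAccessServer "rdweb.contoso.com" -SessionHost "rdsh.contoso.com"

# VDI deployment (virtual desktops on a Hyper-V virtualization host)
New-RDVirtualDesktopDeployment -ConnectionBroker "rdcb.contoso.com" `
    -WebAccessServer "rdweb.contoso.com" -VirtualizationHost "rdvh.contoso.com"
```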
Some of these enhancements are described in more detail later in this chapter in the section titled "Full Windows experience." Because session virtualization has been around much longer on the Windows Server platform, the remainder of this section focuses on VDI.

Virtual desktops and collections

A virtual desktop is a virtual machine running on a Hyper-V host that users can connect to remotely using RDS. A collection consists of one or more virtual desktops used in a VDI deployment scenario. Collections can be either managed or unmanaged:

■■ Managed collections  These are created from an existing virtual machine that has been sysprepped so that it can be used as a template for creating the other virtual desktops in the collection.

■■ Unmanaged collections  These are created from an existing set of virtual desktops, which you then add to the collection.

Virtual desktops can also be pooled or personal:

■■ Pooled virtual desktops  This type allows the user to log on to any virtual desktop in the pool and get the same experience. Any customizations performed by the user on the virtual desktop are saved on a dedicated user profile disk. (See the section titled "User Profile Disks" later in this chapter for more information.)

■■ Personal virtual desktops  This type permanently assigns a separate virtual desktop to each user account. Each time the user logs on, he or she gets the same virtual desktop, which can be customized as desired, with customizations being saved within the virtual desktop itself.

Table 5-1 summarizes some of the differences between pooled and personal virtual desktops when they are configured as managed virtual desktops, whereas Table 5-2 lists similar differences between them when they are configured as unmanaged virtual desktops.

TABLE 5-1  Comparison of pooled and personal managed virtual desktops

Capability                                                     | Pooled? | Personal?
New virtual desktop creation based on virtual desktop template |    ✓    |    ✓
Re-create virtual desktop based on virtual desktop template    |    ✓    |
Store user settings on a user profile disk                     |    ✓    |
Permanent user assignment to the virtual desktop               |         |    ✓
Administrative access on the virtual desktop                   |         |    ✓
TABLE 5-2  Comparison of pooled and personal unmanaged virtual desktops

Capability                                                     | Pooled? | Personal?
New virtual desktop creation based on virtual desktop template |         |
Re-create virtual desktop based on virtual desktop template    |         |
Store user settings on a user profile disk                     |    ✓    |
Permanent user assignment to the virtual desktop               |         |    ✓
Administrative access on the virtual desktop                   |         |    ✓

Deploying VDI

To see simplified VDI deployment at work, let's walk through the initial steps of a Quick Start VDI deployment. Begin by starting the Add Roles And Features Wizard from Server Manager. On the Select Installation Type page, select the Remote Desktop Services Scenario-based Installation option.

Next, select the Quick Start option on the Select Deployment Type page, and then choose the Virtual Desktop Infrastructure option on the Select Deployment Scenario page.

Then select a server from your server pool onto which the RDS role services will be deployed. A compatibility check is performed at this point to ensure that the selected server meets all the requirements for implementing the selected deployment scenario. If no compatibility issues are detected, the next wizard page appears, which prompts you to select a virtual disk template (a .vhd or .vhdx file) on which a VDI-capable client operating system like Windows 8 or Windows 7 has been installed, along with any locally installed applications needed on the virtual desktop. The Windows installation on this VHD must have been prepared by running sysprep /generalize on it so that it can function as a reference image for adding new virtual desktops to your collection.
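Sysprep is run inside the template virtual machine before its VHD is used. A typical invocation looks like the following; the /oobe and /shutdown switches are the usual companions to /generalize in an imaging workflow, though your process may differ.

```powershell
# Run inside the template VM: generalizes the image, resets it to OOBE,
# and shuts the VM down so its VHD can serve as a virtual desktop template.
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```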
Completing the wizard and clicking Deploy begins the process of deploying your VDI environment. Three RDS role services (Remote Desktop Connection Broker, RD Virtualization Host, and RD Web Access) are first installed on the selected server, which is then restarted to complete installation of these role services. A virtual desktop template is then created from the previously specified VHD file, and a new pooled virtual desktop collection named QuickVMCollection is created, containing two pooled virtual desktops based on the virtual desktop template.

The VDI deployment process also creates a new Hyper-V network switch named RDS Virtual and assigns the pooled virtual desktops to that switch.

Managing VDI

Once the Quick Start VDI deployment process is finished, you can manage your VDI environment by using the Remote Desktop Services option that now appears in Server Manager. For example, the Overview page of the Remote Desktop Services option provides visual information concerning your RDS infrastructure, virtualization hosts, and collections (see Figure 5-3). You can use the Remote Desktop Services option in Server Manager to configure your RDS role services, manage your virtualization hosts, create new collections, and perform other VDI-related tasks.
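The same management tasks can be scripted with the RemoteDesktop module. A hedged sketch, using placeholder broker and host names against the collection the Quick Start created, might look like this (verify parameter names against your build):

```powershell
Import-Module RemoteDesktop

# List the virtual desktop collections known to the connection broker
Get-RDVirtualDesktopCollection -ConnectionBroker "rdcb.contoso.com"

# Grow the pooled collection by two more desktops built from its template;
# the hashtable maps each virtualization host to a desktop count.
Add-RDVirtualDesktopToCollection -CollectionName "QuickVMCollection" `
    -VirtualDesktopAllocation @{"rdvh.contoso.com" = 2} `
    -ConnectionBroker "rdcb.contoso.com"
```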
FIGURE 5-3  The Remote Desktop Services option in Server Manager.

Learn more

For more information about simplified VDI deployment in Windows Server 2012, see the following topics in the TechNet Library:

■■ "Remote Desktop Services Technical Preview"
■■ "Remote Desktop Services overview"

Also see the following Understand and Troubleshoot Guides (UTGs), which can be downloaded from the Microsoft Download Center:

■■ "Understand and Troubleshoot Remote Desktop Services in Windows Server '8' Beta"
■■ "Understand and Troubleshoot Remote Desktop Services Desktop Virtualization in Windows Server '8' Beta"

For more information on RDS enhancements in Windows Server 2012, see the Remote Desktop Services (Terminal Services) Team Blog.

User-Device Affinity

Previous versions of the Windows platform have included three features for supporting roaming users, namely roaming user profiles (RUPs), Folder Redirection (FR), and Offline Files. What was missing was a way of associating each user profile with specific computers or devices. Windows Server 2012 and Windows 8 now provide such functionality in the form of User-Device Affinity, which lets you map a user to a limited set of computers on which RUP or FR is used. As a result, administrators can control the computers on which RUPs and offline files are stored.

User-Device Affinity benefits organizations by enabling new types of scenarios. For example, you could configure the environment so that the user's data and settings roam between the user's desktop PC and his or her laptop but cannot roam to any other computers. That way, when the user logs on to a shared computer in the public foyer of the building, there is no danger that the user's personal or corporate data will be left behind on that computer.

Configuring User-Device Affinity

User-Device Affinity can be implemented using Group Policy by configuring the Evaluate User Device Affinity Configuration For Roaming Profiles And Folder Redirection policy setting found under System\User State Technologies. When you enable this policy setting, you can select from three possible configuration options:

■■ Apply To Neither Roaming Profiles Nor Folder Redirection  Disables the primary computer check when logging on

■■ Apply To Roaming Profiles And Folder Redirection Only  Roams the user profile and applies FR only when logging on to primary computers

■■ Apply To Roaming Profiles Only  Roams the user profile when logging on to primary computers, and always applies FR

Learn more

For more information about User-Device Affinity in Windows Server 2012 and Windows 8, see the topic "Folder Redirection, Offline Files, and Roaming User Profiles overview" in the TechNet Library.
Enhanced BranchCache

BranchCache was first introduced in Windows Server 2008 R2 and Windows 7 as a way of caching content from file and web servers across the WAN locally at branch offices. When another client at the branch office requests the same content, the client downloads it from the local cache instead of downloading it across the WAN. By deploying BranchCache, you can increase the network responsiveness of centralized applications that are accessed from remote offices, with the result that branch office users have an experience similar to being directly connected to the central office.

BranchCache has been enhanced in Windows Server 2012 and Windows 8 in a number of ways. For example:

■■ The requirement of having a GPO for each branch office has been removed, simplifying the deployment of BranchCache.

■■ BranchCache is tightly integrated with the File Server role and can use the new Data Deduplication capabilities of Windows Server 2012 to provide faster download times and reduced bandwidth consumption over the WAN.

■■ When identical content exists in a file, or in multiple files, on either the content server or the hosted cache server, BranchCache stores only a single instance of the content, and clients at branch offices download only a single instance of duplicated content. The results are more efficient use of disk storage and savings in WAN bandwidth.

■■ BranchCache provides improved performance and reduced bandwidth usage by performing offline calculations that ensure content information is ready for the first client that requests it.

■■ New tools are included in Windows Server 2012 that allow you to preload cacheable content onto your hosted cache servers even before the content is first requested by clients.

■■ Cached content is encrypted by default to make it more secure.

■■ Windows PowerShell can be used to manage your BranchCache environment, which enables automation that makes it simpler to deploy BranchCache in cloud computing environments.

Learn more

For more information about BranchCache and related technologies in Windows Server 2012, see the following topics in the TechNet Library:

■■ "BranchCache Overview"
■■ "Data Deduplication overview"
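The preloading workflow mentioned above can be scripted with the BranchCache PowerShell module in Windows Server 2012. A sketch, with placeholder share and staging paths, might look like the following; check the cmdlet parameters against your build:

```powershell
# On the content server: generate content information (hashes) for a share
# and stage the data so it can be packaged for transport.
Publish-BCFileContent -Path "D:\Shares\Projects" -StageData

# Package the staged data for the branch office.
Export-BCCachePackage -Destination "D:\Staging\PreloadPackage"

# On the hosted cache server at the branch: enable the role and import
# the package so content is cached before the first client request.
Enable-BCHostedServer
Import-BCCachePackage -Path "E:\PreloadPackage\PeerDistPackage.zip"
```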
Branch Office Direct Printing

Branch Office Direct Printing is a new feature of Windows Server 2012 that enables print jobs from a branch office to be sent to local printers without first being sent to a print server on the network. When a print job is initiated from a branch office, the printer configuration and drivers are still obtained from the print server if needed, but the print job itself goes directly to the local printer at the branch office.

Implementing this feature has several benefits, including reducing printing time at branch offices and making more efficient use of costly WAN bandwidth. In addition, costs can be reduced because you no longer need to deploy expensive WAN optimization appliances at branch offices specifically for printing purposes.

Enabling Branch Office Direct Printing

Branch Office Direct Printing is a new feature in Windows Server 2012 designed to reduce network bandwidth in printing situations where your print server is centralized or located across a WAN link. When Branch Office Direct Printing is enabled, print traffic from the client to the printer does not need to route through the server. Instead, the client gets the port and driver information from the server and then prints directly to the printer, saving the traversal of data across a wide area connection. Branch Office Direct Printing can be enabled on an individual print queue and requires no interaction from the client. Once a print queue is established on a client, its information is cached in case the centralized print server becomes unavailable. This is ideal in situations where local printing must remain available during a WAN outage.

To enable Branch Office Direct Printing, open the Print Management console, select the printer queue that you wish to designate as a branch printer, and select Enable Branch Office Direct Printing from the Actions menu.

John Yokim
Account Technology Strategist, Microsoft Mid-Atlantic District
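The same per-queue setting can also be applied with the PrintManagement cmdlets introduced in Windows Server 2012, where the -RenderingMode parameter of Set-Printer controls where jobs are rendered. The printer and server names below are placeholders:

```powershell
# Enable Branch Office Direct Printing on one queue of a central print server
Set-Printer -Name "BranchLaser1" -ComputerName "printsrv.contoso.com" `
    -RenderingMode BranchOffice

# Revert the queue to normal server-side rendering
Set-Printer -Name "BranchLaser1" -ComputerName "printsrv.contoso.com" `
    -RenderingMode SSR
```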
Learn more

For more information about Branch Office Direct Printing, see the topic "Print and Document Services overview" in the TechNet Library. See also "Understand and Troubleshoot Printing in Windows Server '8' Beta," which can be downloaded from the Microsoft Download Center.

Full Windows experience

Today's users expect and demand the full Windows experience, even when they work in virtual environments. Windows Server 2012 delivers this experience better than ever before with enhancements to RemoteFX, USB redirection, and the new User Profile Disks feature. This section introduces these new features and enhancements.

RemoteFX enhancements

RemoteFX was first introduced in Windows Server 2008 R2 as a way of delivering a full Windows experience over RDP across a wide variety of client devices. RemoteFX is part of the Remote Desktop Services role and is intended mainly for use in VDI environments to support applications that use rich media, including 3-D rendering. RemoteFX uses two capabilities to provide remote users with a rich desktop environment similar to the local desktop environment that PC users enjoy:

■■ Host-side rendering  Allows graphics to be rendered on the host instead of the client by utilizing the capabilities of a RemoteFX-capable graphics processing unit (GPU) on the host. Once rendered on the host, graphics are delivered to the client over RDP in an adaptive manner as compressed bitmap images. In addition, Windows Server 2012 now supports multiple GPU cards as well as a software GPU.

■■ GPU virtualization  Exposes a virtual graphics device to a virtual machine running on a RemoteFX-capable host and allows multiple virtual desktops to share a single GPU on the host.

RemoteFX can benefit organizations by enabling flexible work scenarios like hot-desking and working from home. By making the virtual desktop experience similar to that of traditional PCs, RemoteFX can make VDI a more feasible solution for organizations that want increased data security and simplified management of the desktop environment.

RemoteFX has been enhanced in Windows Server 2012 in a number of ways, including the following:

■■ RemoteFX is integrated throughout the RDS role services instead of being installed as its own separate role service, and it is installed automatically whenever the Remote Desktop Virtualization Host role service is installed.

■■ The performance when delivering streaming media content over RDP has been greatly improved.

■■ RemoteFX can dynamically adapt to changing network conditions by using multiple codecs to optimize how content is delivered.

■■ RemoteFX can choose between Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to optimize performance when sending RDP traffic over the WAN (this is called RemoteFX for WAN).

■■ Support for multi-touch gestures and manipulations in remote sessions is included.

■■ Improved multimonitor support over RDP allows a virtual machine to support up to four monitors, regardless of their resolution.

■■ You can now use VMConnect to manage virtual machines that have the RemoteFX 3D Video Adapter installed. (In the previous version of Windows Server, you had to use a Remote Desktop connection to manage these virtual machines.)

Configuring RemoteFX

To use RemoteFX, the host machine must:

■■ Support hardware-assisted virtualization and Data Execution Prevention (DEP)

■■ Have at least one GPU listed as supporting RemoteFX in the Windows Server Catalog

■■ Have a CPU that supports Second Level Address Translation (SLAT); note that Intel refers to SLAT as Extended Page Tables (EPT), whereas AMD refers to it as Nested Page Tables (NPT)

To configure a Windows Server 2012 host to use RemoteFX, you can use the new GPU management interface in the Hyper-V Settings of the host (see Figure 5-4). This interface lets you select from the available RemoteFX-capable GPUs on the host (if any) and then enable or disable RemoteFX functionality for the selected GPU. The interface also shows details concerning each RemoteFX-capable GPU on the host.

Learn more

For more information about RemoteFX in Windows Server 2012, see the following topics in the TechNet Library:

■■ "Remote Desktop Services Technical Preview"
■■ "Microsoft RemoteFX"
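Once RemoteFX is enabled on a host GPU, the RemoteFX 3D Video Adapter can also be attached to individual virtual machines from the Hyper-V PowerShell module. The VM name and settings below are placeholders, and the cmdlet names follow the Windows Server 2012 module:

```powershell
Import-Module Hyper-V

# List the physical GPUs available for RemoteFX on this host
Get-VMRemoteFXPhysicalVideoAdapter

# Attach the RemoteFX 3D Video Adapter to a virtual desktop
Add-VMRemoteFx3dVideoAdapter -VMName "VDI-Desktop-01"

# Configure its monitor count and maximum resolution
Set-VMRemoteFx3dVideoAdapter -VMName "VDI-Desktop-01" `
    -MonitorCount 2 -MaximumResolution "1920x1200"
```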
FIGURE 5-4  Configuring RemoteFX on a Hyper-V host running Windows Server 2012.

Enhanced USB redirection

USB redirection in RemoteFX is an important ingredient in establishing parity of experience between virtual desktops and traditional PCs. USB redirection was first introduced in Windows 7 Service Pack 1 and Windows Server 2008 R2 Service Pack 1 to support RemoteFX VDI scenarios. USB redirection occurs at the port protocol level and enables redirection of a wide variety of universal serial bus (USB) devices, including printers, scanners, webcams, Voice over Internet Protocol (VoIP) headsets, and biometric devices. USB redirection does not require hardware drivers to be installed on the virtual machines; instead, the necessary drivers are installed on the host.

In Windows 7 SP1 and Windows Server 2008 R2 SP1, RemoteFX USB redirection was supported only within virtual desktops running on a Remote Desktop Virtualization Host. New in Windows Server 2012 and Windows 8 is support for USB redirection for Remote Desktop Session Host. This enables new kinds of scenarios in which RemoteFX can bring a richer desktop experience to businesses that implement session virtualization solutions.

Other enhancements to USB redirection in Windows Server 2012 include the following:

■■ USB redirection for Remote Desktop Virtualization Host no longer requires installing the RemoteFX 3D Video Adapter on the virtual machine.

■■ USB redirection for Remote Desktop Session Host is isolated to the session in which the device is being redirected. This means that users in one session cannot access USB devices redirected in a different session.

Learn more

For more information about RemoteFX USB redirection in Windows Server 2012, see the following topics in the TechNet Library:

■■ "Remote Desktop Services Technical Preview"
■■ "Microsoft RemoteFX"

User Profile Disks

Preserving user state is important in both session virtualization and VDI environments. Users who have worked in traditional PC environments are used to being able to personalize their desktop environment and applications by configuring settings such as desktop backgrounds, desktop shortcuts, application settings, and other customizations. When these same users encounter session virtualization or VDI environments, they expect the same personalization capabilities that traditional PCs provide.

In previous versions of Windows Server, preserving user state information for sessions and virtual desktops required using Windows roaming technologies like RUPs and FR. This approach had certain limitations, however. For one thing, implementing RUP and FR adds complexity to deploying RDS for session virtualization or VDI. And for VDI deployments in particular, RUP/FR restricted the solution to using personal virtual desktops, because pooled virtual desktops did not support preserving user state with RUP/FR.

Other problems could arise when using RUP/FR with RDS in previous versions of Windows Server. For example, if the user's RUP was accidentally used outside the RDS environment, data could be lost, making the profile unusable. RUP/FR could also increase the time it takes for a user to log on to a session or virtual desktop. Finally, poorly designed applications that didn't write user data and settings to the proper location might not function as expected when RUP/FR was used as a roaming solution.

Windows Server 2012 solves these problems with the introduction of User Profile Disks, which store user data and settings for sessions and virtual desktops in a separate VHD file that can be stored on a network share.
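User profile disks can also be enabled from Windows PowerShell. The sketch below uses the RemoteDesktop module's collection-configuration cmdlet as it appears in Windows Server 2012 against a session collection; the share path, size cap, and collection and server names are placeholders:

```powershell
Import-Module RemoteDesktop

# Enable user profile disks for an existing session collection, storing
# the per-user VHDs on an SMB share with a 10-GB size cap per user.
Set-RDSessionCollectionConfiguration -CollectionName "CallCenter" `
    -EnableUserProfileDisk `
    -DiskPath "\\fileserver\UserProfileDisks" `
    -MaxUserProfileDiskSizeGB 10 `
    -ConnectionBroker "rdcb.contoso.com"
```

Remember that the host's computer account needs at least write access to the share, just as in the wizard-based procedure.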
Configuring User Profile Disks

You configure a user profile disk for a virtual desktop collection when you create the collection. Before you do this, however, you need to create a Server Message Block (SMB) file share on the network where your user profile disks will be stored, and configure permissions on the file share so that the computer account of your host has at least write access.

Begin by starting the Create Collection wizard by clicking Create Virtual Desktop Collections on the Overview page of the Remote Desktop Services section of Server Manager (see Figure 5-3 earlier in this chapter). Then, on the Specify User Profile Disks page of the Create Collection wizard, make sure Enable User Profile Disks is selected and type the Universal Naming Convention (UNC) path of the file share where you'll store your user profile disks.

Once your new collection has been created, you can further configure your user profile disk settings by selecting the collection on the Collections page of the Remote Desktop Services section of Server Manager, clicking the Tasks control in the Properties area, and clicking Edit Properties.

On the Virtual Desktop Collection page of the properties of your collection, you can customize how your user profile disk will be used. By default, all user profile data and settings are stored on the user profile disk, but you can adjust this by selecting folders to exclude from the user profile disk. Alternatively, you can configure which specific types of items should be stored on the user profile disk; for example, only the user's Documents folder and user registry data.
Learn more

For more information about user profile disks, see the topic "Remote Desktop Services Technical Preview" in the TechNet Library. Also see "Understand and Troubleshoot Remote Desktop Services in Windows Server '8' Beta," which can be downloaded from the Microsoft Download Center.

Enhanced security and compliance

Security and compliance are two areas that have been significantly extended in Windows Server 2012. Dynamic Access Control now allows centralized control of access and auditing functions. BitLocker Drive Encryption has been enhanced to make it easier to deploy, manage, and use. And implementing Domain Name System Security Extensions (DNSSEC) to safeguard name resolution traffic can now be performed using either user interface (UI) wizards or Windows PowerShell. This concluding section covers these new features and enhancements.

Dynamic Access Control

Controlling access and ensuring compliance are essential requirements for IT systems in today's business environment. Windows Server 2012 includes enhancements that provide improved authorization for file servers to control and audit who is able to access data on them. These enhancements are described under the umbrella name Dynamic Access Control and enable automatic and manual classification of files, central access policies for controlling access to files, central audit policies for identifying who accessed files, and the application of Rights Management Services (RMS) protection to safeguard sensitive information.

Dynamic Access Control is enabled in Windows Server 2012 through the following new features:

■■ A new authorization and audit engine that supports central policies and can process conditional expressions

■■ A redesigned Advanced Security Settings Editor that simplifies configuration of auditing and determination of effective access

■■ Kerberos authentication support for user and device claims

■■ Enhancements to the File Classification Infrastructure (FCI) introduced in Windows Server 2008 R2

■■ RMS extensibility that allows partners to provide solutions for applying Windows Server–based RMS to non-Microsoft file types

Implementing Dynamic Access Control in your environment requires careful planning and a number of steps, including configuring Active Directory, setting up a file classification scheme, and more. For a full description of what's involved in deploying Dynamic Access Control, see the "Understanding and Troubleshooting Guide" referenced in the "Learn more" section at the end of this topic.
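To give a sense of how the pieces fit together, the sketch below outlines the Active Directory side of a central access policy using the AD PowerShell cmdlets added for Dynamic Access Control. The claim name, rule name, and SDDL conditional expression are illustrative placeholders, and a real deployment involves the additional planning steps noted above:

```powershell
Import-Module ActiveDirectory

# Create a user claim type sourced from the AD "department" attribute
New-ADClaimType -DisplayName "Department" -SourceAttribute "department"

# Create a central access rule; the ACL string is an SDDL conditional
# expression granting full access only when the Department claim matches
New-ADCentralAccessRule -Name "Finance Documents Rule" `
    -CurrentAcl 'D:(XA;;FA;;;AU;(@USER.Department == "Finance"))'

# Wrap the rule in a central access policy that file servers can apply
New-ADCentralAccessPolicy -Name "Finance Policy"
Add-ADCentralAccessPolicyMember -Identity "Finance Policy" `
    -Members "Finance Documents Rule"
```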
Just to give you a taste, however, let's look briefly at the redesigned Advanced Security Settings Editor that simplifies the configuration of auditing and determination of effective access. As in previous versions of Windows, the advanced permissions for a file or folder can be opened from the Security tab of the Properties dialog box for the file or folder. As you can see here, the Permissions tab of the Advanced Security Settings Editor in Windows Server 2012 and Windows 8 looks fairly similar to the one in previous versions of Windows:

However, the Effective Permissions tab of the Advanced Security Settings Editor in earlier versions of Windows has been replaced with a tab named Effective Access, which lets you choose not only the user or group being used for accessing the file or folder, but also the device:
The Auditing tab of the Advanced Security Settings Editor in earlier versions of Windows has been completely redesigned and now allows you to add auditing entries, such as the one shown below, that can include conditions to limit their scope:

For more information on these user interface improvements, see the following sidebar.

New Effective Access user interface
Windows Server 2012 provides an improved way for administrators to help resolve authorization problems. The new Advanced Security Settings Editor provides a new Effective Access tab that shows simulated access results of a user, computer, or group against targeted resources like a file or folder. The newly designed Effective Access tab provides substantial improvements over its predecessor, the Effective Permissions tab, in the following ways:
■ Simulates access accurately, both locally and remotely
■ Evaluates conditional permission entries, Share permissions, and Central Access Policies
■ Enables administrators to insert user and device claims before evaluating access
■ Enables administrators to delegate troubleshooting of access issues
The Advanced Security Settings Editor remotely tells a file server to simulate a logon of the selected user and device, inserts additional user and device claims in the evaluation, and gathers permissions from the file system, share, and Central Access Policies.
The Effective Access tab represents the easiest way to diagnose problems with users accessing files and folders on Windows Server 2012 file servers. Use the results from the Effective Access tab to determine which aspect of access control to troubleshoot next. Typically, the Effective Access tab identifies possible problems with red X's in the Access Limited By column.

The Effective Access dialog box's Access Limited By column for file system resources can show Share, File Permissions, and the names of any Central Access Policy that applies to the folder on the file server. The Access Limited By column indicates the point of access control that Windows perceives is responsible for limiting access to files or folders. The Effective Access tab lists all points of access control that limit the specified permission for the designated security principal (and, optionally, device). Therefore, each entry in the Access Limited By column can show one or more limitations. Each limitation listed either specifically limits the security principal's access or does not provide access to the security principal.

For example, a security principal is implicitly denied access when none of the points of access control provides access. In this scenario, the Effective Access tab shows limitations for all points of access control (Share, File Permissions, and Central Access Policies applied to the folder). Each point of access control requires investigation to ensure that it allows the security principal the designated access.

Mike Stephens
Sr. Support Escalation Engineer, Windows Distributed Systems

Learn more
For more information about Dynamic Access Control in Windows Server 2012, see the following topics in the TechNet Library:
■ "Dynamic Access Control Technical Preview"
■ "Dynamic Access Control: Scenario Overview"
■ "What's New in Security Auditing"
Also see "Understand and Troubleshoot Dynamic Access Control in Windows Server '8' Beta," which can be downloaded from the Microsoft Download Center.

BitLocker enhancements
BitLocker Drive Encryption is a data protection feature first introduced in Windows Vista and Windows Server 2008. BitLocker encrypts entire disk volumes to help safeguard sensitive business data from theft, loss, or inappropriate decommissioning of computers.
BitLocker has been enhanced in several ways in Windows Server 2012 and Windows 8:
■ It's now easy to provision BitLocker before deploying the operating system onto systems. This can be done either from the Windows Preinstallation Environment (WinPE) or by using Microsoft Deployment Toolkit (MDT) 2012 to deploy your Windows installation.
■ The process of encrypting a volume with BitLocker can occur more rapidly in Windows Server 2012 and Windows 8 by choosing to encrypt only the used disk space instead of both used and unused disk space, which was the only option in previous versions of Windows (see Figure 5-5).
■ Standard users can change their BitLocker personal identification number (PIN) or password for the operating system volume, or the BitLocker password for fixed data volumes. This change makes it easier to manage BitLocker-enabled clients because it means that users can choose PINs and passwords that are easier for them to remember.
■ A new feature called BitLocker Network Unlock allows a network-based key protector to be used for automatically unlocking BitLocker-protected operating system volumes on domain-joined computers when these computers are restarted. This can be useful when you need to perform maintenance on computers and the tasks you need to perform require a restart.
■ BitLocker supports a new kind of enhanced storage device called Encrypted Hard Drive, which offers the ability to encrypt each block on the physical drive and not just volumes on the drive.
■ BitLocker can now be used with failover clusters and cluster shared volumes.

FIGURE 5-5  Encrypting only used disk space when enabling BitLocker on a volume.

Learn more
For more information about BitLocker in Windows Server 2012 and Windows 8, see the following topics in the TechNet Library:
■ "What's New in BitLocker"
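The faster used-space-only encryption option can also be selected from Windows PowerShell using the BitLocker module's Enable-BitLocker cmdlet. The following is a sketch rather than a procedure from this book: the drive letter and protector choice are illustrative, and the commands must run in an elevated session on Windows Server 2012 or Windows 8 with the BitLocker feature installed.

```
# Sketch only: requires the BitLocker feature and an elevated session.
# Encrypt only the used space on the D: volume (the drive letter is an
# example), protecting the volume with a recovery password.
Enable-BitLocker -MountPoint "D:" -EncryptionMethod Aes128 `
    -UsedSpaceOnly -RecoveryPasswordProtector

# Check encryption status and progress; on a mostly empty volume,
# used-space-only encryption finishes much sooner than encrypting
# the entire volume.
Get-BitLockerVolume -MountPoint "D:"
```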
■ "Encrypted Hard Drive"
Also see "Understand and Troubleshoot BitLocker in Windows Server '8' Beta," which can be downloaded from the Microsoft Download Center.

DNSSEC
Domain Name System Security Extensions (DNSSEC) is a suite of extensions that adds security to the DNS protocol. DNSSEC enables all the records in a DNS zone to be cryptographically signed and provides origin authority, data integrity, and authenticated denial of existence. DNSSEC is important because it allows DNS servers and resolvers to trust DNS responses by using digital signatures for validation, ensuring that the responses they return have not been modified or tampered with in any way.

DNSSEC functionality was first included in the DNS Server role of Windows Server 2008 R2 and has been significantly enhanced in Windows Server 2012. The following are a few of the enhancements included in DNSSEC on Windows Server 2012:
■ Support for Active Directory–integrated DNS scenarios, including DNS dynamic updates in DNSSEC-signed zones
■ Support for updated DNSSEC standards, including NSEC3 and RSA/SHA-2, and validation of records signed with updated DNSSEC standards (NSEC3, RSA/SHA-2)
■ Automated trust anchor distribution through Active Directory, with easy extraction of the root trust anchor and automated trust anchor rollover support per RFC 5011
■ An updated user interface with deployment and management wizards
■ Windows PowerShell support for configuring and managing DNSSEC

Configuring DNSSEC on your DNS servers can now be done with the DNS Manager console. Simply right-click a zone and select Sign The Zone under the DNSSEC menu option:
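Because Windows Server 2012 also adds Windows PowerShell support for DNSSEC, the same signing operation can be scripted with the DnsServer module instead of the DNS Manager console. The zone name below is an illustrative example, and the commands assume a Windows Server 2012 DNS server hosting that zone:

```
# Sketch only: run on a Windows Server 2012 DNS server hosting the zone.
# Sign the zone using the default signing settings (signing keys are
# generated automatically); custom KSKs and ZSKs can be created first
# with Add-DnsServerSigningKey if needed.
Invoke-DnsServerZoneSign -ZoneName "corp.contoso.com" -SignWithDefault -Force

# Review the resulting DNSSEC settings for the signed zone.
Get-DnsServerDnsSecZoneSetting -ZoneName "corp.contoso.com"
```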
Selecting Sign The Zone opens the Zone Signing Wizard. By following the prompts, you can select the Key Master for the zone; configure a Key Signing Key (KSK), used for signing other keys; configure a Zone Signing Key (ZSK), used for signing the zone data; configure Next Secure (NSEC) resource records to provide authenticated denial of existence; configure distribution of Trust Anchors (TAs) and rollover keys; and configure values for DNSSEC signing and polling:

Learn more
For more information about DNSSEC in Windows Server 2012, see the topic "Domain Name System (DNS) Overview" in the TechNet Library. Also see "Understand and Troubleshoot DNS Security Extensions (DNSSEC) in Windows Server '8' Beta," which can be downloaded from the Microsoft Download Center.

Conclusion
We hope that you've enjoyed this book, which has provided you with a technical overview of many of the exciting new features and enhancements now available in Windows Server 2012. But the best way of getting to know what Windows Server 2012 is really capable of is to try it out! So why not visit the Microsoft Server and Cloud Platform home page today, download an evaluation version of Windows Server 2012, and put it through its paces. We're sure you'll be amazed!

—Mitch Tulloch with the Windows Server Team
Index

Symbols and Numbers
[::]:Port, 165
_ (underscore), 171
0.0.0.0:Port, 165
512e emulation method, 60
802.1p, 46–47

A
AAA (Authentication, Authorization, and Access Control), 82
Absolute minimum bandwidth, 47
Access. See Broad network access; Remote access; Security
Access Control Lists (ACLs), 20, 30, 50
ACLs (Access Control Lists), 20, 30, 50
Active Directory, 86, 96–98, 141, 147–151
Active Directory Administrative Center (ADAC), 149–150
Active Directory Certificate Services (AD CS), 86
Active Directory Domain Services Configuration Wizard, 147
Active Directory Federation Services (AD FS), 11, 81, 86
Active Directory Recycle Bin, 150
Active Directory Rights Management Services (AD RMS), 15, 86
AD CS (Active Directory Certificate Services), 86
AD FS (Active Directory Federation Services), 11, 81, 86
AD RMS (Active Directory Rights Management Services), 15, 86
ADAC (Active Directory Administrative Center), 149–150
Add Roles and Features Wizard, 143, 154, 167, 195, 207
Add Servers, 144
Add-PswaAuthorizationRule cmdlet, 155
Add-VMNetworkAdapterACL cmdlet, 50
Adprep.exe, 147
Advanced Configuration deployment scenario, 194
Affinity
  processor, 161
  User-Device, 191, 212
All Servers section, Server Manager, 144
Anti-Affinity, 90, 103
AnyNode tasks, 98
APIs (application programming interfaces), 188
Application Initialization, 175–176, 184–186
Application pools, 161–162, 172–175
Application programming interfaces (APIs), 188
Application Request Routing (ARR) module, 87
ApplicationHost.config file, 175, 183–184
Applications, server, 87–88
ARR (Application Request Routing) module, 14, 186
Assigned memory, 63–64
Audit policies, 15, 221, 223
Authentication, Authorization, and Access Control (AAA), 82
Authorization, granting, 155, 223–224
Automation, 141
Auto-start property, 102
Availability. See also High availability solutions
  continuous, 88–91, 139
  enhancements to, 12
  hardware requirements for, 85–86
  of Dynamic Host Configuration Protocol servers, 129–130

B
Backup and restore solutions, 73, 86, 88, 92
Backup power, 85
Bandwidth management, 26, 45–48, 66–67
Behind An Edge Device, 198
Binaries, 129, 147
BitLocker Drive Encryption, 92, 221, 224–226
Boot storms, 102
Branch Office Direct Printing, 191, 214–215
BranchCache, 15, 191, 213
Broad network access, 8–9
BYOD environments, 14

C
Capturing extensions, 22
CAU (Cluster-Aware Updating), 90, 107–111
CCS:Port, 165
CCS (Central Certificate Service), 170–172
Central Certificate Service (CCS), 170–172
Centralized Certificate node, 168–169
Centralized SSL Certificate Support, 14, 165–172
Certificates
  SSL, 163–172
  wildcard, 165, 171
Chkdsk, 88, 124–125
Cloned domain controllers, 148–149
Cloud computing
  attributes of, 8–10
  business reasons for choosing, 1
  business requirements of, 8
  components of, 6–8
  service models for, 4–7
  technical requirements for, 6–8
Cloud providers, 31–32, 46
Cluster Service, 97–98, 100, 103
Cluster Shared Volumes File System (CSVFS), 92
Cluster Shared Volumes Version 2 (CSVv2), 18, 67, 89, 92, 97–98
Cluster-Aware Updating (CAU), 90, 107–111
Clusters
  guest, 90–91, 115
  host, 90
  migration of, 94
  placement policies for, 100
  shared disk architecture and, 115
  validation, 94–95
ClusterWide tasks, 98
Collections, virtual desktop, 206, 210–211, 219–220
Competitive product analysis, 139–140
Compute virtualization, 17, 90
Configuration Editor, 183–186
Configuring
  Application Initialization, 175
  bandwidth settings, 26
  cluster tasks, 98
  dynamic IP address filtering, 177–178
  Dynamic Memory, 61–62
  FTP Logon Attempt Restrictions, 181–183
  Hyper-V Replica Broker, 78–80
  network metering port ACLs, 50
  remote access, 198–200
  RemoteFX, 216–217
  scale-out file servers, 92–93
  SNI, 164
  SSL certificate storage, 167–170
  User Profile Disks, 219–220
  User-Device Affinity, 212
  virtual switches, 23–24
  Windows NIC Teaming, 121–123
Connection Broker, 210
Connectivity
  cross-premises, 11, 81
  remote, 192–194
Consolidation, server, 8
Content
  cached, 213
  help, 156
  static, 175
Continuous availability, 88–91, 139
Converged networks, 46
Cores, CPU, 160–161
Costs, reduction of, 1–2, 13, 130–139
CPU sockets, 160
CPU throttling, IIS, 14, 172–174
CPUs, 160–161
Create Collection Wizard, 219–220
Creating
  server groups, 145
  storage pools, 132–133
  Virtual Fibre Channel Storage Area Networks (SANs), 113–115
CSVFS (Cluster Shared Volumes File System), 92
CSVv2 (Cluster Shared Volumes Version 2), 18, 67, 89, 92, 97–98
Customer addresses (CA), 33

D
Dashboard, Server Manager, 141–142
Data Center Bridging (DCB), 45, 48
Data deduplication, 9, 94, 213
Data protection, 11, 224–226
Data providers, 25
Data transfer, 58–59
Datacenters
  hardware requirements for availability in, 85–86
  IP addressing issues and, 32
  operational challenges in, 35–36
DCB (Data Center Bridging), 45, 48
Dcpromo.exe, 147
Delegated administration, 156
Denial-of-Service (DoS), 82, 180
Deployment
  DirectAccess, 194–200
  domain controllers, 147–148
  Quick start, 205, 207–208, 210
  scenarios, 194, 206, 208
  types, 205, 208
  Virtual desktop infrastructure (VDI), 204–210
DFS-R (Distributed File Services Replication), 94
DHCP. See Dynamic Host Configuration Protocol (DHCP)
Diagnostics, enhanced, 26
Differentiated Services Code Point (DSCP), 47
DirectAccess, 15, 191–204
  advantages over VPNs, 202–203
  connection properties, 193–194
  deployment, 194–200
  overview, 192–193
Disaster recovery, 18, 39–40, 73–81, 86
Disks
  large-sector, 59–60
  types of, supported by Storage Spaces feature, 131–132
  VHDX format, 72
  virtual. See Virtual disks
Distributed File Services Replication (DFS-R), 94
DNS (Domain Name System), 86
DNSSEC (Domain Name System Security Extensions), 221, 226–227
Domain controllers, 86, 97–98, 141, 147–149
Domain Name System (DNS), 86
Domain Name System Security Extensions (DNSSEC), 221, 226–227
DoS (Denial-of-Service), 82, 180
DSCP (Differentiated Services Code Point), 47
Dynamic Access Control, 221–224
Dynamic Host Configuration Protocol (DHCP)
  Guard, 25, 27, 29
  Server Failover, 88, 129–130
Dynamic IP Address Restrictions, 176–180
Dynamic Memory, 17, 53, 60–65
Dynamic quorum, 106
Dynamic teaming, 121

E
Edge (network topology), 198
Elasticity, 7, 9, 13, 82, 159
Enable Replication Wizard, 76–78
Encapsulation, 34
Encryption, 66, 92, 221, 224–226
ETL (Event Trace Log), 25–26
ETW (Event Tracing for Windows) data providers, 25
Event Trace Log (ETL), 25–26
Event tracing, 26
Event Tracing for Windows (ETW) data providers, 25
Express setup deployment scenario, 194
Extensible Application Markup Language (XAML), 153
Extensible virtual switches. See Hyper-V Extensible Switch
Extensions, 21–22. See also Domain Name System Security Extensions (DNSSEC)

F
Failback setting, 101
Failover Cluster Manager, 42, 78–80, 92, 96, 99, 103, 107–108
Failover cluster nodes
  failure of, 100
  maintenance, 106–107
  updating, 107–111
  vote weights, 106
Failover clustering, 52–53, 86–88, 91–103, 111–112, 129–130
Failover Clustering feature, 37, 87, 91–92, 94, 111
Failover, transparent, 12, 67, 88, 117
Failure, solutions to avoid, 85–88
Fake requests, identification of, 175–176
Fast Track, 3
Feature-based installation, 195
Features on Demand, 88, 129
Fibre Channel. See Virtual Fibre Channel
File and storage management, 146
File Server Resource Manager (FSRM), 94
File servers, 87, 92–94, 99, 116
File shares, VSS for SMB, 67
Filter drivers, 21
Filtering
  dynamic IP address, 176–178
  extensions, 22
  hardware packet, 54
  static IP, 176
Firewalls, 179, 187
Folders, shared, 38–39
Forwarding extensions, 22
FSRM (File Server Resource Manager), 94
FTP Logon Attempt Restrictions, 180–183
FTP servers, 180–183

G
Generic teaming, 121
Get-ChildItem cmdlet, 156
Get-Command cmdlet, 68, 78, 123, 152
Get-NetLbfoTeam cmdlet, 123
Get-VM cmdlet, 28, 49, 64
Get-VMSwitchExtension cmdlet, 23
GPU (Graphics Processing Units) virtualization, 215
Graphics Processing Units (GPU) virtualization, 215
Group Policy, 46, 193–194, 212
Grouping, SSL certificate, 169–170
Groups. See Server groups
Guest clustering, 87, 90–91, 111–115
Guest NUMA feature, 17
Guest operating systems, 11, 21, 53, 65, 111–112
GUI server installation, 125–127

H
Hackers, blocking, 180–183
Handshake protocol, WebSocket, 187–188
Hard affinity, 161
Hardware
  acceleration feature, 54–55
  availability requirements, 85–86
  NUMA-aware, 160–162
HAVMs (highly available virtual machines), 90
Help content, 156
High availability solutions
  applications, 87
  backup and restore solutions for, 88
  infrastructure, 86–87
  overview, 85–88
High Availability Wizard, 92–93, 112
Host clustering, 90, 111
Host headers, 163–164
Host memory, 51–52, 100
Host operating systems, 54, 107
Host processors, 51–52
Hosting providers, 19, 47–50
Hostname:Port, 165
Host-side rendering, 215
HTML 5, 189
HTTP 101 response, 187
HTTP 401 Access Denied status messages, 178
HTTP 403 Access Forbidden status messages, 177–178
HTTP 404 Not Found status messages, 178
Hybrid clouds, 4, 13
Hyper-V, 5, 8, 10, 87
  bandwidth management, 26, 45–48
  benefits of SMB 3 for, 68
  competitive advantages of, 140
  Dynamic Memory, 17, 53, 60–65
  GPU management interface settings, 216
  networking, 22
  quality of service (QoS), 46–48
  Virtual Fibre Channel, 11, 65, 90, 112–115
Hyper-V Extensible Switch, 18, 21–31. See also Virtual switches
Hyper-V Manager, 75–78
Hyper-V Replica, 18, 73–81, 87
Hyper-V Replica Broker, 78–80

I
IaaS (Infrastructure as a service), 5–6
Identity federation, 81
IIS (Internet Information Services) 8.0
  configuring SSL certificate storage in, 167–170
  HTML 5 in, 189
  partitioning, 161–162
  support for industry standards, 186
IIS 8. See Internet Information Services (IIS) 8.0
IIS Configuration Editor, 183–186
IIS CPU throttling, 14, 172–174
IIS Manager, 164, 168–170, 173, 177, 181, 183–186
IIS Web Server, 87
IIS worker processes, 161–163
Implementing
  Failover Clustering, 111–112
  Hyper-V Replica, 75–78
  quality of service (QoS), 46–47
Infrastructure. See also Virtual desktop infrastructure (VDI)
  cloud computing, 3
  compute, 90
  high availability, 86–91
  network, 90
  physical, 89
  traditional IT, characteristics of, 2
Infrastructure as a service (IaaS), 5–6
Install-ADDSDomain, 152
Installation options
  GUI, 125–127
  Minimal Server Interface, 126–127
  remote access, 195–197
  Server Core, 125–126
  Virtual desktop infrastructure (VDI), 207–210
Installation type, 195
Integrated Scripting Environment (ISE), 156
Internet Information Services (IIS) 8.0
  benefits of, 159
  communication in, 187–188
  configuring SSL certificate storage in, 167–170
  HTML 5 in, 189
  in Windows Server 2012, 14
  NUMA-aware scalability and, 160
  partitioning, 161–162
  Proxy mode in, 179
  substatus codes for Dynamic IP Restrictions, 178–179
  support for industry standards, 186
Inventory management, 203
IP (Internet Protocol)
  addresses
    blocking, in FTP server attacks, 182
    datacenter issues and, 32–34
    dynamic filtering of, 176–180
    hosting multiple HTTPS websites using, 163–164
    MAC address spoofing and, 25
    network adapter teaming using, 90
  Port, 165
  rewrite, 34
IPsec (Internet Protocol Security)
  protected connections, 13
  task offload, 56
iSCSI (Internet Small Computer Systems Interface), 86, 112, 131
ISE (Integrated Scripting Environment), 156
IT professionals and Hyper-V Extensible Switch, 22
IWebSocketContext interface, 188

J
Jobs, scheduled, 153

K
Kerberos authentication, 15

L
LACP (Link Aggregation Control Protocol) mode, 121
Language syntax, 156
LBFO (load balancing and failover), 90, 120–124
Licensing, 36
Link Aggregation Control Protocol (LACP) mode, 121
Live Migration
  enabling functionality of, 41
  Hyper-V Replica and, 73
  improvements to, 37–38
  moving files using, 43–45
  types of, 42
  with shared storage, 38–40
  without shared storage, 41
Live Migration Without Infrastructure, 41
Live Storage Migration, 12, 17, 73, 119
Load balancing and failover (LBFO), 90, 120–124
Local servers, 125
Logical unit number (LUN), 65, 70, 94, 112, 115. See also Virtual disks
Logon attempt restrictions, FTP, 180–183
LUN (Logical unit number), 65, 70, 94, 112, 115. See also Virtual disks

M
MAC addresses, 25
Maintenance, of failover cluster nodes, 106–107
Management
  cluster, 94–96
  GUI server, 127–128
  inventory, 203
  print service, 191, 214–215
  remote server, 141, 153–155
  server, 127–128, 140–146, 156
  SSL certificate, 166–167
Maximum Memory setting, 62
Maximum Worker Processes setting, 161, 163
Measured service, 8–9
Measure-VMReplication cmdlet, 78
Memory
  assigned, 63
  Dynamic, 17, 53, 60–65
  host, 51–52, 100
  Most Available, 161
  support, 51–53
Memory Buffer, 61
Memory Demand, 63–64
Memory Status, 63–64
Memory usage, 62–65
Memory Weight, 62
Microsoft
  private cloud products, 4
  public cloud products, 3, 5–6
Microsoft Dynamics CRM, 3
Microsoft Exchange Server, 88
Microsoft Hyper-V. See Hyper-V
Microsoft Internet Information Services. See Internet Information Services (IIS) 8.0
Microsoft Lync Server, 88
Microsoft SharePoint Server, 88
Microsoft SQL Server, 88
Microsoft System Center 2012, 3, 5, 7–8, 34, 88
Migrate A Cluster Wizard, 94
Minimal Server Interface, 126–127
Minimum Memory setting, 62
Mirror resiliency settings, 137
Mobility, 39–40, 100
Monitoring
  network traffic, 22, 26
  packet, 26
  performance, 25
  virtual machine, 87, 103–106, 115
Most Available Memory, 161
MPIO (Multipath I/O), 86
Multicore scaling, 14
Multipath I/O (MPIO), 86
MySQL, 14

N
Naming conventions
  private key file, 170–172
  universal, 219
National Institute of Standards and Technology (NIST), 8
NDIS (Network Driver Interface Specification)
  filter drivers, 21
  Virtual Machine Queue (VMQ), 55
Network access
  broad, 8
  security concerns for, 83
Network adapters
  configuring settings, 26–27
  grouping, 90, 120–124
  hardware acceleration feature for, 54–55
  Virtual Fibre Channel, 112–114
Network binding, 165
Network Driver Interface Specification (NDIS)
  filter drivers, 21
  Virtual Machine Queue (VMQ), 55
Network File System (NFS), 94, 130, 139
Network File System (Server for NFS), 139
Network interface cards (NICs), 20, 86, 160. See also Windows NIC Teaming
Network Load Balancing (NLB), 86
Network Metering Port ACLs, 50
Network topology, 198–199
Network traffic management, 22, 26, 45–48
Network Virtualization, 11, 18, 31–37
Network Virtualization Generic Routing Encapsulation (NVGRE), 34
New Storage Pool Wizard, 132–133
New Technology File System (NTFS), 92, 124
New Virtual Disk Wizard, 134–135
New-NetLbfoTeam cmdlet, 123
NFS (Network File System), 94, 130, 139
NIC Teaming. See Windows NIC Teaming
NICs (Network interface cards), 20, 86, 160. See also Windows NIC Teaming
NIST (National Institute of Standards and Technology), 8
NLB (Network Load Balancing), 86
Node drain, 106–107
Node vote weights, 106
Non-Uniform Memory Architecture (NUMA), 14, 52, 160–163
NTFS (New Technology File System), 92, 124
NUMA (Non-Uniform Memory Architecture), 14, 52, 160–163
NUMA nodes, 160
NUMA-aware Scalability feature, 160–163
NVGRE (Network Virtualization Generic Routing Encapsulation), 34

O
Objects, creation and placement of, 96–97
ODX (Offloaded Data Transfer), 58
Offloaded Data Transfer (ODX), 58
On demand features, 129
On-demand self-service, 9, 82
One-time password (OTP), 195
Operating systems
  guest, 11, 21, 53, 65, 111–112
  host, 54, 107
  hypervisor-based, 6–7
OpEx (ongoing operational expenses), 9
OTP (One-time password), 195
Owner settings, VM, 100–101

P
PaaS (Platform as a service), 5–6
Paging, smart, 62–63
PAL (Performance Analysis of Logs), 25
Parity, striping with, 137
Partitioning, IIS, 161–162
Password policies, 150, 180–182
Passwords
  BitLocker, 225
  certificate, 169
  one-time, 195
Performance, 50–72, 160
Performance Analysis of Logs (PAL), 25
Performance counters, 67
Performance monitoring, 25
Persistent mode, 102
Persistent user-managed sessions, 105, 152–153
PFX files, 169
PHP, 14
Pipelines, 50, 187
Placement policies, virtual machine, 100–103, 160
Platform as a service (PaaS), 5–6
Polling mechanism, 174
Port ACLs, 26
Port mirroring, 26
Possible owners setting, 101
PowerShell. See Windows PowerShell
Preferred owners setting, 100–101
Preload, 176
Print service management, 191, 214–215
Priorities, assignment of, 98–99
Priorities, assignment of, on virtual machines (VMs), 100
Private Cloud Fast Track, 3
Private clouds, 3–4, 10–12
  benefits of Windows 2012 for, 17–19
  security in, 81–83
  shared, 31–32, 47–48
Private VLANs (PVLANs), 19, 21, 26
Process scheduling, 160–162
Processor affinity, 161
Processors, 51–53
Production, moving VMs to, 118–119
Provider addresses (PA), 33
Provisioning storage
  fixed, 134, 137
  thin. See Thin provisioning
Proxy mode, 179
PSSessions (Persistent user-managed sessions) cmdlet, 105
Public clouds, 3, 5–6, 32
PVLANs. See Private VLANs (PVLANs)

Q
Quality of service (QoS), 11, 45–48
Quick start deployment option, 205, 207–208, 210
Quorum settings, for failover clusters, 106

R
RAID (Redundant array of independent disks), 86
RAM (Random Access Memory), 52, 61
Random Access Memory (RAM), 52, 61
RDMA (Remote Direct Memory Access), 20
RDP (Remote Desktop Protocol), 205, 215–216
RDS Virtual switch, 210
Read Only Domain Controller (RODC), 86, 97
Redundancy, 86–87
Redundant array of independent disks (RAID), 86
Relative minimum bandwidth, 47
Remote access
  configuring, 198–200
  deploying, 195
  DirectAccess improvements to, 192–204
  enhancements to, 15
  installation options, 195–197
  management of, 200–202
Remote Access Management Console, 200–202
Remote Desktop Connection, 141
Remote Desktop Connection Broker, 210
Remote Desktop Protocol (RDP), 205, 215–216
Remote Desktop Services
  enhancements to, 205–206
  RemoteFX in, 215–216
  scenario-based installation, 13, 15, 205, 207–210
  User Profile Disks in, 219–220
  VDI deployment using, 210–211
Remote Desktop Virtualization Host, 210, 217–218
Remote Desktop Web Access, 210
Remote Direct Memory Access (RDMA), 20
Remote Server Administration Tools (RSAT), 141
Remote server management, 141, 153–155
RemoteFX, 205, 215–218
  over WAN, 15, 205
  USB redirection in, 217–218
Remove-WindowsFeature User-Interfaces-Infra command, 128
Repair, of objects, 97
Replication, 73–81, 87–88
Resiliency settings, 136–137
Resource Metering, 9, 48–50
Resource pooling, 9, 82
ResourceSpecific tasks, 98
Responses
  server-client, 187–188
  setting, 177–178
Restrictions, address and logon, 178–183
Rewrite
  IP, 34
  URL, 175
Rights Management Services (RMS), 15, 86
RMS (Rights Management Services), 15, 86
RODC (Read Only Domain Controller), 86, 97
Role-based installation, 195
Router guard, 26
RSAT (Remote Server Administration Tools), 141

S
SaaS (Software as a service), 4, 6
SANs. See Storage Area Networks (SANs)
SAS (Serial Attached SCSI) disks, 131
SATA (Serial Advanced Technology Attachment) disks, 131
Scalability, 8, 10, 14, 50–72
  NUMA-aware, 52–53, 160
  of web applications, 13–14
  platform, 159
  SSL, 167
  using Failover Clustering, 91
Scale-Out File Server Clusters, 4, 89–90
Scale-Out File Servers, 92–94, 99, 116
Scenario-focused design, 20–21
Scheduling
  jobs, 153
  task, 98
  worker processes, 160–162
Scripts, generating Windows PowerShell, 183–186
SCSI (Small Computer System Interface) disks, 131
Secure Dialect Negotiation, 67
Security, 221
  controlling access, 221–224
  DNS, 226–227
  drive, 224–226
  Hyper-V Extensible Switch enhancements, 25–26
  in private clouds, 81–83
  of data, 15
  on-demand self-service, 82
Select Destination Server page, 143
Serial Advanced Technology Attachment (SATA) disks, 131
Serial Attached SCSI (SAS) disks, 131
Server Core, 125–128, 193
Server for NFS (Network File System), 139
Server groups, 94–96, 145
Server Manager, 141–146
  Add Roles and Features Wizard, 143, 154, 167, 195, 207
  All Servers section, 144
  applications of, 13, 15, 18
  dashboard, 141–142
  enabling Windows NIC Teaming from, 121–123
  Failover Clustering feature integration with, 94–96
  local server section, 143
  Remote Access Management Console in, 200–202
  tools menu, 145
Server message block 3 (SMB 3) protocol, 18, 38–39, 66–68
Server Name Indication (SNI), 164–166
Server pools, 144
Server roles, 86–87, 129
Server workloads
  consolidation of, 2, 51
  virtualization and, 1, 5
Server-centric model, of computing, 2
Servers
  consolidation of, 8
  destination, selecting, 143–144
  FTP, 180–183
  maintaining availability of, 129–130
  management of, 127–128, 140–146, 156
  selecting, for RDS role services, 209
Service models, for cloud computing, 4–7
Service-centric model, of computing, 3
Session virtualization, 13, 15, 205, 218
Sessions
  disconnected, 152–153
  persistent user-managed, 105, 152–153
Shared folders, 38–39
Shared Nothing Live Migration, 41
Show-Command cmdlet, 152, 156
Simple stripes, 137
Single-root I/O virtualization (SR-IOV), 56–57
Smart Paging, 44, 62–63
SMB 3 (Server message block 3) protocol, 18, 38–39, 66–68
SMB Direct, 20, 66
SMB Directory Leasing, 66
SMB Encryption, 66
SMB Multichannel, 66–68, 89
SMB Scale Out, 67
SMB Transparent Failover, 67, 88, 117
smpProcessorAffinityMask attribute, 161
Snapshot files, 148
SNI (Server Name Indication), 164–166
Soft affinity, 161
Software as a service (SaaS), 4, 6
Software updates, 107–111
Software vendors and Hyper-V Extensible Switch, 22
Sourcing models, types of, 3–4
Splash pages, 175
Spoofing, 25
SR-IOV (Single-root I/O virtualization), 56–57
SSL certificates
  grouping, 169–170
  management, 166–167
  naming conventions, 170–172
  Server Name Indication (SNI), 163–164
  storing, 167–170
SSL configuration, 165–166
Standard deployment option, 205
Start menu, 127–128
Start-up, avoiding overload during, 102
Static IP Restrictions, 178
Static teaming mode, 121
Status messages, HTTP, 177–178
Storage
  LANs, 70–71
  pre-allocated, 138
  provisioning, 136–139
  requirements for availability, 86
  shared, 38–39, 90, 112–114
  SSL certificate, 167–170
Storage Area Networks (SANs), 11, 39, 70–71, 89, 112–115
Storage arrays, ODX-capable, 58
Storage devices, advanced format, 60
Storage migration, 12, 41, 43–45, 88, 117–120, 205
Storage pools
  clustered, 137–138
  configuring, 131–132
  creating, 132–133
  defined, 131
Storage Spaces, 13, 18, 89–90, 131–138
Storage virtualization, 18, 131–138
Substatus codes, IIS, 178–179
Switch Independent mode, 121
Switches. See Hyper-V Extensible Switch; Virtual switches
System Center 2012. See Microsoft System Center 2012

T
Task offload, IPsec, 56
Task Scheduler, 98
Teaming, NIC. See Windows NIC Teaming
Tenant networks, 26
Thin provisioning, 9, 134, 138–139
Throttle configuration option, 174
ThrottleUnderLoad configuration option, 174
Throttling, CPU, 172–174
Tracing, 24–25
Tracking. See Resource metering
Trim storage, 138–139
Trunk mode, 26

U
UAG (Unified Access Gateway), 203
UNC (Universal Naming Convention), 219
Underscore (_), 171
Unified Access Gateway (UAG), 203
Unified Tracing, 24, 26
Universal Naming Convention (UNC), 219
Universal serial bus (USB), 15
Universal serial bus (USB) disks, 131
Universal serial bus (USB) redirection, 205, 217–218
Updates
  cluster, 90, 107–111
  remote computer, 203
  software, 107–111
URL Rewrite, 175
USB (Universal serial bus), 15
USB (Universal serial bus) disks, 131
USB (Universal serial bus) redirection, 205, 217–218
User experience, 15, 32, 45, 192, 215–221
User Profile Disks, 205–206, 218–221
User-Device Affinity, 191, 212
Users
  blocking malicious, 180–183
  granting authorization, 155, 223–224
  roaming, 191

V
Validate A Configuration Wizard, 94–95
Validation, 94–95, 147, 226
VDI. See Virtual desktop infrastructure (VDI)
Vendors, independent software, 22
VHDs (Virtual hard disks), 15, 72, 143
VHDX disk format, 72
Virtual desktop infrastructure (VDI), 204–212
  deployment, 204–210
  management of, 210–211
  user experience and, 15
Virtual desktops
  managed/unmanaged collections, 206–207
  personal, 206–207
  pooled, 206–207
  templates for, 209–210
Virtual Disk Wizard, 133–135
Virtual disks
  creating, using a storage pool, 133–135
  functioning, 131
  provisioning, using Windows PowerShell, 136–138
Virtual Fibre Channel, 11, 65, 90, 112–115
Virtual hard disks (VHDs), 15, 72, 143
Virtual local area networks (VLANs)
  Hyper-V and, 11
  isolated, 26, 32
  limitations of, 32
  private, 19, 21, 26
Virtual Machine Manager 2012 Service Pack 1, 34
Virtual Machine Queue (VMQ), 54–55
Virtual machines (VMs)
  continuous availability of, 19, 89–91
  highly available, 90
  importing, 71–72
  memory usage and restart of, 62–63
  monitoring, 87, 103–106, 115
  moving, to production, 118–119
  NUMA-aware, 17, 52–53, 99–100, 160
  placement policies for, 100–103
  priority assignment, 100
  start-up of, 102
Virtual private networks (VPNs), 11, 15, 81, 191–193, 202–203
Virtual Switch Manager, 23–25, 56
Virtual switches, 21–31
  configuring, 23–24
  extensible, 21–23, 25–27
  RDS, 210
  troubleshooting, 24–25
  Windows PowerShell cmdlets, 23–24
Virtualization
  compute, 17, 90
  Graphics Processing Units (GPU), 215
  high-density, 8
  Network, 11, 18, 31–37
  of domain controllers, 148
  Session, 13, 15, 205, 218
  single-root I/O, 56–57
  storage, 18, 131–138
VLANs. See Private VLANs (PVLANs); Virtual local area networks (VLANs)
VM Monitoring, 87, 105–106, 115
VMConnect, 216
VMQ (Virtual Machine Queue), 54–55
VMs. See Virtual machines (VMs)
VMware vSphere, 115, 139
Volumes, creating, using a storage pool, 131–132, 135
VPNs (Virtual private networks), 11, 15, 81, 191–193, 202–203
VSS for SMB file shares, 67

W
Warm-up periods, 175–176
WAS (Windows Process Activation Service), 161
WAU (Windows Update Agent), 109
Web access, 153–155
Web farms, 166–167, 170
Web gardens, 161–163
Web.config file, 175
Websites, adding, to IIS Manager, 170
WebSocket, 14, 187–189
Weight, node vote, 106
WFP (Windows Filtering Platform), 22–23
Wildcard certificates, 165, 171
Windows 2012, 20–21, 141
Windows Filtering Platform (WFP), 22–23
Windows Management Framework 3.0, 151
Windows NIC Teaming, 12, 67, 88, 90, 120–124
  configuring, 121–123
  modes of, 121
Windows PowerShell, 18, 148, 227
  advanced networking features using, 27–30
  cmdlets
    for configuring clustered tasks, 98
    for enabling NIC teaming, 123
    for Failover Clustering, 116
    for managing properties of network adapters, 55–56, 67
  domain controller deployment, 148
  extensible switches, 23–24, 27–30
  managing SMB 3 using, 68
  provisioning storage, 136–138
  QoS implementation and, 47
  replication using, 78
  scripts, generating, using IIS Configuration Editor, 183–186
  Server Core installation command, 125–126
  virtual switches, 23–24
  VM monitoring using, 105–106
Windows PowerShell 3.0, 13, 151–157
Windows PowerShell History, 150
Windows PowerShell Web Access, 153–155
Windows PowerShell Workflows, 13, 153
Windows Process Activation Service (WAS), 161
Windows Server 2008 R2
  guest clusters in, 115
  Service Pack 1, 51
Windows Server 2012
  competitive product analysis and,
139–140deprecated features in, 59key features of, 10–11Windows Update Agent (WAU), 109Windows Workflow Foundation, 153Worker processes, IIS, 161–163Workflows, Windows Powershell, 13, 153Workloads, 90–91, 161WorldWide Names (WWNs), 112WWNs (WorldWide Names), 112XXAML (Extensible Application Markup Language), 153X-Forwarded-For HTTP headers, 177, 179
About the Author

Mitch Tulloch is a well-known expert on Windows administration, deployment, and virtualization. He has published hundreds of articles on a wide variety of technology sites and has written more than two dozen books, including the Windows 7 Resource Kit (Microsoft Press, 2009), for which he was lead author; and Understanding Microsoft Virtualization Solutions: From the Desktop to the Datacenter (Microsoft Press, 2010), a free ebook that has been downloaded over 140,000 times.

Mitch is also Senior Editor of WServerNews, the world's largest newsletter focused on system admin and security issues for Windows servers. Published weekly, WServerNews helps keep system administrators up to date on new server and security-related issues, third-party tools, updates, upgrades, Windows compatibility matters, and related issues. With more than 100,000 subscribers worldwide, WServerNews is the largest Windows Server–focused newsletter in the world.

Mitch has been repeatedly awarded Most Valuable Professional (MVP) status by Microsoft for his outstanding contributions to supporting the global IT community. He is an eight-time MVP in the technology area of Windows Server Setup/Deployment.

Mitch also runs an IT content development business based in Winnipeg, Canada, which produces white papers and other collateral for the business decision maker (BDM) and technical decision maker (TDM) audiences. His published content ranges from white papers about Microsoft cloud technologies to reviews of third-party products designed for the Windows Server platform. Before starting his own business in 1998, Mitch worked as a Microsoft Certified Trainer (MCT) for Productivity Point.

For more information about Mitch, visit his website. You can also follow Mitch on Twitter.
What do you think of this book?

We want to hear from you!

To participate in a brief online survey, please visit:

Tell us how well this book meets your needs: what works effectively, and what we can do better. Your feedback will help us continually improve our books and learning resources for you.

Thank you in advance for your input!