Windows server 2012 technical overview networking student manual


  3. 3. Cloud and mobility are two major trends that have started to affect the IT landscape in general, and the datacenter in particular. There are four key IT questions that customers claim are keeping them up at night: • How do I embrace the cloud? With a private cloud, you get many of the benefits of public cloud computing— including self-service, scalability, and elasticity—with the additional control and customization available from dedicated resources. Microsoft customers can build a private cloud today with Windows Server, Microsoft Hyper-V, and Microsoft System Center, but there are many questions about how to best scale and secure workloads on private clouds and how to cost-effectively build private clouds, offer cloud services, and connect more securely to cloud services. • How do I increase the efficiency in my datacenter? Whether you are building your own private cloud, are in the business of offering cloud services, or simply want to improve the operations of your traditional datacenter, lowering infrastructure costs and operating expenses while increasing overall availability of your production systems is critical. Microsoft understands that efficiency built into your server platform and good management of your cloud and datacenter infrastructure are important to achieving operational excellence. • How do I deliver next-generation applications? Windows Server Management Marketing 8/29/2012 5
  4. 4. As the interest in cloud computing and providing web-based IT services grows, our customers tell us that they need a scalable web platform and the ability to build, deploy, and support cloud applications that can run on-premises or in the cloud. They also want to be able to use a broad range of tools and frameworks for their next-generation applications, including open source tools. • How do I enable modern work styles? As the lines between people’s lives and their work blur, their personalities and individual work styles have an increasing impact on how they get their work done—and which technologies they prefer to use. As a result, people increasingly want a say in what technologies they use to complete work. This trend is called the “consumerization” of IT. As an example, more and more people are bringing and using their own PCs, slates, and phones at work. Consumerization is great because it unleashes people’s productivity, passion, innovation, and competitive advantage. We at Microsoft believe that there is power in saying “yes” to people and their technology requests in a responsible way. Our goal at Microsoft is to partner with you in IT, to help you embrace these trends while ensuring that the environment is more secure and better managed. 5
  5. 5. Optimize your IT for the cloud with Windows Server 2012 When you optimize your IT for the cloud with Windows Server 2012, you take advantage of the skills and investment you’ve already made in building a familiar and consistent platform. Windows Server 2012 builds on that familiarity. With Windows Server 2012, you gain all the Microsoft experience behind building and operating private and public clouds, delivered as a dynamic, available, and cost-effective server platform. Windows Server 2012 delivers value in four key ways: 1. It takes you beyond virtualization. Windows Server 2012 offers a dynamic, multitenant infrastructure that goes beyond virtualization technology to a complete platform for building a private cloud. 2. It delivers the power of many servers, with the simplicity of one. Windows Server 2012 offers you excellent economics by integrating a highly available and easy-to-manage multiple-server platform. 3. It opens the door to every app on any cloud. Windows Server 2012 is a broad, scalable, and elastic web and application platform that gives you the flexibility to build and deploy applications on-premises, in the cloud, and in a hybrid environment through a consistent set of tools and frameworks. 4. It enables the modern workstyle. Windows Server 2012 empowers IT to provide users with flexible access to data and applications anywhere, on any device, and 6
  6. 6. while simplifying management and maintaining security, control, and compliance. With Windows Server 2012, Microsoft has made significant investments in each of these four areas, allowing customers to take their datacenter operations to the next level. Now, let’s take a look at how Windows Server 2012 helps customers to: • Build and deploy a modern datacenter infrastructure • Build and run modern applications • Enable modern work styles for their end users 6
  7. 7. I would like to spend a few minutes going over some of the needs that IT pros and IT decision makers have today and the challenges they face fulfilling them. First, a lot of customers are looking at what we call dynamic datacenters – datacenters that grow beyond one physical geographic location, either into other datacenters they have, a hosting provider’s cloud, or a public cloud like Azure. They need infrastructure that scales to growing demand and changes, and that supports VM movement across datacenters. There is a need to simplify the network complexity involved in migrating VMs – changing IP addresses, modifying applications, changing network ACLs, and so on. The server surge has also put a great onus on having a simplified manageability layer, so that you can manage several servers as if you were managing one. Complexity in storage and networking increases mainly to deliver services that are continuously available so that guaranteed SLAs are met. A pulled plug, a failed adapter, or a switch issue can bring a service down. A network infrastructure that is resilient to such failures is becoming increasingly important. The challenges here include lost revenue from service downtime or missed SLAs, the ability to guarantee bandwidth (either a ceiling or a floor), and building chargeback solutions based on bandwidth. Also, the software should support a multi-vendor model and the ability to keep pace with the rapid changes the hardware industry is going through, delivering the best performance whether with standard hardware today or with some of the emerging technologies. 7
  9. 9. If you consider the challenges and needs we saw in the previous slide, you can split them into five different scenarios. First, delivering continuously available services – the ability to make the application resilient to underlying hardware failure and maintain high SLAs. Second, reducing the complexity of the network infrastructure and making migration across clouds and IP portability easier. Third, having the ability to improve network and system performance with industry-standard hardware while being ready for the next wave of hardware improvements. Fourth, having a great management layer that lets you manage both your system and network infrastructure with the simplicity of managing just a single server. Finally, we will walk through the work we have done with our partner ecosystem to enable a diverse set of choices to better control your network infrastructure. 9
  10. 10. Based on these needs and challenges, these are the top networking features we have built into Windows Server 2012. We will spend a lot of time over the next hour going over these features. This is not the entire set of features added in WS 2012, but those I consider the most important. 10
  11. 11. Now drilling into the continuously available services pillar, we will walk through the work we have done to help ensure that services run continuously without any interruption, that there is automatic recovery from both software and hardware failures, and that the need for an IT pro or network administrator to fix issues in the middle of the night is eliminated. Imagine multiple services sharing common infrastructure, having the ability to get consistent bandwidth for each of these services, and finally providing a common infrastructure that supports a heterogeneous, multi-vendor environment. Let’s walk through the features that enable this. 11
  12. 12. Note to presenter: 3 clicks to complete build. Windows Server 2012 helps you provide fault tolerance on your network adapters without having to buy additional hardware and software. Windows Server 2012 includes NIC Teaming as a new feature, which allows multiple network interfaces to work together as a team, preventing connectivity loss if one network adapter fails. It allows a server to tolerate network adapter and port failure up to the first switch segment. NIC Teaming also allows you to aggregate bandwidth from multiple network adapters; for example, four 1-gigabit (Gb) network adapters can provide an aggregate of 4 gigabits per second (Gbps) of throughput. The advantages of a Windows teaming solution are that it works with all network adapter vendors, spares you from most potential problems that proprietary solutions cause, provides a common set of management tools for all adapter types, and is fully supported by Microsoft. Teaming network adapters involves the following: NIC Teaming configurations. Two or more physical network adapters connect to the NIC Teaming solution’s multiplexing unit and present one or more “virtual adapters” (team network adapters) to the operating system. Algorithms for traffic distribution. Several different algorithms distribute inbound and outbound traffic between the network adapters. Team network adapters exist in third-party NIC Teaming solutions to divide traffic by virtual local area network (VLAN) so that applications can connect to different VLANs simultaneously. Like other commercial implementations of NIC Teaming, Windows Server 2012 has this capability. 12
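To make the failover and traffic-distribution behavior concrete, here is a minimal Python sketch of a NIC team: flows are hashed across the healthy members, and a flow moves to a surviving adapter when its member fails. The class, the CRC-based hash, and the adapter names are illustrative stand-ins, not the algorithms Windows Server 2012 actually uses.

```python
# Conceptual sketch of NIC teaming: flows are hashed across healthy team
# members, and traffic fails over automatically when an adapter goes down.
# The hashing scheme and names are illustrative, not the Windows algorithm.
import zlib

class NicTeam:
    def __init__(self, adapters):
        # adapters: list of adapter names; all start healthy
        self.health = {a: True for a in adapters}

    def fail(self, adapter):
        self.health[adapter] = False

    def pick_adapter(self, flow_id):
        # Deterministically map a flow to one of the healthy adapters
        healthy = [a for a, ok in self.health.items() if ok]
        if not healthy:
            raise RuntimeError("no healthy adapters in team")
        return healthy[zlib.crc32(flow_id.encode()) % len(healthy)]

team = NicTeam(["nic1", "nic2", "nic3", "nic4"])
before = team.pick_adapter("10.0.0.5:445")
team.fail(before)                      # simulate adapter/port failure
after = team.pick_adapter("10.0.0.5:445")
assert after != before                 # the flow moved to a surviving adapter
```

The same hash-and-pick idea is what lets the team also aggregate bandwidth: different flows land on different members, so the four adapters are used in parallel.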
  13. 13. Note to presenter: 3 clicks to complete build. With SMB Multichannel, network path failures are automatically and transparently handled without application service disruption. Windows Server 2012 now scans, isolates, and responds to unexpected server problems, providing network fault tolerance when multiple paths are available between the SMB client and the SMB server. SMB Multichannel will also aggregate network bandwidth from multiple network interfaces when multiple paths exist. Server applications can then take full advantage of all available network bandwidth and become resilient to a network failure. In the animation, you see data being transferred between an SMB client and server. Notice the red ball. Now let’s assume that there is a failure in the path along which the red ball/packet was travelling. Without any manual intervention, the packet is sent again through a different route. So, as you can see, with SMB Multichannel, server workloads are now resilient to underlying network changes and failures. 13
  14. 14. Dynamic Host Configuration Protocol (DHCP) Server Failover allows two DHCP servers to synchronize lease information almost instantaneously and to provide high availability of DHCP service. If one of the servers becomes unavailable, the other server assumes responsibility for servicing clients for the same subnet. Now you can also configure failover with load balancing, with client requests distributed between the two DHCP servers. DHCP Server Failover in Windows Server 2012 provides support for two DHCPv4 servers. Administrators can deploy Windows Server 2012 DHCP servers as failover partners in either hot standby mode or load-sharing mode. Hot standby mode In this mode, two servers operate in a failover relationship in which an active server leases IP addresses and configuration information to all clients in a scope or subnet. (Note that the two DHCP servers in a failover relationship do not themselves need to be on the same subnet. DHCP service can be provided to the subnet by DHCP Relay.) The secondary server assumes this responsibility if the primary server becomes unavailable. A server is primary or secondary in the context of a subnet, so a server that is primary for one subnet could be secondary for another. The hot standby mode of operation is best suited to deployments in which a central office or data center server acts as a standby backup server to a server at a remote site 14
  15. 15. that is local to the DHCP clients. In this hub-and-spoke deployment, it is undesirable to have a remote standby server service clients unless the local DHCP server becomes unavailable. Load-sharing mode In load-sharing mode, which is the default mode, the two servers simultaneously serve IP addresses and options to clients on a particular subnet. (Again, the two DHCP servers in a failover relationship do not themselves need to be on the same subnet.) The client requests are load balanced and shared between the two servers. This mode is best suited to deployments in which both servers in a failover relationship are located at the same physical site. Both servers respond to DHCP client requests based on the load distribution ratio configured by the administrator. 14
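The load distribution ratio described above can be illustrated with a small Python sketch: each client’s MAC address is hashed into a bucket, and the bucket is compared against the configured primary/secondary split. The hash and bucket scheme here are simplified assumptions for illustration, not the exact DHCP failover protocol behavior.

```python
# Illustrative sketch of load-sharing mode between two DHCP failover
# partners: a hash of the client's MAC selects which server answers,
# according to the administrator-configured load distribution ratio.

def serving_partner(mac, primary_share=50):
    """Return which partner answers this client, given primary's share in %."""
    bucket = sum(mac) % 256                 # toy hash of client MAC into 0..255
    return "primary" if bucket < 256 * primary_share // 100 else "secondary"

# Simulate 100 clients with invented MAC addresses
clients = [bytes([i, 0x1A, 0x2B, 0x3C, 0x4D, i * 7 % 256]) for i in range(100)]
counts = {"primary": 0, "secondary": 0}
for mac in clients:
    counts[serving_partner(mac)] += 1
# With a 50/50 ratio, the two servers each take roughly half the clients,
# and each client is always answered by the same server while both are up.
```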
  16. 16. The figure shows how relative minimum bandwidth works for each of the four types of network traffic flows in three different time periods: T1, T2, and T3. In this figure, the table on the left shows the configuration of the minimum amount of required bandwidth a given type of network traffic flow needs. For example, storage is configured to have at least 40 percent of the bandwidth (4 Gbps of a 10-GbE network adapter) at any time. The table on the right shows the actual amount of bandwidth each type of network traffic has in T1, T2, and T3. In this example, storage is actually sent at 5 Gbps, 4 Gbps, and 6 Gbps, respectively, in the three periods. QoS management In Windows Server 2012, you manage QoS policies and settings dynamically with Windows PowerShell. The new QoS cmdlets support both the QoS functionalities available in Windows Server 2008 R2—such as maximum bandwidth and priority tagging—and the new features available in Windows Server 2012, such as minimum bandwidth. Benefits of QoS minimum bandwidth QoS minimum bandwidth benefits vary from public cloud hosting providers to enterprises. Most hosting providers and enterprises today use a dedicated network adapter and a dedicated network for a specific type of workload such as storage or live migration to help achieve network performance isolation on a server running Hyper-V. • Although this works for those using 1-GbE network adapters, it becomes impractical for those using or planning to use 10-GbE network adapters. • Not only does one 10-GbE network adapter (or two for high availability) already 15
  17. 17. provide sufficient bandwidth for all the workloads on a server running Hyper-V in most deployments, but 10-GbE network adapters and switches are considerably more expensive than their 1-GbE counterparts. • To make the best use of 10-GbE hardware, a server running Hyper-V requires new capabilities to manage bandwidth. Benefits for public cloud hosting providers: • Host customers on a server running Hyper-V and still be able to provide a certain level of performance based on SLAs. • Help to ensure that customers won’t be affected or compromised by other customers on their shared infrastructure, which includes computing, storage, and network resources. Benefits for enterprises: • Run multiple application servers on a server running Hyper-V and be confident that each application server will deliver predictable performance, eliminating the fear of virtualization due to lack of performance predictability. Requirements Minimum QoS can be enforced through the following two methods: • The first method relies on software built into Windows Server 2012 and has no other requirements. • The second method, which is hardware assisted, requires a network adapter that supports Data Center Bridging. For hardware-enforced minimum bandwidth, you must use a network adapter that supports DCB and the miniport driver of the network adapter must implement the NDIS QoS APIs. A network adapter must support Enhanced Transmission Selection and Priority-Based Flow Control to pass the NDIS QoS logo test created for Windows Server 2012. Explicit Congestion Notification is not required for the logo. The IEEE Enhanced Transmission Selection specification includes a software protocol called Data Center Bridging Exchange (DCBX) to let a network adapter and switch exchange DCB configurations. DCBX is also not 15
  18. 18. required for the logo. Enabling QoS in Windows Server 2012, when it is running as a virtual machine, is not recommended. The minimum bandwidth enforced by the packet scheduler works best on 1-GbE or 10-GbE network adapters. 15
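The behavior in the T1/T2/T3 figure — each traffic class is guaranteed a bandwidth floor, and capacity left idle by one class is lent to the busy ones — can be modeled with a short weighted-sharing sketch. This is only a toy model of the minimum-bandwidth policy, not the Windows packet scheduler.

```python
# Toy model of relative minimum bandwidth: each traffic class has a weight
# (its guaranteed share of the link), and bandwidth unused by idle classes
# is redistributed to classes that still have demand.

def allocate(capacity, weights, demands):
    """weights/demands: dicts keyed by traffic class; returns Gbps per class."""
    alloc, active, remaining = {}, dict(demands), capacity
    while active:
        total_w = sum(weights[c] for c in active)
        # Offer each still-active class its weighted share of what's left
        share = {c: remaining * weights[c] / total_w for c in active}
        done = {c for c in active if active[c] <= share[c]}
        if not done:                       # everyone can use their full share
            alloc.update(share)
            return alloc
        for c in done:                     # satisfied classes free up capacity
            alloc[c] = active[c]
            remaining -= active[c]
            del active[c]
    return alloc

# 10-GbE link; storage guaranteed 40%, live migration 20%, VM traffic 40%
a = allocate(10.0, {"storage": 40, "migration": 20, "vm": 40},
             {"storage": 9.0, "migration": 1.0, "vm": 4.0})
# Storage gets at least its 4 Gbps floor, plus bandwidth migration left idle.
```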
  19. 19. Populate the demo title depending upon which demo you plan to deliver. If you don’t plan to deliver demos, please hide this slide. Click through demos are (or will be) located at “scdemostore01demostoreWindows Server 2012WS 2012 Demo SeriesClick Thru DemosNetworking Demo environment build instructions are located here: scdemostore01demostoreWindows Server 2012WS 2012 Demo SeriesDemo Builds 16
  20. 20. With services spanning multiple sites and multiple clouds, having an infrastructure that enables VMs to move seamlessly across clouds is becoming more important. Additionally, with multiple services sharing infrastructure, the need to isolate traffic and communication between VMs and services has never been more important. Let’s walk through the features that let us do this. 17
  21. 21. Hyper-V Network Virtualization extends the concept of server virtualization to allow multiple virtual networks, potentially with overlapping IP addresses, to be deployed on the same physical network. With Hyper-V Network Virtualization, you can set policies that isolate traffic in your dedicated virtual network, independent of the physical infrastructure. Before Windows Server 2012, isolating the virtual machines of different departments or customers was a challenge on a shared network. When these departments or customers must isolate entire networks of virtual machines, the challenge becomes even greater. Traditionally, VLANs are used to isolate networks, but VLANs are very complex to manage on a large scale. Some of the complexities are: • Cumbersome reconfiguration of production switches is required whenever virtual machines or isolation boundaries must be moved, and the frequent reconfiguration of the physical network to add or modify VLANs increases the 18
  22. 22. risk of an unplanned loss of service. • VLANs have limited scalability because typical switches support only 1,000 VLAN IDs (with a maximum of 4,095). • VLANs cannot span multiple subnets, which limits the number of nodes in a single VLAN and restricts the placement of virtual machines based on physical location. In addition to the drawbacks of VLANs, virtual machine IP address assignment presents other key issues when organizations move to the cloud: • Required renumbering of service workloads. • Policies that are tied to IP addresses. • Physical locations that determine virtual machine IP addresses. • Topological dependency of virtual machine deployment and traffic isolation. The IP address is the fundamental address that is used for layer-3 network communication, because most network traffic is TCP/IP. Unfortunately, when IP addresses are moved to the cloud, the addresses must be changed to accommodate the physical and topological restrictions of the data center. Renumbering IP addresses is cumbersome because the associated policies that are based on the IP addresses must also be updated. The physical layout of a data center influences the permissible potential IP addresses for virtual machines that run on a specific server or blade server that is connected to a specific rack in the data center. A virtual machine that is provisioned and placed in the data center must adhere to the choices and restrictions regarding its IP address. 18
  23. 23. Therefore, the typical result is that data center administrators assign IP addresses to the virtual machines and force virtual machine owners to adjust their policies that were based on the original IP address. This renumbering overhead is so high that many enterprises choose to deploy only new services into the cloud and leave legacy applications unchanged. Windows Server 2012 Hyper-V Network Virtualization solves these problems. With this feature, you can isolate network traffic from different business units or customers on a shared infrastructure and not be required to use VLANs. Hyper-V Network Virtualization also lets you move virtual machines as needed within your virtual infrastructure while preserving their virtual network assignments. Finally, you can even use Hyper-V Network Virtualization to transparently integrate these private networks into a preexisting infrastructure on another site. Hyper-V Network Virtualization extends the concept of server virtualization to allow multiple virtual networks, potentially with overlapping IP addresses, to be deployed on the same physical network. With Hyper-V Network Virtualization, you can set policies that isolate traffic in your dedicated virtual network independently of the physical infrastructure. The figure illustrates how Hyper-V Network Virtualization isolates network traffic belonging to two different customers. In it, Blue and Red virtual machines are hosted on a single physical network, or even on the same physical server. However, because they belong to separate Blue and Red virtual networks, the virtual machines can’t communicate with each other even if the customers assign them IP addresses from the 18
  24. 24. same address space. Server Virtualization is a well-understood concept that allows multiple server instances to run on a single physical host concurrently, but isolated from each other, with each server instance essentially acting as if it’s the only one running on the physical machine. Network Virtualization provides a similar capability. On the same physical network: • You can run multiple virtual network infrastructures. • You can have overlapping IP addresses. • Each virtual network infrastructure acts as if it’s the only one running on the shared physical network infrastructure. 18
  25. 25. How network virtualization works • Two IP addresses for each virtual machine. To virtualize the network with Hyper-V Network Virtualization, each virtual machine is assigned two IP addresses: o The Customer Address (CA) is the IP address that the customer assigns based on the customer’s own intranet infrastructure. This address lets the customer exchange network traffic with the virtual machine as if it had not been moved to a public or private cloud. The CA is visible to the virtual machine and reachable by the customer. o The Provider Address (PA) is the IP address that the host assigns based on the host’s physical network infrastructure. The PA appears in the packets on the wire exchanged with the virtualization server that hosts the virtual machine. The PA is visible on the physical network, but not to the virtual machine. The layer of CAs is consistent with the customer’s network topology, which is virtualized and decoupled from the underlying physical network addresses, as implemented by the layer of PAs. Problems solved Network virtualization solves earlier problems by: 18
  26. 26. • Removing VLAN constraints. • Eliminating hierarchical IP address assignment for virtual machines. 18
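The CA/PA split described above amounts to a per-tenant lookup table mapping Customer Addresses onto Provider Addresses, which a few lines of Python can illustrate. The addresses and table contents below are invented for the example; real Hyper-V Network Virtualization policy records carry considerably more state.

```python
# Minimal sketch of the two-address scheme: a per-tenant policy table maps
# each (tenant, Customer Address) pair to a Provider Address, so two tenants
# can reuse the same CA space without colliding. All values are invented.

policy = {
    ("Blue", "10.0.0.5"): "192.168.1.10",
    ("Blue", "10.0.0.6"): "192.168.1.11",
    ("Red",  "10.0.0.5"): "192.168.1.12",   # same CA as Blue's first VM
}

def route(tenant, dst_ca):
    """Find the provider address carrying traffic for a tenant's CA."""
    pa = policy.get((tenant, dst_ca))
    if pa is None:
        raise LookupError("destination not in this tenant's virtual network")
    return pa

# Overlapping CAs resolve to different provider addresses per tenant:
assert route("Blue", "10.0.0.5") != route("Red", "10.0.0.5")
# A Blue VM cannot reach an address outside its own virtual network;
# route("Blue", "10.0.0.7") would raise LookupError.
```

Because the table is keyed by tenant, moving a VM only updates its PA entry; the CA the customer sees never changes, which is why renumbering is no longer required.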
  27. 27. Note to presenter: 7 clicks in build. Hyper-V Network Virtualization extends the concept of server virtualization to allow multiple virtual networks, potentially with overlapping IP addresses, to be deployed on the same physical network. With Hyper-V Network Virtualization, you can set policies that isolate traffic in your dedicated virtual network, independent of the physical infrastructure. This diagram illustrates how you can use Hyper-V Network Virtualization to isolate network traffic belonging to two different customers. In the figure, Blue and Red virtual machines are hosted on a single physical network, or even on the same physical server. However, because they belong to separate virtual networks, the Blue Network and the Red Network, the virtual machines can’t communicate with each other even if the customers assign them IP addresses from the same address space. Highlights: • Location-independent addressing by virtualizing the IP address. • Creation of virtual layer-2/layer-3 topologies over any physical network that supports bidirectional IP connectivity. • A physical network that can be a hierarchical three-tier network, a full bi-section bandwidth Clos network, or a large layer-2 network. • Virtual networks that can span multiple physical subnets and multiple sites. 19
  28. 28. Example (see figure) In this scenario, Contoso Ltd. is a service provider that provides cloud services to businesses that need them. Blue Corp and Red Corp are two companies that want to move their Microsoft SQL Server infrastructures into the Contoso cloud, but they want to maintain their current IP addressing. With the new network virtualization feature of Hyper-V in Windows Server 2012, Contoso can do this, as shown in the figure. 20
  29. 29. Note to speaker: 3 clicks to build. Multitenant security and isolation using the Hyper-V Extensible Switch is accomplished with private virtual LANs (PVLANs) (this slide) and other tools (next slide). Virtual machine isolation with PVLANs. VLAN technology is traditionally used to subdivide a network and provide isolation for individual groups that share a single physical infrastructure. Windows Server 2012 introduces support for PVLANs, a technique used with VLANs that can be used to provide isolation between two virtual machines on the same VLAN. When a virtual machine doesn’t need to communicate with other virtual machines, you can use PVLANs to isolate it from other virtual machines in your data center. By assigning each virtual machine in a PVLAN one primary VLAN ID and one or more secondary VLAN IDs, you can put the secondary PVLANs into one of three modes (as shown in the following table). These PVLAN modes determine which other virtual machines on the PVLAN a virtual machine can talk to. If you want to isolate a virtual machine, put it in isolated mode. The figure shows how the three PVLAN modes can be used to isolate virtual machines 21
  30. 30. that share a primary VLAN ID. In this example the primary VLAN ID is 2, and the two secondary VLAN IDs are 4 and 5. You can put the secondary PVLANs into one of three modes: • Isolated. Isolated ports cannot exchange packets with each other at layer 2. In fact, isolated ports can only talk to promiscuous ports. • Community. Community ports on the same VLAN ID can exchange packets with each other at layer 2. They can also talk to promiscuous ports. They cannot talk to isolated ports. • Promiscuous. Promiscuous ports can exchange packets with any other port on the same primary VLAN ID (secondary VLAN ID makes no difference). 21
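The three mode rules in the list above can be captured in a small reachability function. The sketch below encodes the isolated/community/promiscuous rules exactly as stated; the port assignments are invented for the example.

```python
# Sketch of PVLAN reachability: each port is (primary VLAN, secondary VLAN,
# mode). Isolated talks only to promiscuous; community talks to same-secondary
# community ports and promiscuous; promiscuous talks to any port on the same
# primary VLAN.

def can_talk(a, b):
    """a, b: (primary_vlan, secondary_vlan, mode) tuples."""
    if a[0] != b[0]:
        return False                       # different primary VLANs: no path
    if "promiscuous" in (a[2], b[2]):
        return True                        # promiscuous reaches everyone
    if a[2] == b[2] == "community":
        return a[1] == b[1]                # same secondary community only
    return False                           # one or both ports isolated

router = (2, 0, "promiscuous")             # e.g. the default gateway
web1   = (2, 4, "community")
web2   = (2, 4, "community")
db     = (2, 5, "isolated")

assert can_talk(web1, web2)                # same community: allowed
assert can_talk(db, router)                # isolated can reach promiscuous
assert not can_talk(db, web1)              # isolated cannot reach community
```

Putting a virtual machine in isolated mode therefore leaves it exactly one conversation partner class: the promiscuous ports, typically the gateway.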
  31. 31. Note to presenter: 3 clicks to build. Windows Server 2012 provides a highly cloud-optimized operating system. VPN site-to-site functionality in remote access provides cross-premises connectivity between enterprises and hosting service providers. Cross-premises connectivity enables enterprises to connect to private subnets in a hosted cloud network. It also enables connectivity between geographically separate enterprise locations. With cross-premises connectivity, enterprises can use their existing networking equipment to connect to hosting providers using the industry-standard IKEv2-IPsec protocol. In the example on this slide, the following occurs: 1. Contoso.com and Woodgrove.com offload some of their enterprise infrastructure in a hosted cloud. 2. The hosting provider provides private clouds for each organization. 3. In the hosted cloud, virtual machines running Windows Server 2012 are configured as remote access servers running site-to-site VPN. 4. In each hosted private cloud, a cluster of two or more remote access servers is deployed to provide high availability and failover. 5. Contoso.com has two branch office locations. In each location, a Windows Server 2012 remote access server is deployed to provide a cross-premises connectivity solution to the hosted cloud and between the branch offices. 6. The unified Remote Access Server role in Windows Server 2012 running at the contoso.com branch offices is also configured as DirectAccess Servers in a multisite deployment. Now, the DirectAccess client can securely access any resource in the Contoso public cloud or Contoso branch offices from any location on the Internet. 7. Woodgrove.com can use existing routers to connect to the hosted cloud because cross-premises functionality in Windows Server 2012 complies with IKEv2 and IPsec standards. 22
  32. 32. Populate the demo title depending upon which demo you plan to deliver. If you don’t plan to deliver demos, please hide this slide. Click through demos are (or will be) located at “scdemostore01demostoreWindows Server 2012WS 2012 Demo SeriesClick Thru DemosNetworking Demo environment build instructions are located here: scdemostore01demostoreWindows Server 2012WS 2012 Demo SeriesDemo Builds 23
  33. 33. Customers want to get the best performance out of the hardware they have – whether it is industry-standard hardware or high-end hardware they have already invested in. Poor network performance has two primary causes: limitations in network bandwidth and limitations in processing power. We have done a considerable amount of work to extract great, predictable performance from the inbox adapters shipped as part of today’s servers, and to make the most of some of the next-generation hardware we are beginning to see in the market. 24
  34. 34. Note to presenter: 3 clicks to build. 25
  35. 35. The figure shows the architecture of SR-IOV support in Hyper-V. Support for SR-IOV networking devices Single Root I/O Virtualization (SR-IOV) is a standard introduced by the PCI-SIG, the special-interest group that owns and manages PCI specifications as open industry standards. SR-IOV works in conjunction with system chipset support for virtualization technologies that provide remapping of interrupts and Direct Memory Access, and allows SR-IOV-capable devices to be assigned directly to a virtual machine. Hyper-V in Windows Server 2012 enables support for SR-IOV-capable network devices and allows an SR-IOV virtual function of a physical network adapter to be assigned directly to a virtual machine. This increases network throughput and reduces network latency while also reducing the host CPU overhead required for processing network traffic. Benefits These new Hyper-V features let enterprises take full advantage of the largest available host systems to deploy mission-critical, tier-1 business applications with large, demanding workloads. You can configure your systems to maximize the use of host system processors and memory to 26
  36. 36. effectively handle the most demanding workloads. Requirements To take advantage of the new Hyper-V features for host scale and scale-up workload support, you need the following: • One or more Windows Server 2012 installations with the Hyper-V role installed. Hyper-V requires a server that provides processor support for hardware virtualization. • The number of virtual processors that may be configured in a virtual machine depends on the number of processors on the physical machine. You must have at least as many logical processors in the virtualization host as the number of virtual processors required in the virtual machine. For example, to configure a virtual machine with the maximum of 32 virtual processors, you must be running Hyper-V in Windows Server 2012 on a virtualization host that has 32 or more logical processors. SR-IOV networking requires the following: • A host system that supports SR-IOV (such as Intel VT-d2), including chipset support for interrupt and DMA remapping and proper firmware support to enable and describe the platform’s SR-IOV capabilities to the operating system. • An SR-IOV–capable network adapter and driver in both the management operating system (which runs the Hyper-V role) and each virtual machine where a virtual function is assigned. 26
  37. 37. Note to presenter: 3 clicks to build. RSS improvements. RSS spreads the processing of incoming network traffic and its interrupts over multiple processors, so that a single processor isn't required to handle all I/O interrupts, as was common with earlier versions of Windows Server. Active load balancing between the processors tracks the load on the different CPUs and then transfers the interrupt processing as needed. You can select which processors will be used for handling RSS requests. RSS now works with in-box NIC Teaming, also called Load Balancing and Failover (LBFO), addressing an issue in previous versions of Windows Server, where administrators had to choose between NIC Teaming and RSS. RSS also works for User Datagram Protocol (UDP) traffic, and it can be managed and debugged by using Windows Management Instrumentation (WMI) and Windows PowerShell. 27
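The RSS behavior above, including selecting which processors handle RSS, can be configured with the NetAdapter cmdlets. A minimal sketch; the adapter name "Ethernet" is a placeholder.

```powershell
# Enable RSS on a physical network adapter.
Enable-NetAdapterRss -Name "Ethernet"

# Restrict RSS to a specific range of logical processors,
# here processors 2 through 5 on this host.
Set-NetAdapterRss -Name "Ethernet" -BaseProcessorNumber 2 -MaxProcessors 4

# Inspect the resulting processor assignments and indirection table.
Get-NetAdapterRss -Name "Ethernet"
```

Pinning RSS to a processor range is typically done to keep network interrupt load away from cores reserved for application workloads.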
  38. 38. RSC improvements. RSC improves server scalability by reducing the overhead of processing large amounts of network I/O traffic, offloading some of the work to network adapters. In early testing, RSC reduced CPU usage by up to 20 percent. It does this by holding network packets in the NIC, coalescing them, and sending a larger packet to the CPU, so that the CPU processes one bigger buffer in one go instead of multiple smaller packets each time. The size of each coalesced packet depends on the network connection between the client and the server and on the speed of the client and server network adapters. By coalescing packets at the NIC, RSC improves throughput considerably. 28
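RSC can be turned on and inspected per adapter with the in-box cmdlets. A minimal sketch; "Ethernet" is a placeholder adapter name.

```powershell
# Enable RSC (for both IPv4 and IPv6 traffic) on a physical adapter.
Enable-NetAdapterRsc -Name "Ethernet"

# Check whether RSC is enabled and operational for each IP version;
# the operational state can differ from the enabled state if, for example,
# a conflicting feature such as IPsec task offload is in use.
Get-NetAdapterRsc -Name "Ethernet"
```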
  39. 39. Note to presenter: Let the third animation run until all the dots stop moving and until the dots can be seen on the screen. Dynamic Virtual Machine Queues (D-VMQs) • Windows Server 2008 R2: Offload routing and filtering of network packets to the network adapter (enabled by hardware-based receive queues) to reduce host overhead. • New in Windows Server 2012: Dynamically distribute incoming network traffic processing to host processors (based on processor usage and network load). 29
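The dynamic VMQ behavior described above is configured on the physical adapter that backs the virtual switch. A minimal sketch; "Ethernet" is a placeholder adapter name, and the processor range shown is illustrative.

```powershell
# Enable VMQ on the physical adapter and constrain which host
# processors are eligible to service the hardware receive queues.
Set-NetAdapterVmq -Name "Ethernet" -Enabled $true -BaseProcessorNumber 2 -MaxProcessors 4

# Show VMQ capability and current queue allocation; with dynamic VMQ,
# Windows Server 2012 rebalances queue-to-processor assignments as load changes.
Get-NetAdapterVmq -Name "Ethernet"
Get-NetAdapterVmqQueue -Name "Ethernet"
```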
  40. 40. Populate the demo title depending upon which demo you plan to deliver. If you don’t plan to deliver demos, please hide this slide. Click-through demos are (or will be) located at \\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Click Thru Demos\Networking Demo environment build instructions are located here: \\scdemostore01\demostore\Windows Server 2012\WS 2012 Demo Series\Demo Builds 30
  41. 41. Manageability is one of the most important challenges customers face: the ability to automate regular tasks; control over the entire IP address infrastructure, no matter the size of your organization; the ability to get the best performance in a multi-site environment; and, for enterprises and hosting providers, the ability to track resource usage and build chargeback/show-back solutions. 31
  42. 42. Windows Server 2012 introduces IP Address Management (IPAM), a framework for discovering, monitoring, auditing, and managing the IP address space and the associated infrastructure servers on a corporate network. IPAM gives you a choice of two main architectures: Distributed, where an IPAM server is deployed at every site in an enterprise. This mode of deployment is generally preferred because it reduces the network latency involved in managing infrastructure servers from a central location. Centralized, where one IPAM server is deployed for the entire enterprise. A centralized IPAM server can also be deployed alongside the distributed mode, giving administrators a single console to visualize, monitor, and manage the entire IP address space of the network and the associated infrastructure servers. An example of the distributed IPAM deployment method is shown in this figure, with one IPAM server located at the corporate headquarters and others at each branch office. There is no communication or database sharing between different IPAM servers in the enterprise. If multiple IPAM servers are deployed, you can customize the scope of discovery for each IPAM server or filter the list of managed servers. A single IPAM server might manage a specific domain or location, perhaps with a second IPAM server configured as a backup. 32
  43. 43. IPAM monitoring IPAM periodically attempts to locate the domain controllers, DNS servers, and DHCP servers on the network that are within the scope of discovery that you specify; it also allows the manual addition of Network Policy Server (NPS) servers. You must choose whether these servers are managed by IPAM or unmanaged. To be managed by IPAM, server security settings and firewall ports must be configured to allow the IPAM server access to perform the required monitoring and configuration functions. You can choose to manually configure these settings or use Group Policy objects (GPOs) to configure them automatically. If you choose the automatic method, settings are applied when a server is marked as managed and removed when it is marked as unmanaged. The IPAM server communicates with managed servers by using a remote procedure call (RPC) or WMI interface, as shown here. IPAM monitors domain controllers and servers running NPS for IP address tracking purposes. In addition to monitoring functions, several DHCP server and scope properties can be configured by using IPAM. Zone status monitoring and a limited set of configuration functions are also available for DNS servers. IPAM supports Active Directory–based auto-discovery of DNS and DHCP servers on the network. Discovery is based on the domains and server roles selected during configuration of the scope of discovery. IPAM discovers the domain controllers, DNS servers, and DHCP servers in the network and confirms their availability based on role-specific protocol transactions. In addition to automatic discovery, IPAM also supports the manual addition of a server to the list of servers in the IPAM system. Managed servers Configuring the manageability status of a server as Managed indicates that it is part of the IPAM server’s managed environment. Data is retrieved from managed servers to display in various IPAM views. The type of data that is gathered depends on the server role. 
Unmanaged servers Configuring the manageability status of a server as Unmanaged indicates that the server is considered to be outside the IPAM server’s managed environment. No data is collected by IPAM from these servers. IPAM data collection tasks IPAM schedules the following tasks to retrieve data from managed servers to 32
  44. 44. populate the IPAM views for monitoring and management. You can also modify these tasks by using Task Scheduler. • Server Discovery. Automatically discovers domain controllers, DHCP servers, and DNS servers in the domains that you select. • Server Configuration. Collects configuration information from DHCP and DNS servers for display in IP address space and server management functions. • Address Use. Collects IP address space use data from DHCP servers for display of current and historical use. • Event Collection. Collects DHCP and IPAM server operational events. Also collects events from domain controllers, NPS, and DHCP servers for IP address tracking. • Server Availability. Collects service status information from DHCP and DNS servers. • Service Monitoring. Collects DNS zone status events from DNS servers. • Address Expiry. Tracks IP address expiry state and logs notifications. 32
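The GPO-based automatic configuration of managed servers described above can be bootstrapped from PowerShell. A minimal sketch; the domain name, GPO prefix, and server FQDN are placeholder values.

```powershell
# Install the IPAM server feature on Windows Server 2012.
Install-WindowsFeature IPAM -IncludeManagementTools

# Provision the Group Policy objects that configure managed DHCP, DNS,
# and domain controller servers to allow the IPAM server the required access.
Invoke-IpamGpoProvisioning -Domain contoso.com -GpoPrefixName IPAM1 `
    -IpamServerFqdn ipam1.contoso.com
```

After provisioning, marking a server as Managed in the IPAM console applies the corresponding GPO settings to it, and marking it Unmanaged removes them, as described above.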
  45. 45. Hyper-V in Windows Server 2012 helps providers build a multitenant environment in which virtual machines can be served to multiple clients in a more isolated and secure way, as shown in the figure. Because a single client may have many virtual machines, aggregation of resource use data can be a challenging task. However, Windows Server 2012 simplifies this task by using resource pools, a feature available in Hyper-V. Resource pools are logical containers that collect the resources of the virtual machines that belong to one client, permitting single-point querying of the client’s overall resource use. Hyper-V Resource Metering has the following features: • It uses resource pools, logical containers that collect resources of the virtual machines that belong to one client and allow single-point querying of the client’s overall resource use. • It works with all Hyper-V operations. • It helps ensure that movement of virtual machines between Hyper-V hosts (such as through live, offline, or storage migration) doesn’t affect the collected data. • It uses Network Metering Port ACLs to differentiate between Internet and intranet traffic, so that providers can measure incoming and outgoing network traffic for a given IP address range. Resource Metering can measure the following: 33
  46. 46. • The average CPU, in megahertz, used by a virtual machine over a period of time. • The average physical memory, in megabytes, used by a virtual machine over a period of time. • The lowest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time. • The highest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time. • The highest amount of disk space capacity, in megabytes, allocated to a virtual machine over a period of time. • The total incoming network traffic, in megabytes, for a virtual network adapter over a period of time. • The total outgoing network traffic, in megabytes, for a virtual network adapter over a period of time. 33
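The metrics listed above are collected per virtual machine or per resource pool once metering is enabled. A minimal sketch using the in-box Hyper-V cmdlets; "Tenant01" and "ClientA" are placeholder names.

```powershell
# Enable metering on a single virtual machine.
Enable-VMResourceMetering -VMName "Tenant01"

# Report average CPU and memory use, disk allocation, and total
# network traffic collected since metering was enabled or last reset.
Measure-VM -VMName "Tenant01"

# Group one client's resources into a pool and meter the pool as a unit.
New-VMResourcePool -Name "ClientA" -ResourcePoolType Memory
Enable-VMResourceMetering -ResourcePoolName "ClientA" -ResourcePoolType Memory
Measure-VMResourcePool -Name "ClientA"
```

Because the counters move with the virtual machine during live, offline, or storage migration, a hoster can run `Measure-VM` on whichever host currently owns the VM and still get the cumulative figures.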
  47. 47. The recent trend of moving content servers to off-premises locations means that these servers frequently must deliver content to remote users over WAN connections. The additional pressure on WAN connections can increase networking costs and reduce productivity by slowing content delivery speeds. BranchCache provides a solution for these IT and business needs. BranchCache enables content from file and web servers on a WAN to be cached on computers at a local branch office, which can improve application response time and reduce WAN traffic. Cached content can either be distributed across peer client computers (distributed cache mode) or centrally hosted on a server (hosted cache mode). BranchCache was first introduced in Windows 7 and Windows Server 2008 R2; improvements in Windows 8 and Windows Server 2012 include a streamlined deployment process, the ability to scale to larger offices, and an improved ability to optimize bandwidth over WAN connections between BranchCache-enabled content servers and remote client computers. This functionality lets remote client computers access data in a secure, efficient, and scalable way. Deployment of multiple hosted cache servers. Windows Server 2012 provides the ability to scale hosted cache–mode deployments for offices of any size by allowing you to deploy as many hosted cache servers as are needed at a location. BranchCache no longer requires office-by-office configuration. Deployment is streamlined because there’s no requirement for a separate Group Policy Object (GPO) for each location. Only a single GPO that contains a small group of settings is required 34
  48. 48. to deploy BranchCache in an organization of any size, from a small business to a large enterprise. Client computer configuration is automatic. Clients can be configured through Group Policy as distributed cache–mode clients by default. However, they search for a hosted cache server, and if one is discovered, clients automatically self-configure as hosted cache–mode clients. Cache data is kept encrypted, and hosted cache servers do not require server certificates. BranchCache security includes improved data encryption, and provides data security without requiring a public key infrastructure or additional drive encryption. BranchCache provides tools to manipulate data and preload the content at remote locations. Now you can push content to branch offices so that it’s immediately available when the first user requests it. This capability allows you to distribute content during periods of low WAN use. BranchCache is now manageable with Microsoft Windows PowerShell™ and Windows Management Instrumentation. This capability enables scripting and remote management of BranchCache content servers, hosted cache servers, and client computers. 34
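The preloading and mode-configuration capabilities described above map to the BranchCache cmdlets. A minimal sketch; the paths D:\share and D:\package are placeholders.

```powershell
# On a branch server: configure it as a hosted cache server and register
# a service connection point so clients can discover it automatically.
Enable-BCHostedServer -RegisterSCP

# On clients (normally done via Group Policy): distributed cache mode.
Enable-BCDistributed

# On a content server: pre-generate content information and export it as
# a cache package so content can be pushed to the branch during periods
# of low WAN use, as described above.
Publish-BCFileContent -Path D:\share -StageData
Export-BCCachePackage -Destination D:\package
```

The exported package can then be imported on the branch hosted cache server, making the content available to the first user who requests it without a WAN transfer.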
  49. 49. Self-explanatory slide. Every operation is now available through PowerShell, providing the infrastructure to build solid automation for regular tasks so that IT pros can spend their time on what is more important to their career and business. Whether you manage a single server or an entire datacenter, PowerShell is your friend. Every feature we saw over the last hour can be controlled with PowerShell. 35
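A quick way to see the breadth of networking coverage claimed above is to enumerate the cmdlets the networking modules ship in the box; the module names below are the in-box Windows Server 2012 modules.

```powershell
# List the networking-related cmdlets available in Windows Server 2012.
Get-Command -Module NetAdapter, NetTCPIP, NetLbfo, DnsServer, DhcpServer

# Built-in help and worked examples are available for each cmdlet.
Get-Help New-NetLbfoTeam -Examples
```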
  50. 50. 36
  51. 51. We have been working with partners in the industry to bring customers a great variety of choices and, in addition, to provide a platform that is second to none, so that customers and partners have the infrastructure to build customized solutions on top of Windows. Windows has always been about providing users a variety of choices, and Windows Server 2012 is no exception. Let’s walk through some of the features and the work we have done with our partners. 37
  52. 52. Windows Server 2012 provides improved multitenant security for customers on a shared infrastructure as a service (IaaS) cloud through the new Hyper-V Extensible Switch. The Hyper-V Extensible Switch is a layer-2 virtual network switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network. Management features are built into the Hyper-V Extensible Switch that allow you to troubleshoot and resolve problems on Hyper-V Extensible Switch networks: Windows PowerShell and scripting support. Windows Server 2012 provides Windows PowerShell cmdlets for the Hyper-V Extensible Switch that allow you to build command-line tools or automated scripts for setup, configuration, monitoring, and troubleshooting. Windows PowerShell also enables third parties to build their own tools to manage the virtual switch. Unified tracing and enhanced diagnostics. Unified tracing has been extended into the Hyper-V Extensible Switch to allow you to trace packets and events through the Hyper-V Extensible Switch and its extensions. 38
  53. 53. Open platform to fuel plug-ins. The Hyper-V Extensible Switch is an open platform that allows plug-ins to sit in the virtual switch between all traffic, including virtual machine-to-virtual machine traffic. Extensions can provide traffic monitoring, firewall filters, and switch forwarding. To jump-start the ecosystem, several partners will announce extensions with the unveiling of the Hyper-V Extensible Switch; there is no “one switch only” solution for Hyper-V. Core services are free. Core services are provided for extensions; for example, all extensions have live migration support by default, and no special coding for services is required. Windows reliability/quality. Extensions gain a high level of reliability and quality from the strength of the Windows platform and the Windows Logo Program certification, which sets a high bar for extension quality. Unified management. The management of extensions is integrated into Windows management through Windows PowerShell cmdlets and WMI scripting: one management story for all. Easier to support. Unified tracing means it’s quicker and easier to diagnose issues when they arise, and less downtime increases the availability of services. 39
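The unified-management point above can be illustrated with the in-box switch-extension cmdlets. A minimal sketch; "External" is a placeholder switch name, and the extension shown is the built-in Windows Filtering Platform extension.

```powershell
# List the extensions installed on a virtual switch, whether from
# Microsoft or from third-party vendors.
Get-VMSwitchExtension -VMSwitchName "External"

# Enable a specific extension by name.
Enable-VMSwitchExtension -VMSwitchName "External" -Name "Microsoft Windows Filtering Platform"

# Confirm which extensions are enabled and running.
Get-VMSwitchExtension -VMSwitchName "External" | Select-Object Name, Enabled, Running
```

Because third-party extensions surface through these same cmdlets, one scripted workflow covers monitoring, firewall, and forwarding extensions alike.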
  54. 54. Over the last couple of years we have been working extensively with many hardware partners. The companies listed here should give you a good idea of the breadth of partners we are working with on various networking features. 40
  55. 55. 41
  56. 56. 42
  57. 57. If you consider the challenges and needs we saw in the previous slide, you can split them into five different scenarios: delivering continuously available services, with the ability to make applications resilient to underlying hardware failure and maintain high SLAs; reducing the complexity of the network infrastructure, making migration across clouds and IP portability easier; improving network and system performance with industry-standard hardware while being ready for the next wave of hardware improvements; having a great management layer that lets you manage both your system and network infrastructure with the simplicity of managing a single server; and finally the work we have done with our partner ecosystem to enable a diverse set of choices to better control your network infrastructure. 43
  58. 58. 44
  59. 59. 45
  60. 60. 8/29/2012 46 Windows Server Management Marketing
  61. 61. 47
