Our job is to take that transformative journey with you – to help you take advantage of new innovations that can give you the answers.

Across all these areas – from infrastructure, to data, to apps and devices – you need to transform IT to deliver services that enable business value:
- Deliver a consistent and GREAT user experience from anywhere – no matter the device.
- On the app front, enable modern, agile apps – extend app functionality to address new app and social patterns, and get the app dev backlog addressed.
- On the data front, it’s about big data, small data, all data. You must make it easier for everyone to access and perform analytics on any data, any size, from anywhere – and on data wherever it lives – so the many, not the few, can uncover insights.
- Transform your infrastructure from managing server by server to managing at datacenter scale – and with that, deliver on-demand scale and real-time performance: infrastructure that’s designed to withstand failure, not just recover from it after the fact, with resources managed at datacenter scale.

Microsoft’s solution to enabling this is the Cloud OS – our vision of the unified platform built for modern business. Let’s talk a bit more about what makes Microsoft’s Cloud OS vision different.
<one click per challenge and solution>
One of the key things that all of us have got to do, and one of our core responsibilities: most of you in this room are responsible for infrastructure in your organizations. You have a service level, you've got budgets, and the demands being asked of you are continually escalating – they're going up, while your budgets usually stay about the same. Yet you're being asked to deliver an elastic, always-available, highly scalable fabric across all your customers' needs.

If you are an ITPro in an enterprise, you are really trying to get ahead of the mundane tasks that simply keep the lights on in your datacenter, and instead focus on impactful projects that can help your organization achieve a substantial competitive advantage in the market, while being a great partner to your application owners.

If you are an ITPro in a service provider organization, you are likely most concerned with how to differentiate your offerings so you can better compete with the Googles and Amazons of the world. Being able to quickly offer new, value-add application services while keeping a laser focus on your costs will make your business more successful.

Whether you are a professional services provider company or provide professional IT services within your own company, there are a few things you should expect from a modern datacenter and cloud solution. You want to be able to:
- Lower infrastructure cost and increase operational efficiencies
- Automate repeatable tasks to focus on strategic projects
- Benefit from a high level of cross-platform interoperability
- Build and deploy modern, self-service and highly available applications that can span datacenters
- Unify your device environment and protect your corporate information so you can empower your users
We start with Service Manager as the repository for our CMDB, and then Orchestrator, which provides our automation engine. We have a bi-directional connector that allows automation activities to flow into Service Manager, and allows Service Manager to issue and execute automation workflows within Orchestrator. We then have our other external repositories – either System Center related, line-of-business applications such as Microsoft Exchange (user and admin, which is enabled with R2), or 3rd-party management tools – and inbound connectors to pull configuration items and automation data into our CMDB, where they are reconciled so that we’re looking at a single record for a piece of our infrastructure, even though some pieces came from Virtual Machine Manager, Operations Manager, Configuration Manager, or even Active Directory.

Once we have our reconciled view of the data within our infrastructure, we can do something with it. Within the System Center suite, we have a bi-directional interface through Orchestrator to issue automation commands to System Center products, 3rd-party tools, or line-of-business applications (if you build your own integration pack using the SDK) – to drive automation within those tools, respond to errors, and deliver and manage changes in your infrastructure. With R2, we’ve added integration packs to enable Azure cloud management.

And lastly, as part of doing all this work, we have to do two things: keep people aware of what’s happening, and be able to report on it. We provide inbound and outbound notification capabilities through Service Manager and Orchestrator to Exchange, as well as to the Service Manager data warehouse for dashboarding and reporting.
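The reconciliation step described above – merging configuration items (CIs) that arrive from several sources into a single record per asset – can be sketched in a few lines. This is an illustrative model only, not the Service Manager connector logic; the field names and sources are made up.

```python
# Toy CMDB reconciliation: CIs for the same asset arrive from several sources
# (e.g. VMM, Operations Manager, Active Directory) and are merged into one
# record, keyed on a shared, normalized identifier.

def reconcile(records):
    """Merge CI records sharing the same 'name' into one record per asset.

    Earlier sources win on conflicts; later sources only fill missing fields.
    """
    merged = {}
    for record in records:
        key = record["name"].lower()          # normalize the shared key
        target = merged.setdefault(key, {})
        for field, value in record.items():
            target.setdefault(field, value)   # first writer wins per field
    return merged

# Example: the same host reported by two hypothetical sources.
cis = [
    {"name": "HOST01", "source": "VMM", "cpu_count": 16},
    {"name": "host01", "source": "OpsMgr", "os": "Windows Server 2012 R2"},
]
cmdb = reconcile(cis)
# cmdb["host01"] now combines fields from both sources into a single record.
```

The "first writer wins" policy stands in for whatever precedence rules a real CMDB would apply between connectors.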
Now, before we deploy a new Hyper-V-enabled image to a blank server, there’s a little bit of work we may want to do to streamline the process and ensure we’re sending the correct image and configuration down to the host in question. Before initiating the actual deployment of the image, VMM will initiate a collection of information about the target host and allow the IT administrator to make a few selections and configuration choices, before VMM automates the remainder of the tasks.

<click> The first key task that VMM executes is an OOB reboot, or wake-up. For this to work, VMM needs the IP address of the Baseboard Management Controller of the target host. Admins can provide the IP specifically, or scan a range; once the BMC is located, VMM initiates the wake-up or reboot, depending on the current state of the target host.

<click> VMM will then orchestrate the target host to PXE boot and attach to the WDS server that we talked about earlier. VMM and the WDS server work in harmony to first <click> authorize that host to PXE boot and attach to the WDS server, and then <click> have the WDS server deliver a VMM-specific WinPE image down to the target host. It’s important to note that not just any physical server or desktop PXE booting off the WDS server will receive this WinPE image – VMM has provided the WDS server with specific information on what to send to the host located at the IP/MAC address specified earlier in the process.

<click> Once WinPE is loaded, a number of pre-defined scripts execute automatically, triggering a collection of information about network adapters and disks, and this information <click> is sent back to VMM and presented in the wizard. The user can then continue on, knowing specific details about the networking and storage configuration of that target host, and configure the deployment accordingly.

<next slide>
Once the administrator has all the information from deep discovery, the admin continues the wizard, provides a computer name, configures networks, local storage options, etc., and chooses a physical computer profile. The physical computer profile is a set of configuration options that VMM uses to standardize the deployment of new hosts into the infrastructure. These profiles have a number of configuration options relating to network, storage, drivers, and naming, but most importantly, the physical computer profile has a Hyper-V image, contained within a VHD or VHDX file, assigned to it. These physical computer profiles, drivers, and VHDX files are stored in VMM’s library.

So, when the admin finally finishes the wizard, VMM starts the deployment process. The host will have been shut down after deep discovery, <click> so VMM will first wake up that host using the BMC. <click> It will coordinate the host to boot from PXE and again <click> work in conjunction with the WDS server to allow this particular host to PXE boot and connect to the WDS server itself. <click> Once connected, the host will download a WinPE image and <click> begin executing the custom scripts and partition configuration.

Once this stage has completed, <click> VMM will push the VHD/VHDX file down to the host. Note that this configures the host for a boot-from-VHD configuration, rather than a traditional Windows Server install. VMM pushes the image down from the library onto the host’s newly partitioned hard drive. <click> Once complete, VMM will inject drivers, <click> run the customization wizards, join the host to the domain, and enable Hyper-V if required. While doing so, it brings the new host under VMM’s complete management control and finishes off the process with any post-install scripts that it needs itself, or that have been added by the administrator. The admin is left with a new Hyper-V host, which can now accept virtual machines.

<next slide>
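The two phases narrated above – deep discovery, then the actual image deployment – are strictly ordered sequences of steps. As a minimal sketch (the step names follow the narration; nothing here touches real VMM, BMC, or WDS APIs):

```python
# Toy model of VMM's two-phase bare-metal flow: deep discovery first, then
# deployment. Each phase is an ordered list of step names from the narration.

DISCOVERY_STEPS = [
    "oob_reboot",            # wake/reboot the host via the BMC
    "pxe_boot",              # host PXE boots
    "authorize_pxe",         # WDS authorizes this specific host
    "download_winpe",        # VMM-specific WinPE image delivered
    "collect_inventory",     # scripts gather network adapter / disk data
    "send_data_to_vmm",      # results surface in the wizard
]

DEPLOYMENT_STEPS = [
    "oob_reboot",
    "pxe_boot",
    "authorize_pxe",
    "download_winpe",
    "partition_disks",       # custom scripts + partition configuration
    "push_vhdx",             # boot-from-VHD image pushed from the VMM library
    "inject_drivers",
    "join_domain",
    "enable_hyper_v",
    "run_post_install_scripts",
]

def run_phase(steps, log):
    """Execute a phase in order; here we only record each step."""
    for step in steps:
        log.append(step)     # a real implementation would act on the host

log = []
run_phase(DISCOVERY_STEPS, log)
run_phase(DEPLOYMENT_STEPS, log)
```

The point of the model is the ordering: inventory collection always precedes the VHDX push, because the wizard choices depend on the discovered hardware.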
Virtual Fibre Channel in Hyper-V
Many enterprises have already invested in Fibre Channel SANs, deploying them within their datacenters to address growing storage requirements. These customers often want the ability to utilize this storage from within their virtual machines instead of having the storage accessible to and used only by the Hyper-V host. In addition, customers are looking to achieve true SAN line speed from the VMs to the SAN.

Unmediated SAN Access
Virtual Fibre Channel for Hyper-V provides the guest operating system with unmediated access to a SAN by using a standard World Wide Name (WWN) that is associated with a virtual machine. Hyper-V lets you use Fibre Channel SANs to virtualize workloads that require direct access to SAN logical unit numbers (LUNs). Fibre Channel SANs also let you operate in new scenarios, such as running the Windows Failover Clustering feature inside the guest operating system of a virtual machine connected to shared Fibre Channel storage.

A Hardware-Based I/O Path to the Windows Software Virtual Hard Disk Stack
Mid-range and high-end storage arrays include advanced storage functionality that helps offload certain management tasks from the hosts to the SANs. Virtual Fibre Channel offers an alternative, hardware-based I/O path to the Windows software virtual hard disk stack. This path lets you use the advanced functionality of your SANs directly from within Hyper-V virtual machines. For example, Hyper-V users can offload storage functionality (such as taking a snapshot of a LUN) to the SAN hardware simply by using a hardware Volume Shadow Copy Service (VSS) provider from within a Hyper-V virtual machine.

Live Migration Support
To support live migration of virtual machines across Hyper-V hosts while maintaining Fibre Channel connectivity, two WWNs, Set A and Set B, are configured for each virtual Fibre Channel adapter. Hyper-V automatically alternates between the Set A and Set B WWN addresses during live migration.
This helps ensure that all LUNs are available on the destination host before the migration and that no downtime occurs during the migration. The live migration process that maintains Fibre Channel connectivity is illustrated on the slide.

N_Port ID Virtualization (NPIV)
NPIV is a Fibre Channel facility that lets multiple N_Port IDs share a single physical N_Port. This lets multiple Fibre Channel initiators occupy a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are called for. Virtual Fibre Channel for Hyper-V guests uses NPIV (T11 standard) to create multiple NPIV ports on top of the host’s physical Fibre Channel ports. A new NPIV port is created on the host each time a virtual HBA is created inside a virtual machine. When the virtual machine stops running on the host, the NPIV port is removed.

Flexible Host-to-SAN Connectivity
Hyper-V lets you define virtual SANs on the host to accommodate scenarios in which a single Hyper-V host is connected to different SANs via multiple Fibre Channel ports. A virtual SAN defines a named group of physical Fibre Channel ports that are connected to the same physical SAN. For example, assume that a Hyper-V host is connected to two SANs – a production SAN and a test SAN. The host is connected to each SAN through two physical Fibre Channel ports. In this example, you might configure two virtual SANs – one named “Production SAN” that has the two physical Fibre Channel ports connected to the production SAN, and one named “Test SAN” that has the two physical Fibre Channel ports connected to the test SAN. You can use the same technique to name two separate paths to a single storage target.

Four vFC Adapters per VM
You can configure as many as four virtual Fibre Channel adapters on a virtual machine and associate each one with a virtual SAN. Each virtual Fibre Channel adapter is associated with one WWN address, or two WWN addresses to support live migration.
Each WWN address can be set automatically or manually.

Multipath I/O (MPIO)
Hyper-V in Windows Server 2012 R2 uses Multipath I/O (MPIO) functionality to help ensure optimal connectivity to Fibre Channel storage from within a virtual machine. You can use MPIO functionality with Fibre Channel in the following ways:
- Virtualize workloads that use MPIO. Install multiple Fibre Channel ports in a virtual machine, and use MPIO to provide highly available connectivity to the LUNs that the host can access.
- Configure multiple virtual Fibre Channel adapters inside a virtual machine, and use a separate copy of MPIO within the guest operating system of the virtual machine to connect to the LUNs that the virtual machine can access. This configuration can coexist with a host MPIO setup.
- Use different device-specific modules (DSMs) for the host or each virtual machine. This approach permits migration of the virtual machine configuration, including the configuration of DSM and connectivity between hosts, and compatibility with existing server configurations and DSMs.
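The Set A / Set B WWN scheme described above can be modeled simply: each virtual Fibre Channel adapter carries two address sets, and every live migration brings up the standby set on the destination before the active set is released, then swaps roles. This is a conceptual sketch only; the WWN values are made up and no Hyper-V API is involved.

```python
# Toy model of per-adapter WWN Set A / Set B alternation across live
# migrations. The LUN stays reachable because the destination host logs in
# with the standby set before the source releases the active set.

class VirtualFcAdapter:
    def __init__(self, wwn_set_a, wwn_set_b):
        self.sets = {"A": wwn_set_a, "B": wwn_set_b}
        self.active = "A"                      # set in use on the current host

    def live_migrate(self):
        """Return the WWN set the destination host comes up with."""
        standby = "B" if self.active == "A" else "A"
        # Destination connects with the standby set while the source is still
        # connected with the active set; after cutover the roles swap.
        self.active = standby
        return self.sets[standby]

adapter = VirtualFcAdapter("C003FF0000FF0000", "C003FF0000FF0001")
first = adapter.live_migrate()   # first migration comes up on Set B
second = adapter.live_migrate()  # second migration alternates back to Set A
```

Alternating sets is why two WWN addresses per adapter are enough for any number of successive migrations: only two hosts ever need Fibre Channel logins for the adapter at once.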
Enterprise application and IT workloads are no longer restricted to the four walls of the corporate datacenter. Increasingly, corporations are looking to move development, test, and production workloads to hosted and public clouds in order to achieve flexibility and agility and to reduce costs, as they trade capital expenditure invested in hardware for operational expenditure with service providers or public cloud subscriptions, paying only for what they use. Initially, Infrastructure-as-a-Service (IaaS) is the service most appealing to enterprise customers for its straightforward understandability, ease and speed of deployment, and lack of lock-in. Service providers want a free entry-level offering to acquire customers, and then a mechanism for easy up-sell to higher-margin offerings. And finally, customization, integration, and branding are essential.
We’re going to take a look at how enterprises and service providers can offer a consistent experience in this section, but I want to start with how Windows Azure works. Windows Azure subscribers (let’s call them customers) access the public cloud through a website, known as the management or customer portal. Basically, this portal is the gateway to a wide range of IT services that are delivered on top of the compute, storage, and network resources found in Microsoft datacenters around the world. At each of these datacenters, Microsoft IT administrators manage resources, allocate those resources to the various services being provided, and manage customer subscriptions. In addition, they bill customers for the services consumed. For the customer, everything is taken care of, so they get the services they need almost instantly. For instance, a developer could provision a test environment in minutes – a far shorter time than many face in their enterprise environments.
Now let’s take a look at how this translates to an enterprise or service provider datacenter. You can see it looks exactly the same. The only difference is that it’s on-premises, rather than in the cloud. Who are the customers? Well, if you’re a service provider, they’re the customers who pay you to provide IT services. If you’re an enterprise, they’re the employees who consume IT services. Within your datacenter, your administrator performs the exact same functions as in the Microsoft datacenters that Windows Azure uses. He or she configures and defines the resources that support your customers and manages access to services. Admins can also monitor services consumed, so that service providers can price and bill, and enterprises can charge users, departments, or divisions.
Here’s how: The Windows Azure Pack.Windows Azure Pack for Windows Server is a collection of Windows Azure technologies, available to Microsoft customers at no additional cost for installation into your data center. It runs on top of Windows Server 2012 R2 and System Center 2012 R2 and, through the use of the Windows Azure technologies, enables you to offer a rich, self-service, multi-tenant cloud, consistent with the public Windows Azure experience.That’s the long version. Here’s the short one. WAP is a free download that puts Azure in your datacenter.
This is the Management Portal for Tenants, and it is strongly consistent with the Windows Azure developer portal. Tenant users can list items, view their status, and provision new items.
(Compare to Azure with image)
Enterprise-class
The Windows Azure Pack is built on the foundation of Windows Server and System Center – trusted by enterprises the world over, and responsible for delivering the computing power, virtualization, and management that support critical application workloads. Windows Azure consistency, in both end-user experience and services, ensures that IT administrators can reuse their skills and automation across the Cloud OS destinations and move workloads that utilize the common set of services offered across the Cloud OS. The Web Sites service provides a consistent, scalable, reliable application platform for running websites and web applications.
- Builds on a familiar foundation of Windows Server and System Center.
- Isolated virtual networks for multi-tenant workloads.
- Extensibility and integration.
- Windows Azure code running in your datacenter.
- Highly scalable virtualization and management platform.

Simple and cost-effective
The multi-tenant infrastructure of the Windows Azure Pack enables efficient, shared usage of commodity computing, storage, and network resources. Load balancing for web applications and virtual machine roles lets you directly control the scale-out resources required by your application workload. Out-of-box capabilities enable the Windows Azure Pack to provide a ready-built Web PaaS and IaaS solution for enterprises and service providers to offer self-service provisioning and management of IT services. Utilizing the advanced features in Windows Server and System Center, you can build the solution on inexpensive, industry-standard hardware.

Open and interoperable
The Windows Azure Pack provides a wide range of customization and integration possibilities. The Management Portal can be branded or completely replaced using the Service Management API. Billing can be integrated through the supplied API. The Web Sites service supports popular web application platforms including ASP.NET, Node.js, and PHP.
In addition, the Web Sites service supports popular development tools and integrates directly with source control systems including GitHub, Bitbucket, DropBox, and Team Foundation Server.
- Easy VM and Web application portability.
- Private, hosted, and public cloud.
- Broad application platform support including .NET, Node.js, PHP.
- OData REST API for portal-level integration.
- Service Bus for asynchronous distributed application integration.
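The portal-level integration mentioned above goes through the Service Management (OData REST) API. As a minimal sketch of what a client call might look like: this only *constructs* an authenticated request and stops short of the network, and the host name, port, and resource path are hypothetical placeholders, not documented Windows Azure Pack endpoints.

```python
# Sketch of building an authenticated GET against a tenant-facing REST API.
# Endpoint, port, subscription ID, and resource path are illustrative only.
import urllib.request

TENANT_API = "https://wap.example.com:30005"           # assumed endpoint
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"  # placeholder ID

def build_request(resource, token):
    """Build (but do not send) a GET request for a tenant API resource."""
    return urllib.request.Request(
        f"{TENANT_API}/{SUBSCRIPTION}/{resource}",
        headers={
            "Authorization": f"Bearer {token}",   # bearer-token auth
            "Accept": "application/json",         # ask for JSON payloads
        },
        method="GET",
    )

req = build_request("services/webspaces", "fake-token")
# urllib.request.urlopen(req) would send it; we stop short of the network.
```

A billing or portal replacement built on the Service Management API would layer exactly this kind of authenticated REST call, with the real endpoints taken from the WAP deployment.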
Manage your enterprise with System Center
Requirement to provide scalable and
Always-on expectations of the
IT budget pressure even with
Volume of Web and cloud applications continues to rise.
Complex IT environments that are tough to manage.
Evolution of applications to hybrid cloud deployment models.
Simple and cost-effective
[Slide diagram: System Center capability areas – Backup & Disaster Protection, Backup to Cloud; Hypervisor, Server App-V, Cloud Mgmt, Self-Service, IaaS; OS / Software Deploy, Patching and Settings Mgmt, 3rd-party OS, Antivirus]
Empower people to be anywhere on whatever device they choose
Reduce costs by unifying IT management
Improve IT effectiveness
Provision from the admin console
Most capabilities as on-prem, except:
- OSD and task sequences
- Full BranchCache support
- Software Updates from Microsoft Update
- In-console content monitoring
- Ability to monitor storage and outbound traffic usage
What's New in Windows Intune: http://technet.microsoft.com/library/hh452635.aspx?ITPID=technet
Push Software Distribution
Software Update Management
Windows 8 Apps
Windows 8 Apps in the Windows Store
Pull Software Distribution
*Intel® System on Chip (SoC)
Management of Hyper-V
- Supports up to 1,000 Hyper-V hosts & 25,000 virtual machines per VMM server
- Supports Hyper-V hosts in trusted & untrusted domains, disjoint namespaces & perimeter networks
- Supports Hyper-V from 2008 R2 SP1 through to 2012 R2
- VMM can automatically transform a physical x64 Windows Server into a Hyper-V host
- Integrates with Baseboard Management Controllers to deploy Hyper-V to bare-metal physical servers
Deep Discovery Prior to Deployment
Through integration with the BMC, VMM can wake a physical server & collect information to determine appropriate deployment
1. OOB reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download VMM-customized WinPE
5. Execute a set of calls in WinPE to collect hardware inventory data (network adapters & disks)
6. Send hardware data back to VMM
Centralized, Automated Bare-Metal Hyper-V Deployment
Post-deep discovery, VMM will deploy a Hyper-V image to the physical server
1. OOB reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download VMM-customized WinPE
5. Run generic command execution scripts and configure partitions
6. Download VHD & inject drivers
The host is then domain-joined, added to VMM management, & post-install scripts are run
Access Fibre Channel SAN data from a virtual machine
• Unmediated access to a storage area network (SAN)
• Hardware-based I/O path to the virtual hard disk stack
• Single Hyper-V host connected to different SANs
• Up to four virtual Fibre Channel adapters on a virtual machine
• Multipath I/O (MPIO) functionality
• Supports live migration
• Now managed by System Center Virtual Machine Manager 2012 R2
Rich Partner Ecosystem Adds Value through Network Integration
- Integration with software and hardware load balancers through hardware providers: F5 BIG-IP, Brocade ServerIron ADX, Citrix NetScaler, in-box Microsoft NLB
- VMM integrates with switch extensions to manage and deploy to Hyper-V hosts: Cisco Nexus 1000V, inMon sFlow, 5nine
- VMM integrates with in-box and partner gateways to allow VMs on virtualized networks to communicate externally: in-box, Iron Networks, F5, Huawei
What is in SCOM 2012
Device & Server
Easy to scale out
Application Monitoring (AVIcode)
Network Devices Supported for Discovery by Operations Manager:
Every 15 minutes, with offsite replication and tape
- Hyper-V over CSV: 900% backup performance improvement
- Protection – VM: uninterrupted data protection upon VM live migration
- DPM Backup to Azure: ability to take backup to the Azure service
- Dedup File System: efficient data protection of dedup file system volumes – efficient over the wire, efficient on DPM storage
- SQL 2012 “AlwaysOn” DB protection: DPM can now protect SQL 2012 “AlwaysOn” databases – standalone to standalone, cluster to standalone and vice versa
Flexible cloud choice, familiar technology, no lock-in.
Their own multi-tenant cloud that’s as easy as Azure.
Simple, automated operations.
More effective utilization of existing hardware assets.
Tenant choice and dynamic control.
Commodity and custom cloud offerings.
Integration with LOB systems.
In your datacenter
Why choose the Windows Azure Pack?

Enterprise-class
• Builds on a familiar foundation of Windows Server and System Center.
• Isolated virtual networks for multi-tenant workloads.
• Extensibility and integration.
• Windows Azure code running in your datacenter.
• Highly scalable virtualization and management platform.

Simple and cost-effective
• Simple service delivery for multi-tenant
• Out-of-box infrastructure and application service offerings.
• Standardized service provisioning using
• Automation platform.
• Advanced Windows Server 2012 features on standard hardware.

Open and interoperable
• Easy VM and Web application portability.
• Private, hosted and public cloud.
• Broad application platform support including .NET, node.js, PHP.
• OData REST API for portal-level integration.
• Service Bus for asynchronous distributed application integration.