VMware Professionals - App Management


  • Goal of the slide: Represent how System Center 2012 simplifies application provisioning for private clouds by enabling a standardized approach.
Talking points: <Click> Through service templates, System Center 2012 – Virtual Machine Manager offers you the ability to define standardized application blueprints, which can be used to automatically deploy application services to shared resource pools, thus simplifying application provisioning. Defining your application requirements with a repeatable construct like service templates makes provisioning faster and less error-prone than manually deploying the application.
Service templates provide the blueprint for the application service, including specifications for the hardware, operating system, and application packages. System Center 2012 supports multiple package types for .NET applications, including MS Deploy for the web tier (IIS), SAV for the application tier, and SQL DAC for the data tier.
Operationalizing service templates across your service-consumer and service-provider organizations will likely require active collaboration among the App Dev, App Ops, and DC Admin roles to discuss and standardize the initial set of hardware, OS, and app profiles that new applications can adhere to. It might be beneficial to take an incremental approach and test this capability before rolling it out across a broader set of applications. This process will likely require broad sponsorship across the LOB application IT and infrastructure IT organizations.
<Click> Once organizationally approved application blueprints are established and stored in the Virtual Machine Manager service template library, your application owners are ready to deploy applications on their own. They can go to the application owner self-service experience in System Center 2012 – App Controller, where they can access and select the service templates that they have been authorized for.
They can easily specify configuration requirements like application topology, scale-out rules, health thresholds, and upgrade rules in the service template and then kick-start a "one-click deployment." Before the application owner hits deploy, App Controller provides a compelling visualization of the holistic application service, including all the requested service tiers, the underlying virtual instance templates, and the storage, compute, and network resources. This enables application owners to "think services, not servers" by offering a "service-centric" approach to provisioning.
Virtual Machine Manager uses the service template specifications to build out the application tiers, including the various logical instances associated with each tier. In the real world, you are likely to encounter scaled-out (or multi-instance) web front ends and application tiers, but scaled-up (or single-instance) database tiers. Virtual Machine Manager uses the service template specifications to help ensure that the application is deployed to the appropriate virtualized resource pools.
  • Breakdown of how the service template comes together to deliver the application. (Infra and Fabric will cover the HW profile and OS profile content.)
Template
- Starting point for services and source of truth
- Specifies machine and connectivity requirements
- Deployed services are always linked to their templates
- Enables servicing of the deployed instances
Instance
- Groups of machines that work together
- Composed of machine definitions as well as applications
- Native application types: Web Applications (WebDeploy), Virtual Applications (Server App-V packages), and Database Applications (SQL DAC)
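The template/tier/application breakdown above can be sketched as a minimal data model. This is an illustrative Python sketch only; the class and field names are assumptions for the example, not the actual VMM object model.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationPackage:
    name: str
    package_type: str  # "WebDeploy", "ServerAppV", or "SQLDAC"

@dataclass
class Tier:
    name: str
    instance_count: int  # scaled-out tiers have a count > 1
    packages: list = field(default_factory=list)

@dataclass
class ServiceTemplate:
    name: str
    release: str
    tiers: list = field(default_factory=list)

# A typical three-tier service: scaled-out web and app tiers,
# single-instance (scaled-up) data tier.
template = ServiceTemplate("PetShop", "1.0", tiers=[
    Tier("Web", 3, [ApplicationPackage("Storefront", "WebDeploy")]),
    Tier("App", 2, [ApplicationPackage("OrderSvc", "ServerAppV")]),
    Tier("Data", 1, [ApplicationPackage("OrdersDb", "SQLDAC")]),
])
```

Because deployed services stay linked to a template like this, the template remains the single source of truth for every instance built from it.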
  • Deploying SQL Server within a service template is a key workload that we make very simple.
Starting with the deployment of SQL Server within a template, we are looking for consistent SQL configurations, with the ability to define the named instance and the product key to use.
We can set the configuration details for the SQL installation, from the media source locations to the security mode and network connectivity configurations. You can even include a SQL configuration file.
For the applications, we can configure the connections. This includes the DACPAC connections, the instance definition, the authentication mode, and what behaviors will be followed when upgrades or migrations are required.
And finally, we can capture the service account information: which accounts are used for each of the SQL Server services.
  • Virtual Machine Manager 2012 now includes a feature called Server App-V.
Server App-V provides the capability to capture server-based applications into isolated "bubbles" and allows us to deploy applications the same way, every time.
Capturing the application with App-V is called sequencing. You start with a blank server OS with just the Server App-V agent installed, and you run through an installation of the application. Server App-V captures all the files, registry settings, services, and other associated application resources that are installed, and packages them into a file.
Once the application is packaged, it can be included as part of a service template, deployed as part of a service, and dynamically configured at deployment time with instance-specific information.
We provide the ability to update the application through service templates: you simply edit the service template, save it as a new version number, and include the updated application resources; this can then be used to update the running services.
And we help make the applications highly available by providing the ability to update the underlying OS while retaining the Server App-V state.
  • Let's have a deeper look at the virtualization of applications with Server App-V, and the attributes of applications that are suitable to be captured into Server App-V "bubbles."
Server App-V supports Windows Server hosts, including the dynamic registration of services with the Service Control Manager, web applications running on IIS, and the various types of user accounts: LocalSystem, Network Service, and domain accounts.
We can capture and package most application resources as they are installed, including the application files, COM+ and DCOM functions, WMI providers, and local users and groups.
And we can capture all the application components, including things like registry settings and .NET components; we can also capture some Java apps. Stateful information is stored on the local disk and is retained when you upgrade an application or replace the underlying OS.
Additional notes on SAV:
OS support: Server App-V supports server OS platforms only. Both x86 (where applicable) and x64 versions of Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 are supported. All editions are supported with one exception: the Server Core option is supported for Windows Server 2008 R2 and Windows Server 2012 only.
Server App-V features for application virtualization support:
IIS: Server App-V supports applications that install web sites, virtual directories, and application pools. With Server App-V, you can easily virtualize applications that create these components on IIS 6.0, IIS 7.0, IIS 7.5, and IIS 8.0.
Windows services: Many server applications install Windows services. With Server App-V, you can sequence an application that creates Windows services. When a virtual package is deployed to a server, you will see the same services in the Windows Service Control Manager as you would with a native installation of the application.
COM/DCOM/COM+: The Server App-V Sequencer captures COM/DCOM/COM+ components created by the application installer. These components are registered at deployment time so that they can be consumed by other applications or processes. You can also see these components with tools such as dcomcnfg.
WMI: Many datacenter applications create WMI components such as WMI providers or classes during the application installation. With a Server App-V virtualized package, you won't miss any of these components when the package is deployed.
Local users and groups: Unlike desktop applications, it is common for datacenter applications to create local users or groups as part of the installation process. Many files also contain references to user or group security IDs (SIDs) to restrict access to certain users and groups. Server App-V is capable of capturing local users and groups created during sequencing of the application and recreating them at deployment time. Any references to the SIDs are also maintained automatically.
SSRS: Server App-V includes a special component to handle the virtualization of applications that install SQL Server Reporting Services as part of the installation process. Therefore, if your application uses SSRS, you can use Server App-V to virtualize it.
Server App-V does not support applications that install the following components:
Drivers: If your application installs drivers, Server App-V won't install the drivers on deployment machines. Some applications have drivers that are installable separately. If this is the case for your application, you can install the driver first and then sequence the application with the Server App-V Sequencer. Before you deploy the package, you should also install the driver on the deployment servers.
SharePoint: Server App-V does not support virtualization of SharePoint, or of an application that installs SharePoint as part of its installation process. If your application uses SharePoint, check whether it can connect to an external SharePoint server. If so, you can virtualize the application without installing SharePoint during the sequencing process.
SQL Server: Server App-V does not support SQL Server virtualization. If your application requires SQL Server, you will need to point the installer to a previously deployed SQL Server instance on another machine at sequencing time, and update the deployment configuration information to point to an appropriate (again, previously deployed) instance at deployment time.
It is always difficult to draw a line between which applications can and cannot be virtualized. The Server App-V Sequencer is designed to report any issues encountered during the sequencing process. Therefore, after sequencing finishes, it's always a good idea to check the Server App-V Sequencer report and verify that your package is functional with the Server App-V cmdlets.
  • Use the ribbon for contextual actions within the Service Designer. Use the designer canvas to build your service template from virtual machine templates, logical networks, and load balancers. Set service-related properties such as cost center, description, and release number.
The template is a starting point. Author the template in the new Service Designer; it defines the machines and their connectivity: tiers, hardware, logical networks, OS, apps, load balancer templates, etc. Deployed services are always linked to their templates. Typically, information like hosts or load balancers is not available while creating the template.
What customization you can do within a VM template:
- Windows Server 2008 R2 roles and features
- Arbitrary script execution and payload delivery
- Multiple entry points (e.g., prior to any application install operation, or after a specific application install operation)
- "First-class" application deployment: Web Deploy, Server App-V, SQL data-tier applications
- Configurable service settings: defer setting a value until deployment time using the @Variable Name@ nomenclature (e.g., @SQL User@)
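The @Variable Name@ deferral described above can be illustrated with a small substitution routine. This is a conceptual Python sketch of the idea, not VMM's actual implementation; the function name and behavior for unresolved placeholders are assumptions for the example.

```python
import re

def resolve_settings(text, values):
    """Replace @Name@ placeholders with deployment-time values.

    Unresolved placeholders are left intact so a deployment engine
    could flag them before provisioning starts.
    """
    def sub(match):
        name = match.group(1)
        return str(values.get(name, match.group(0)))
    return re.sub(r"@([^@]+)@", sub, text)

# Template author defers these; the deployer supplies them later.
conn = "Server=@SQL Server@;User Id=@SQL User@;"
resolved = resolve_settings(conn, {"SQL Server": "sqlprod01",
                                   "SQL User": "svc_app"})
# resolved == "Server=sqlprod01;User Id=svc_app;"
```

The same template can then be fed different value sets for development, staging, and production, which is the point of deferring settings to deployment time.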
  • The preview pane shows a view of the service deployment. Settings allow you to set deployment-specific variables. Use the ribbon for the deploy activity or to check deployment ratings.
This prepares the template for deployment: specify OS settings (computer name, admin password, etc.) and specify configurable service setting values (e.g., a SQL connection string or script parameters). This allows usage of the same template in different environments: development, staging, production, etc.
  • Perform operations at the service level, the tier level, or on an individual VM. View specific service, tier, or application settings.
  • The deployed service is viewed as a Distributed Application in Operations Manager and as a Business Service in Service Manager. It is monitored at the tier level and viewed at the virtual machine level.
  • Full Animation and items grouped
  • With the private cloud, you want to ensure that the correct people have access to the resources that you control. To accomplish this, we have created access control capabilities to give you fine-grained control.
The Administrator and Delegated Administrator have full control over the underlying infrastructure and all of the fabric. While the Administrator has access to the entire VMM environment, the Delegated Administrator has control over only the delegated host groups assigned.
The Self-Service User has access to just clouds, and there you can set revocable actions in a quota-controlled environment. This gives you the ability to specify what actions these users can perform and how much of the cloud's resources they can consume.
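The quota-controlled self-service model above can be sketched as a simple check. This is a hypothetical Python model of the roles and quotas, not the VMM authorization engine; the role structure and field names are assumptions for the example.

```python
def can_deploy(role, request):
    """Self-service users are scoped to clouds and constrained by quota;
    administrators are not (simplified model of the roles above)."""
    if role["type"] in ("Administrator", "DelegatedAdministrator"):
        return True
    quota, used = role["quota"], role["used"]
    # Every quota dimension must stay within its limit after the request.
    return all(used[k] + request.get(k, 0) <= quota[k] for k in quota)

user = {"type": "SelfServiceUser",
        "quota": {"vms": 10, "memory_gb": 64},
        "used": {"vms": 8, "memory_gb": 40}}

can_deploy(user, {"vms": 2, "memory_gb": 16})  # within quota
can_deploy(user, {"vms": 3})                   # exceeds the VM quota
```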
  • Talk track: granular and revocable. (This isn't the whole list, just some… Integrate into slide.)
An application owner authors the service template and then shares that template with his team to deploy the application.
Shareable objects: profiles (hardware, guest OS, application, SQL), templates (VM, service), virtual machines, and services.
  • Controlled empowerment is about the service provider publishing requests that consumers can access, with the user then able to initiate service when they want it, on their terms. It helps express IT requests in business language. Quite often, things are lost in translation between what IT is asking for and what the user wants. This allows us to start bringing the two together and deliver a consistent experience: users can make requests, while IT, or the IT service provider, gets the information they need to fulfill each request, in a consistent manner, each time, every time. The key thing is that this allows the consumer to choose what level of service, and cost, they want. So you might offer, say, silver, gold, and bronze levels of service, specifying the time frames and the cost structure for each option. It gives users the empowerment to choose what they want based on the level of service they are after and the cost associated with it. You give them the power to choose, as opposed to users, quite often, asking for everything but expecting to pay nothing. And we can standardize what they experience.
  • A service offering is a type of work item within the CMDB that identifies and classifies a standard IT service. A service offering will contain one or more request offerings, and it provides consistent delivery of service-related information. Within a service offering, you will have knowledge articles associated with that service, service level agreement information about the response and fulfillment time frames you can expect, and also cost- and chargeback-related information.
  • Role-based access is really about giving users the ability to see what they are allowed to see based on their role, and this is dynamic within the service catalog. It is based on Service Manager groups, which are mapped back to Active Directory. So whatever your group allocation is inside Service Manager (again, mapped back to Active Directory permissions), you will see the associated request offerings within the service catalog.
  • The last one is something I'm really excited about: the simplified portal. With System Center 2012, specifically Service Manager 2012 within the new suite, we reintroduced and rebuilt a brand-new portal built on Silverlight web parts hosted in SharePoint Foundation 2010 or higher. This allows us to customize it: you can tailor the portal using standard out-of-the-box SharePoint administration tools, and you can extend it using custom web parts. The main thing is that the look and feel is customizable, but all of the forms in here are dynamically generated based on the request and service offerings we configure, which I talked about previously. So when we configure a new request offering in a service offering, Service Manager picks it up and presents it for us in a very easy-to-navigate format.
  • Now with Service Manager 2012 and the new System Center 2012 suite, we are bringing some out-of-the-box service and request offerings to our end users. These contain request capabilities for requesting private cloud capacity and then virtual machines within private clouds.
We provide best-practice knowledge and automation that gets embedded into the service catalog, which consumers can take advantage of very quickly to maximize their time to value. We call this the System Center Cloud Services Process Pack. Internally, this was known as project Andy; if you are familiar with that name, it is now called the System Center Cloud Services Process Pack. This replaces SSP 2.0.
  • Enabling application self-service requires an organization to think about how they wish to do this: what is delegated, to whom, what policies and procedures surround it, and so on.
Let's take a look at three key capabilities that System Center 2012 delivers.
Delegation with control: Delegating out to application owners can be a daunting task for IT, and it represents the intersection of two worlds, the datacenter admin and the application owner. So we introduce the concept of "delegation WITH control." IT can delegate access to resources such as service templates, but enforce control through quotas and any change procedures and release management requirements.
Empowering application owners: Based on the user's identity and the resources delegated, they can perform the tasks and duties associated with being an application owner. They can also manage the resources related to the application.
Single management point: Through App Controller, System Center 2012 provides the ability to manage both private cloud services and Windows Azure applications within a single web console. Users can also see job status and task progress, and auditing is provided by capturing all actions.
  • For managing the service, we leverage the template model, where we can update elements in the template and apply them to the deployed services. The template is the "source of truth" for the service, and as we make modifications and publish those changes, we can see which services need to be updated. We use the concept of "upgrade domains" to provide the ability to update the tiers while helping to keep the service available.
We have two types of updates: in-place updates and image-based updates.
In-place updates: Update the application or virtual machine specifications in a way that can be applied to the existing version of the OS, updating the service without replacing the OS image.
Image-based updates: Updates to the OS are applied by replacing the OS underneath the application. With this, the process is to lift up the application, saving its state, replace the OS, drop the application back onto the updated system, and then restore the state.
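The image-based update flow above (lift the application, save state, swap the OS, restore) can be sketched as a short Python routine. This is an illustrative model only; the dictionary shape and function name are assumptions, not a real VMM operation.

```python
def image_based_update(vm, new_os_image):
    """Illustrative flow for an image-based service update:
    detach the application and its state, swap the OS image,
    then reattach the application and restore the saved state."""
    saved_app = vm.pop("application")    # lift up the application
    saved_state = vm.pop("state")        # save its state
    vm["os_image"] = new_os_image        # replace the OS underneath
    vm["application"] = saved_app        # drop the application back
    vm["state"] = saved_state            # restore the state
    return vm

vm = {"os_image": "ws2008r2-v1",
      "application": "OrderSvc",
      "state": {"orders": 42}}
updated = image_based_update(vm, "ws2008r2-v2")
```

The key property the sketch demonstrates is that the application and its state survive unchanged while only the OS image is replaced.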
  • [Click] The template for the VM is the "source of truth" for the VM.
[Click] We can now deploy some services off of that template.
[Click] We want to make updates to the application and apply them to the template. In this example, we are making updates to the application running in the middle tier.
[Click] After updating the template, we can "set" the template, which allows us to correlate the services that used the older template. Once the template is set, the service moves into a "Pending Service Update" mode.
[Click] At this point, you can apply the changes, and the service is now running with the updated application.
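The set-template / pending-update lifecycle in the click sequence above can be modeled in a few lines of Python. This is a conceptual sketch of the state transitions, not the VMM servicing engine; the class and status strings mirror the slide, but the function names are assumptions.

```python
class Service:
    def __init__(self, template_version):
        self.template_version = template_version
        self.status = "Running"

def set_template(services, new_version):
    """'Setting' the template marks every service built from an
    older version as pending an update."""
    for svc in services:
        if svc.template_version != new_version:
            svc.status = "Pending Service Update"

def apply_update(svc, new_version):
    """Applying the changes moves the service back to Running
    on the new template version."""
    svc.template_version = new_version
    svc.status = "Running"

services = [Service("1.0"), Service("1.0")]
set_template(services, "1.1")        # both services now pending
apply_update(services[0], "1.1")     # first service updated
```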
  • Baselines can be assigned to: hosts, host groups, and host clusters; and VMM server roles (library server, PXE server, update server, VMM server).
Baselines can NOT be assigned to: VMs (running or stored), or VHDs in the library.
Remediation is an automated workflow:
- Put a node in maintenance mode (VMM maintenance mode can trigger Operations Manager maintenance mode)
- Evacuate the node using live migration (the user can override this to save state on the node's VMs instead)
- Install missing updates based on the assigned baselines
- Take the node out of maintenance mode
- Go to the next node and repeat
Supports Windows Server 2008 as well as Windows Server 2008 R2 clusters. Scriptable using PowerShell.
  • Transition: Another way that System Center 2012 enables you to meet your fabric SLA is by ensuring your virtual resources are up to date.
Traditional update engines like System Center Configuration Manager aren't cluster-aware. They're likely to push out patches to all hosts simultaneously, disrupting cluster availability.
VMM 2012 can integrate with a dedicated 64-bit Windows Server Update Services (WSUS) 3.0 SP2 server and will orchestrate cluster patching by migrating VMs to other hosts in the cluster, patching the node, and rebooting if required. It will repeat the process on the next host until the whole cluster is up to date.
You can define update baselines with lists of required updates. VMM will then scan hosts to determine compliance, and finally apply patches to bring them current. You will have the option to exempt particular hosts if a patch turns out to cause instability.
Talking points: The feature requires a pre-existing, dedicated, root WSUS 3.0 SP2 64-bit server. If the WSUS server is remote, the WSUS console is required on the VMM server. It supports WSUS in SSL mode.
A scan is conducted to see whether each server is compliant with its assigned baseline. VMM leverages the Windows Update Agent (WUA) for applicability and compliance. The scan is on demand and automatable using PowerShell. VMM then makes the server compliant by installing missing updates. Update installation progress can be tracked in the VMM console, and remediation is on demand and automatable using PowerShell.
Virtual Machine Manager provides a feature by which you can manage updates for your virtual machine hosts, library servers, PXE servers, the Windows Server Update Services (WSUS) server, and the VMM server itself, all in the VMM console.
Enable the feature: In VMM, use the Add WSUS Server wizard to select and add the WSUS server, and then synchronize with the latest updates. VMM gets a catalog of updates from the update server. It points the fabric servers at the correct update server, i.e., configures the WUA agent on each fabric server.
Create a baseline: After you enable update management in VMM, you are ready to prepare for patching by configuring update baselines. An update baseline contains a set of required updates; it is a logical grouping of updates used to assess compliance. VMM provides two sample baselines, for security and critical updates. You can assign a baseline to hosts, host groups, and host clusters, plus VMM server roles (library server, PXE server, update server, and VMM server). You cannot assign it to VMs (running or stored) or VHDs in the library.
Scan servers: During a compliance scan, computers that are assigned to a baseline are graded for compliance with their assigned baselines. After a computer is found noncompliant, an administrator brings it into compliance through update remediation.
<click> Remediate servers: If computers are found to be noncompliant, remediation can be performed. When you perform update remediation on a host cluster, VMM orchestrates the updates, in turn placing each cluster node in maintenance mode, migrating virtual machines off the host by using intelligent placement, and then installing the updates. If the cluster supports live migration of Windows Server-based virtual machines, live migration is used. If the cluster does not support live migration, VMM saves state for the virtual machines and does not migrate them.
Notes:
- Requires a WSUS 3.0 SP2 64-bit server
- Requires the WSUS console on the VMM server if the WSUS server is remote
- Supports WSUS in SSL mode
- Share a WSUS root server between SCCM and SCVMM (NEW)
- Utilize an autonomous SCCM WSUS DSS server at the integration point (NEW)
- Enable centralized reporting via SCCM reporting (NEW)
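The rolling, cluster-aware remediation described above (one node at a time: maintenance mode, evacuate, patch, resume) can be sketched as a simple loop. This is an illustrative Python model of the orchestration order only; in the product this is done by VMM and is automatable with its PowerShell cmdlets.

```python
def remediate_cluster(nodes, baseline):
    """Patch a cluster one node at a time so the cluster as a
    whole stays available throughout the remediation."""
    log = []
    for node in nodes:
        log.append(f"{node}: enter maintenance mode")
        log.append(f"{node}: live-migrate VMs to other nodes")
        for update in baseline:
            log.append(f"{node}: install {update}")
        log.append(f"{node}: exit maintenance mode")
    return log

# Hypothetical two-node cluster and a one-update baseline.
steps = remediate_cluster(["host1", "host2"], ["KB2345678"])
```

Because each node is fully evacuated and returned to service before the next one starts, at most one node is ever out of the cluster, which is the property a non-cluster-aware engine lacks.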
  • System Center 2012 provides the complete application monitoring solution!
Operations Manager has done infrastructure monitoring for many years, and we have an incredibly strong base on which to build. This is very important: after all, you can't know whether an application is performing correctly without understanding whether the underlying platform is also performing as expected.
Application monitoring builds on infrastructure monitoring in three ways:
Server-side: monitoring at the back end of what is being executed by the application and delivered to the end user.
Client-side: understanding and capturing the end user's experience with the application, including how long things take to load and execute, network latency, and any client-side scripting exceptions.
Synthetic transactions: pre-recorded testing paths through the application. You record the steps you want to complete, and then these are run on a regular schedule to ensure that the application is functional and available.
  • What is discovered:
- Connectivity: server to switch, switch to switch, VLAN membership, HSRP groups, stitching of switch ports to server NICs
- Key components of a device: ports/interfaces, processor, memory
What is monitored:
- Port/interface: up/down (operational and admin status), volumes of inbound/outbound traffic, % utilization, discards, drops, errors
- Processor: % utilization
- Memory: in-depth memory counters (Cisco only), free memory
- Connection health: based on looking at both ends of a connection
- VLAN health: based on the health state of the switches in the VLAN
- HSRP group: based on the health state of the individual HSRP endpoints
What is visualized: network summary, network node, network interface, vicinity
What is reported: memory utilization, processor utilization, port traffic volume, port error analysis, port packet analysis
  • The first thing we want to take a look at is opening up this conversation. At the start of the "Configure and Deploy" and "IT as a Service" slides, we look at the application owner and the datacenter admin, and how those two personas have quite different views of the world in terms of what should and shouldn't be, and how things are run and come together. So here we have an end user or application owner who's saying, "Well, my application is running slowly!" The network, though, might look fine: we're green across the board, everything is OK, there's no problem whatsoever from the network guys. We have the developers saying, "The code passed all testing": there are no bugs, there's no crashing, everything is fine. And then we have the infrastructure monitoring guys saying, "The servers are all running fine": there's no problem here whatsoever. However, the end user is still getting poor performance. This is actually a fairly common scenario. The way this comes together is that server-side availability monitoring shows the application is functioning just fine. This is where we get into the difference between availability monitoring and performance monitoring.
From an availability point of view, it's highly available, we have green across the board, but that doesn't really show the true state, which is that from the client side it is running slowly and there are some issues happening. It's highly available, but performing slowly. What we get with application performance monitoring is the ability to understand exactly where the issue is and what piece of code is causing the problem, and then the ability to share that information out. So let's take a look at what this looks like.
  • Getting deep insight into the application's performance is the key bit. When we look at server-side monitoring (we touch on this in the "Configure and Deploy" presentation), what we're really doing is collecting data from the .NET calls: we're reading the application methods, all the variables and parameters, the types of calls being made, the web methods, the internal execution, the SQL commands, and so forth; all of that information comes together.
On the client side, it shows the page load times and where the time was spent: was it spent loading images, CSS, etc., or was it a JavaScript exception? All that data is collected using a JavaScript injection on the page. And what we get out of all of that is code-execution-level information, where we can drill in and see exactly where the time was spent. That means we can get rich visualization and a breakdown of how long a transaction took, what the threshold was, and what was actually causing the problem.
  • From a response point of view, we can see in this screenshot that we have a threshold of 500 ms; however, we've gone well over that and into the 1300 ms range. If we drill into this further, we can see that we expected the end-user experience to be at 500 ms, but the end user was actually having a much longer experience than that, and then we can see what the impact of that end-user experience actually is. Now, when we start to think about System Center 2012 all up, we start to bring in the other components that come together around all of this information. If you've configured Service Manager in the environment, the Operations Manager to Service Manager connector means that whenever this alert is raised in Operations Manager, it will also flow to Service Manager and raise an incident. That incident can then go through all of the different escalation paths (the developers, network, and infrastructure guys), we can capture all of that knowledge, and perhaps we have some automated remediation going on; and when you resolve the issue on the Service Manager side, it will also close the alert on the Operations Manager side. This is a key aspect of System Center 2012: all of the different components come together and light up these very rich scenarios, where you've got Service Manager, Orchestrator, and Operations Manager all coming together to do some very impactful application performance management.
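The threshold breach discussed above (a 500 ms target versus a 1300 ms observed response) can be sketched as a tiny check that produces an alert record. This is an illustrative Python sketch of the concept, not the Operations Manager APM engine or its alert schema.

```python
def check_response(observed_ms, threshold_ms=500):
    """Return an alert record when a transaction exceeds its
    response-time threshold; return None when it is within bounds."""
    if observed_ms <= threshold_ms:
        return None
    return {
        "severity": "warning",
        "observed_ms": observed_ms,
        "threshold_ms": threshold_ms,
        # How far past the threshold the transaction ran, as a percentage.
        "overage_pct": round(100 * (observed_ms - threshold_ms) / threshold_ms),
    }

alert = check_response(1300)  # the 1300 ms example from the screenshot
```

In the product, a record like this would be the alert that the Operations Manager to Service Manager connector turns into an incident.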
  • You need to be able to report on what's going on, and Operations Manager Application Advisor provides rich reporting and trending information about application performance, giving you quick visibility into the top issues. In the screenshot we've got five top issues, and 76% of all of the resource time is spent in the search service, so clearly there is a problem inside that component. To boil it down to a nutshell conversation: this is where the application owner needs to spend their time. If 76% of all performance issues are about one particular component, that is probably where you want the developers to do some work. Part of understanding that is knowing what runs alongside it, so we can see the relationships between the application components: this menu search service is loading default.aspx, for example, but we can also see how the things around it get impacted (the menus, the categories, and so on). When you get a bottleneck in an application, there's a lot more under the hood, and understanding how it all comes together is crucial to the reporting, the trending, and delivering on your SLAs as an application owner.
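The "top issues" view is essentially an aggregation of time spent per component. A minimal sketch of that idea, with made-up numbers chosen so the search service comes out at 76%, matching the slide:

```python
from collections import Counter

def top_issues(samples):
    """Aggregate time per component and return (component, total_ms, % share)."""
    totals = Counter()
    for component, elapsed_ms in samples:
        totals[component] += elapsed_ms
    grand = sum(totals.values())
    return [(c, t, round(100 * t / grand)) for c, t in totals.most_common()]

# Invented samples: search dominates, as in the Application Advisor screenshot.
report = top_issues([("search", 760), ("menus", 140), ("categories", 100)])
```

The first row of the report is where the application owner should send the developers.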
  • When we look at the automation concepts, we are looking at four areas. First, activities: activities are intelligent tasks that perform a defined action. For example, we might run a .NET script, check a schedule, invoke a web service, query a database, or send an email. A Runbook is a collection of activities pulled together in a workflow: a systems-level workflow that executes a series of linked automation activities. In the example here, we create an incident record inside Service Manager, Remedy, or HP OpenView Service Desk; we might then kick off a workflow that checkpoints a server, puts it into maintenance mode, shuts down the virtual machine, and writes the update back to the incident record. It's a simple Runbook, but it lets us take various activities, link them together, and pass data between them. That brings us to what we call the databus. The databus is used to publish and consume information as a Runbook executes. If we have an activity where we need to get the ID of a server, we can pass information from one activity to the next very easily and build on it: we may start with one piece of information at the beginning of the workflow, collect more as we go, and by the end have a large amount of information available across the whole workflow. With these activities, we provide out-of-the-box standard activities that deliver various capabilities for users: file-level utilities, interaction capabilities, running .NET scripts, running PowerShell commands, connecting to systems, and querying databases. And as I mentioned earlier, we ship a significant number of integration packs out of the box.
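The Runbook-plus-databus idea above can be sketched as a chain of activity functions sharing one mutable bag of data. This is an illustrative model only; the activity names and the INC-001 identifier are invented, and the real product does this through Orchestrator activities, not Python:

```python
# Each "activity" reads from and writes to the shared databus (a dict),
# mirroring how Orchestrator activities publish and consume data.
def create_incident(bus):
    bus["incident_id"] = "INC-001"  # would call Service Manager / Remedy here
    return bus

def maintenance_mode(bus):
    bus["maintenance"] = True       # checkpoint + maintenance mode step
    return bus

def shutdown_vm(bus):
    bus["vm_state"] = "stopped"     # shut down the virtual machine
    return bus

def update_incident(bus):
    # Later activities can use everything earlier activities published.
    bus["incident_note"] = f"VM {bus['vm']} is {bus['vm_state']}"
    return bus

def run_runbook(activities, bus):
    """Execute linked activities in order, threading the databus through."""
    for activity in activities:
        bus = activity(bus)
    return bus

result = run_runbook(
    [create_incident, maintenance_mode, shutdown_vm, update_incident],
    {"vm": "web01"},
)
```

Notice how the final activity uses data published by the first and third ones: that accumulation is exactly what the databus provides.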
  • In creating those Runbooks, we let you take those standardized activities and assemble them much like you are building a Visio diagram. It's easy authoring and debugging: a drag-and-drop experience for creating your Runbook, or nested Runbooks with looping and branching, to automate your systems-level activities and the decision process within your Runbook. When you reach a point where you need a decision (if a certain condition is met, go one direction; if not, go another), you can automate that into your Orchestrator Runbook. I mentioned the databus: it abstracts developer-level complexity away from the Runbook author and enables a hub-and-spoke integration model where we can pass information back and forth between different solutions. And as I mentioned, we ship more than twenty different Integration Packs for System Center, Microsoft, and third-party management tools.
  • Some of those standard activities support things like delivering private cloud offerings: run system commands, perform scheduled activities, send email notifications (or other notification types if needed), manipulate text files, and manage workflows.
  • A simple collection of the Microsoft Runbook Integration Packs across the System Center suite as well as Microsoft tools such as Active Directory.
  • And if that's not enough, you can build and distribute your own integrations with what we call the Quick Integration Kit (QIK): a command-line interface and software development kit for building your own integration packs. They are easy to build and integrate, and the kit lets you use either .NET or Java IDEs to compile DLL or JAR resource files to deliver your own integrations.
  • So at the end of the day, we provide a lot, but we also give you the ability to build your own if you need to. Most of those integrations are delivered through XML files that expose Orchestrator runtime functionality and data. This also helps with reporting, serves as our external interface into System Center, and, because it is standards-based, interoperates with other tool sets.
  • So when you think about integration, look at it this way: integration starts with Service Manager and Orchestrator, the process and system integration capabilities. We have a bi-directional connector between the two, so automation activities from Orchestrator get pulled into the Service Manager CMDB, and we can reference them as part of the service and request offerings and the activities needed to fulfill them. We then have the other tools within the System Center suite, Active Directory, notification capabilities via Exchange, third-party management tools, and line-of-business applications such as Exchange, from which we pull in configuration items and automation data to populate the CMDB (users and admins from Exchange, for example, plus the third-party management tools shown there). Once we know about the pieces of our infrastructure, we can issue automation commands to the other System Center products, third-party tools, and line-of-business applications, primarily through Orchestrator and in some cases through Service Manager via Operations Manager, to actually go do something: kick off an automation activity such as starting or creating a virtual machine, creating private cloud capacity, creating a new user inside a line-of-business application, or creating new processing capacity through a third-party management tool that reaches out to, say, an SAP environment. In addition, we have notification and reporting through Exchange, again via Service Manager or Orchestrator, to issue outbound notifications or accept an approval via an inbound email, plus dashboards and reports that give visibility into how IT is performing, what the services are, and how we are adhering to the service levels we have established for the IT services we offer.
  • We start with Service Manager as the repository for our CMDB and Orchestrator as our automation engine. A bi-directional connector allows those automation activities to come into Service Manager, and allows Service Manager to issue and execute those automation workflows within Orchestrator. We then have our other external repositories, whether System Center-related, line-of-business applications such as Microsoft Exchange (user and admin, enabled with SP1), or third-party management tools, with inbound connectors to pull configuration items and automation data into the CMDB and reconcile them, so that we're looking at a single record for each piece of our infrastructure even though some pieces came from Virtual Machine Manager, Operations Manager, Configuration Manager, or even Active Directory. Once we have that reconciled view of our infrastructure, we can act on it. Within the System Center suite we have a bi-directional interface through Orchestrator to issue automation commands to System Center products, third-party tools, or line-of-business applications (if you build your own integration pack using the SDK) to drive automation within those tools: to respond to errors, and to deliver and manage changes in your infrastructure. With SP1, we've added integration packs to enable Azure cloud management. Lastly, as part of all this work we have to do two things: keep people aware of what's happening, and be able to report on it. We provide inbound and outbound notification capability through Service Manager and Orchestrator to Exchange, as well as the Service Manager data warehouse for dashboarding and reporting.
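The reconciliation step above (many sources, one CMDB record per item) can be sketched as a merge keyed on a common identity. A hypothetical model, not Service Manager code; the source names and fields are invented for illustration:

```python
def reconcile(records):
    """Merge configuration items from many sources into one record per host.

    Each input is (source_name, item_dict); items are keyed by hostname,
    and the first source to report a field wins (a simple reconciliation
    policy chosen just for this sketch).
    """
    cmdb = {}
    for source, item in records:
        key = item["hostname"]
        entry = cmdb.setdefault(key, {"hostname": key, "sources": []})
        entry["sources"].append(source)          # provenance trail
        for field, value in item.items():
            entry.setdefault(field, value)       # first writer wins
    return cmdb

# Three tools each know something different about the same server.
cmdb = reconcile([
    ("VMM", {"hostname": "web01", "vm_host": "hv-03"}),
    ("ConfigMgr", {"hostname": "web01", "os": "Windows Server 2012"}),
    ("AD", {"hostname": "web01", "ou": "Servers"}),
])
```

Three inbound records collapse into one reconciled CMDB entry, which is the "single record for each piece of infrastructure" the notes describe.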
  • Let's look at how we manage applications across multiple clouds. This conversation comes up a lot: customers have some on-premises applications, they're looking at Windows Azure applications, and they need to understand how to actually do this. We have two constructs here. On the left-hand side is the private cloud, a Virtual Machine Manager cloud, with multiple hypervisors (we support Hyper-V, VMware, and XenServer) and a service template model on top to deliver our applications. On the right-hand side is Windows Azure and/or a service provider, which has a package-and-configuration model. What we want to be able to do is deploy, monitor, and manage applications regardless of where they are running, and the way we do that is with App Controller. App Controller lets us manage both a Virtual Machine Manager service on the left and a Windows Azure application on the right from within a single console. We can see what's running and how many instances are running, perform actions against these services, and deploy new applications to both the private cloud and the public cloud.
  • The System Center 2012 Service Pack 1 (SP1) version of Operations Manager can show you different perspectives of application health in one place: the 360 .NET Application Monitoring Dashboards. These dashboards display information from Global Service Monitor, .NET Application Performance Monitoring, and Web Application Availability Monitoring to provide a summary of health and key metrics for three-tier applications in a single view. They show where an application is unhealthy and provide a launch point for detail dashboards that highlight component-level issues. The dashboards draw on powerful monitoring tools: .NET Application Performance Monitoring looks deep into the application for details that help you pinpoint solutions from server- and client-side perspectives; Web Application Availability Monitoring in Operations Manager monitors internal synthetic transactions; and Global Service Monitor monitors the availability of applications from outside locations, measuring availability from where the user is.
  • As an example of comprehensive monitoring and deep application insight, System Center 2012 SP1 enables a rich end-to-end scenario: from creation of web tests, to external "outside-in" monitoring, to integrated on-premises monitoring, to rich developer diagnostics when an exception is detected. On premises, developers work in Visual Studio 2012, which lets them create web tests to validate that their applications are functioning correctly. A subset of these tests is imported into Operations Manager, which is then used to configure the Global Service Monitor service. Global Service Monitor is a service hosted in Windows Azure points of presence around the globe that lets organizations assess the real-world performance and availability of their applications. Operations Manager tells Global Service Monitor which application endpoints to invoke with the web tests, along with a schedule. Once configured, Global Service Monitor calls the production web application on that schedule and returns the results to the on-premises Operations Manager for display alongside the other monitoring data gathered within the organization's private network. This ensures the broadest set of data for assessing the health of the application and deciding when intervention is necessary. When Global Service Monitor returns an exception, or response times that trigger an alert in Operations Manager, Operations Manager can schedule a work item to be added to the developers' queue. This level of integration helps accelerate issue resolution and achieve SLAs.
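The outside-in loop above (run scheduled web tests against endpoints, report health back) can be sketched with a pluggable probe. Everything here is hypothetical: the URLs, the stub probe, and the 500 ms budget are invented, and the probe is stubbed so the sketch runs without network access:

```python
def run_web_tests(endpoints, probe):
    """Run each synthetic test via `probe` and collect results the way an
    outside-in monitor might report them back to the on-premises side."""
    results = []
    for url, budget_ms in endpoints:
        status, elapsed_ms = probe(url)
        results.append({
            "url": url,
            # Healthy only if the call succeeded AND met its time budget.
            "healthy": status == 200 and elapsed_ms <= budget_ms,
            "elapsed_ms": elapsed_ms,
        })
    return results

# Stub probe standing in for a real HTTP call from a point of presence.
def fake_probe(url):
    return (200, 120) if "shop" in url else (503, 0)

results = run_web_tests(
    [("https://shop.example.com", 500), ("https://api.example.com", 500)],
    fake_probe,
)
```

In the product, an unhealthy result is what raises the Operations Manager alert and, via the connector, the developer work item.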
  • Here you see the Global Service Monitor report screen in Operations Manager showing: where the tests are being issued from; which point of presence is detecting an out-of-order state; a list of test results; response times; and the alerts generated.
  • On this slide we talk about how the same product can back up and recover a wide range of Microsoft products. A few data points on how we have optimized our backup and recovery solution. For SQL: we support any point-in-time recovery, we provide self-service restores, up to 2,000 SQL databases can be protected by a single DPM server, and the change tracking is very storage-efficient. For Exchange: we provide protection against total loss due to logical corruption, and we can preserve data for point-in-time restores. For SharePoint: you can protect at the farm level but do granular-level recovery (recovery of a document now takes only a few seconds), and new databases are automatically detected and protected. For Hyper-V: you can do item-level recovery of VMs, back up the entire host, and seamlessly protect VMs through live migration. We can back up data from all these sources as often as every 15 minutes to tape or disk, and you can keep the backup in an offsite location, again on tape, on disk, or in the cloud.
  • What exactly is DPM's Hyper-V protection for CSV in R2? We have efficient express full backups, which means there is no performance penalty on either the owner node or non-owner nodes. We also support parallel backups, as I mentioned, which means you can have multiple snapshots and multiple backups happening at the same time over a CSV volume. The time taken to execute these backups has improved by 900%, and that is something we delivered as part of SP1. Live migration gives uninterrupted data protection within the cluster, which is a key feature given that migration is a key scenario in most cluster environments. So how does it all happen? DPM provides a filter that tracks changes on a VM; the changes are noted in a bitmap, and at the point in time when we take a snapshot, we transfer only the delta (the changed blocks) from the VM to the DPM server. That means that during a backup, or during creation of a snapshot, we read and transfer only the content changed since the previous snapshot, so it's very efficient and ensures faster backups, which is also a key differentiator for SP1.
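The bitmap-based change tracking described above can be modeled in miniature. This is a conceptual sketch, not DPM's filter driver; block contents and counts are invented:

```python
def track_changes(bitmap, written_blocks):
    """Filter driver analogue: mark each written block in the bitmap."""
    for block in written_blocks:
        bitmap.add(block)

def express_full_backup(disk, bitmap, replica):
    """Transfer only blocks flagged since the last snapshot, then reset."""
    transferred = sorted(bitmap)
    for block in transferred:
        replica[block] = disk[block]   # copy just the delta blocks
    bitmap.clear()                     # start tracking for the next snapshot
    return transferred

disk = {i: f"data-{i}" for i in range(8)}   # 8-block toy disk
replica = dict(disk)                        # initial full replica on DPM
bitmap = set()
track_changes(bitmap, [2, 5])               # VM writes between snapshots
disk[2], disk[5] = "new-2", "new-5"
sent = express_full_backup(disk, bitmap, replica)
```

Only two of the eight blocks cross the wire, which is the efficiency claim the notes are making: delta-only transfer driven by the bitmap.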
  • Coming to live migration, we support any migration, whether inter- or intra-cluster: standalone to standalone, cluster to cluster, or the hybrid scenario from cluster to standalone. We also provide security options whereby VMs can only be recovered to the original host where the VM is currently running. The key question here is: how does DPM take care of these mobility scenarios? The simple answer is that DPM interacts with VMM to find the host where the VM landed after migration, then initiates a backup job on the new host if needed. To establish this communication, DPM and VMM need to be configured with each other, which is a simple two-step process: configure the DPM machine account as an admin on the VMM server, then install the VMM console on the DPM server.
  • Picking up on the scale improvements we've made as part of SP1, these are the current supported numbers: a single DPM server can support up to 800 VMs of 100 GB each. We support clusters of any size with a scale-out model, using multiple DPM servers when the VM count exceeds 800. We have also added a feature to exclude page files, to make backup storage more efficient. How is that accomplished? You move the VM's page file to a separate VHD (a scratch VHD), and DPM will then back up that VHD only as part of the initial replication, not as part of the subsequent delta replications.
  • The question then becomes: what is the scale-out story for DPM, given that Windows Server 2012 supports 4,000 VMs on a 64-node cluster? DPM has a scale-out model whereby we can protect a 64-node cluster with 4,000 VMs using 5 DPM servers; in other words, each DPM server protects 800 VMs, and the 5 servers together protect the 4,000 VMs. Multiple DPM servers can protect the same cluster, so there is no requirement to assign one DPM server to one cluster; as long as the scale numbers are evenly distributed across the DPM servers, we are good.
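The scale-out arithmetic above (4,000 VMs, 800 per DPM server, so 5 servers) is just a ceiling division plus an even spread. A small sketch of that planning calculation, using the limits quoted in the notes:

```python
def assign_vms(vm_count, per_server_limit=800):
    """Compute how many DPM servers a cluster needs at the quoted
    800-VMs-per-server limit, and spread the VMs evenly (round-robin)."""
    servers = -(-vm_count // per_server_limit)  # ceiling division
    assignment = [0] * servers
    for i in range(vm_count):
        assignment[i % servers] += 1
    return servers, assignment

# The 64-node / 4,000-VM cluster from the slide.
servers, spread = assign_vms(4000)
```

With 4,000 VMs the function lands on exactly 5 servers at 800 VMs each, matching the slide; any remainder would be spread one VM at a time across the servers.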
  • It's very integrated into the current workflows. This is a new online service, so DPM's ability to act as a cloud-facing application has been added as part of it. How does it work? Someone subscribes to the online service, deploys the DPM-A bits on the DPM server, registers DPM with the online service, and then creates backups just as they would in the private cloud space. It's a seamless workflow that is also Azure-facing.
  • VMWARE Professionals - App Management

    1. Microsoft Virtual Academy
    2. Microsoft Virtual Academy Part 1 | Windows Server 2012 Hyper-V & VMware vSphere 5.1 Part 2 | System Center 2012 SP1 & VMware’s Private Cloud (01) Introduction & Scalability (05) Introduction & Overview of System Center 2012 (02) Storage & Resource Management (06) Application Management (03) Security, Multi-tenancy & Flexibility (07) Cross-Platform Management (04) High-Availability & Resiliency (08) Foundation, Hybrid Clouds & Costs ** MEAL BREAK **
    3. Microsoft Virtual Academy
    4. Standardized VM Templates Roles & Features Application Layers VM Templates 2.0: Service Templates Construction, Delivery & Consumption
    5. Standardized VM Templates Roles & Features Application Layers VM Templates 2.0: Service Templates Deployment into clouds Construction, Delivery & Consumption
    6. Standardized VM Templates Roles & Features Application Layers VM Templates 2.0: Service Templates Deployment into clouds Role-based Self Service Controlled Consumption Construction, Delivery & Consumption
    7. Application Construction, Delivery & Consumption
       Capability | Microsoft | VMware
       Request Private Cloud Resources | Yes | Yes1
       Role-Based Self-Service | Yes | Yes
       Standardized Templates | Yes | Yes2
       Template Granularity: Roles / Features | Yes | No
       Template Granularity: Application Layer | Yes | Yes3
       Service/Multi-Tier Templates | Yes | Yes3
       Deployment Across Heterogeneous Clouds | Yes | Yes4
       1. vCloud Automation Center allows for the requesting of private cloud resources but lacks a true CMDB capability in the box.
       2. Each VMware VM template will have its own VMDK, even if the template varies only slightly in its configuration options.
       3. No alternative to Server Application Virtualization (App-V), so it relies on regular installation methods or inflexible scripts.
       4. vCloud Automation Center allows deployment onto non-VMware infrastructure at a cost of $400 per managed machine + S&S; however, once deployed, it cannot be managed from vCloud Director along with other VMware-based VMs.
       VMware Information: http://www.vmware.com/products/datacenter-virtualization/vcloud-automation-center/features.html, http://www.vmware.com/files/pdf/management/vmw-vcloud-automation-center-faq.pdf
    8. Centralized Maintenance Maintenance, Management & Monitoring
    9. Centralized Maintenance Deep Application Insight Maintenance, Management & Monitoring
    10. Centralized Maintenance Deep Application Insight Connecting DevOps Maintenance, Management & Monitoring
    11. Centralized Maintenance Deep Application Insight Connecting DevOps Service Delivery Automation Maintenance, Management & Monitoring
    12. Centralized Maintenance Deep Application Insight Connecting DevOps Service Delivery Automation Extends beyond the private cloud Maintenance, Management & Monitoring
    13.
    14. !
    15. Application Maintenance, Management & Monitoring
       Capability | Microsoft | VMware
       Centralized Patching & Maintenance | Yes | Yes
       Non-Virtualized Infrastructure Management | Yes | Yes1
       Integrated Service Management | Yes | Lacks CMDB2
       Heterogeneous Automation | Yes | VMware Centric3
       Deep Application Insight | Yes | Yes4
       Integrated Dev-Ops | Yes | No5
       1. Would require purchases outside of the vCloud Suite, including vCloud Automation Center, vFabric Hyperic, and vCenter Operations Management Suite Enterprise Edition.
       2. vCloud Automation Center enables application owners or administrators to request infrastructure, but vCAC lacks any form of true CMDB for complete ITIL/MOF IT Service Management.
       3. VMware's vCenter Orchestrator has a limited set of plug-ins, the vast majority of which are VMware-centric; no mention of plug-ins for other enterprise management systems and tools such as those from HP, IBM, BMC, etc.
       4. Remediation limited to VMware best practices, thus lacking application-specific remediation guidance.
       5. Lab Manager deprecated, with customers expected to upgrade to vCloud Director, which has no connection to development IDEs.
       VMware Information: http://www.vmware.com/products/datacenter-virtualization/vcloud-suite/compare.html, http://www.vmware.com/products/datacenter-virtualization/vcloud-automation-center/overview.html, http://www.vmware.com/products/datacenter-virtualization/vcloud-automation-center/buy.html, http://www.vmware.com/products/application-platform/vfabric-hyperic/buy.html, https://solutionexchange.vmware.com/store/categories/21/view_all, http://www.vmware.com/products/labmanager/overview.html
    16. Protection of Key Applications & Workloads
       Capability | Microsoft | VMware
       Granular Workload Protection | Yes | No1
       Physical & Virtual Protection | Yes | No1
       3rd Party Integration | Yes | No2
       Centralized Role-Based Management | Yes | Yes3
       Tape Backup | Yes | No4
       Integrated Disaster Recovery | Yes | Yes
       1. VMware Data Protection offers no protection for the workloads within the virtual machine, focusing simply on the VM itself as the protection unit, and offers no protection of physical machines.
       2. VMware Data Protection is not extensible by 3rd parties.
       3. VMware Data Protection is capped at 10 appliances per vCenter, with a maximum storage of 2 TB / 100 VMs per appliance.
       4. VMware Data Protection offers no protection to tape media; disk only.
       VMware Information: http://www.vmware.com/files/pdf/techpaper/Introduction-to-Data-Protection.pdf, http://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vmware-data-protection-administration-guide-51.pdf
    17. ©2013 Microsoft Corporation. All rights reserved. Microsoft, Windows, Office, Azure, System Center, Dynamics and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.