
Talk Through Sogeti ALM 4 Azure


Talk-through of the VS2010 ALM and MTLM usage patterns for Windows Azure hosted application development presentation.

Published in: Technology
  • Hi Clemens,
    Interesting presentation. Microsoft mentions that the Azure compute emulator's behavior isn't 100% identical to Azure. This raises the question whether the compute emulator should be used for testing. What is your opinion on that?


  1. 1. INCREASE PRODUCTIVITY AND SOFTWARE QUALITY WITH AZURE AND VS2010 ALM Faster and better, with higher quality: design, develop, build, test and deploy Azure cloud applications with Application Lifecycle Management. AZURE PROVIDES NEW OPPORTUNITIES FOR BUSINESSES. MANY ORGANISATIONS ARE STARTING TO DEVELOP CLOUD APPLICATIONS. MICROSOFT VISUAL STUDIO 2010 AND TEAM FOUNDATION SERVER 2010 ARE THE APPLICATION LIFECYCLE MANAGEMENT TOOLS TO DEVELOP AZURE APPLICATIONS. THIS PAPER DISCUSSES HOW APPLICATION LIFECYCLE MANAGEMENT PROCESSES AND TOOLS CAN BE USED IN AN OPTIMAL WAY, BY MAKING USE OF THE KEY CHARACTERISTICS OF THE AZURE PLATFORM, TO RAISE THE PRODUCTIVITY AND QUALITY OF SYSTEMS DEVELOPED FOR THE CLOUD. ALM 4 Azure is a presentation covering different levels of ALM automation for Azure cloud services development. An important notice: the topic covers the modern software development tools and practices you need to know while developing systems that run on the Azure cloud ("4 Azure"). This is something completely different from, but related to, ALM on Azure, where we use tools out of the cloud (SaaS) to execute modern system development. You can use them together, 'on' and 'for' Azure, but this presentation only covers the 'for' Azure side: how do we need to execute our practices and use our on-premise tools for cloud services? The agenda: what is Application Lifecycle Management, why do we want it and what goals do we pursue; what are the specific characteristics of Azure and cloud computing; where can these characteristics help us reach those goals and where do they pose challenges. The main topic on the agenda is five different ALM 4 Azure scenarios with the supporting technologies explained.
  2. 2. Application Lifecycle Management. All tool vendors and methodologies have their own definition of Application Lifecycle Management. What can we learn from these definitions? A wide variety in scope. ITIL is focused on the operational side of ALM, the Wiki and Forrester descriptions are more focused on the Software Development Lifecycle [SDLC], and Microsoft takes a bigger scope with business, development and operations, although the tooling and the assessment are focused on the SDLC. Borland also talks about a wider scope, when you look at the RUP-like model; but their main pro is the focus on "many processes and many tools", so it should fit more than one environment. Besides this difference in scope, everybody agrees on terms like measurable, predictable, traceable, manageable, monitored, etc. It smells like "in control". We use this image when defining ALM, and it is about being "in control", but it is even more about communication: how the different ALM roles, who are all responsible and accountable for the success of a software development project, communicate in a seamless manner. When talking about Application Lifecycle Management [ALM], terms like accountability, governance and compliance are used. All of them refer back to "working together": how do we work together during the application lifecycle? ALM is not about tools, it's about working together. Working together seamlessly and flexibly while staying in control, measurable and responsible. All the roles in the application lifecycle have a part in this collaboration effort. Tools can help, but they aren't the core driver. There are lots of examples of wrongly interpreted business requirements, miscommunication between development and test, applications which won't run in production, and operations teams who don't understand the applications. All of them result in more work, more faults and more costs, or even only costs and no application because the project was unplugged.
Most of these project faults, these extra costs, are a slip in communication. Having a strategy for how people should collaborate within the lifecycle is one important piece of Application Lifecycle Management. Most organizational parts already have some kind of process or methodology in place. Having an approach for sharing information and sharing ideas from each role's point of view with the other roles is a key success factor for lowering costs and raising the business value of IT investments. Tools can help with this goal. Having gear in place which supports and stimulates collaboration is a driver for successful Application Lifecycle Management. But without a plan for how people should collaborate and communicate, tools are useless. Creating new modes of collaboration supported by technology can only be done by addressing the human aspect. More specifically, we need to address some of the worries and obstacles people encounter when collaborating using technology. The three most important concerns are:  Trust. Trust is a condition for social interaction. People will only work with people, companies, tools and information they know they can trust. Before we can expect collaboration to take off online, there must be a way for people to get this "trust". A topic closely associated with trust, when it refers to people, is identity.  Collaborative culture. If one individual is the greatest collaborator in the world, he or she is probably not getting anywhere. Only when all people involved are part of the same collaborative culture will new levels of creativity and productivity be reached. A collaborative culture consists of many things, including: o Collaborative leadership; o Shared goals; o A shared model of the truth; and o Rules or norms.  Reward. Changing the way people work takes effort, so it must be
  3. 3. clear for the parties involved what they will gain, at a personal level, from collaborating in a new way. Surprisingly, a "reward" for successful collaboration is most often of a non-financial nature. When working together with work packages, seamless communication needs to address these challenges. All the different roles in the application lifecycle create artifacts, products; these products also need to work together, they need to fit: a single point of truth. Requirements, designs, the business case, the test cases, the source files and the operational information all need to work together as one consistent product. When one gets out of sync, the involved roles should get a notification. Tools can help with this. The Visual Studio 2010 family is made up of a central team server and a small selection of client-side tools. The team server, Team Foundation Server 2010, is the backbone of application lifecycle management, providing capabilities for source control management (SCM), build automation, work item tracking and reporting. In this release Microsoft expanded the capabilities of Team Foundation Server by adding a true test case management system and extended it with Lab Management 2010, a set of capabilities designed to better integrate both physical and virtual labs into the development process. On the client side, developers can choose between Visual Studio 2010 Professional, Premium or Ultimate. For testers and business analysts there is Test Professional, a new integrated test environment designed with manual testers in mind. For those people who participate in the development effort but for whom Visual Studio, the IDE, is not appropriate, including Java developers, project managers and stakeholders, the Team Foundation Server extensibility model enables alternative interfaces. These include both Team Explorer, a standalone tool built with the Visual Studio shell, and Team Web Access.
These tools enable anyone to work directly with Team Foundation Server. And there are cross-product integration capabilities with Microsoft Office®, Microsoft Expression and SharePoint Server, with the new SharePoint dashboards. Azure: Windows Azure™ is a cloud services operating system that serves as the development, service hosting and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage web applications on the internet through Microsoft® datacenters. Windows Azure has several unique characteristics as a platform. 1. Hosted services allow deploying to two identical but independent environments: the staging environment and the so-called production environment. When you deploy a service you can choose to deploy to either the staging environment or the production environment. A service deployed to the staging environment is assigned a URL with the following format: http://{deploymentid}.cloudapp.net. A service deployed to the production environment is assigned a URL with the following format: http://{hostedservicename}.cloudapp.net. The staging environment is useful as a test bed for your service prior to going live with it. In addition, when you are ready to go live, it is faster to swap VIPs to move your service to the production environment than to deploy it directly there. 2. Guest OS versions are identical for every instance. In the configuration of the hosted service the OS version is set, and every instance will be built from this same image. This results in a situation that is very hard to accomplish on premise: test, acceptance and production environments that are identical. The Windows Azure guest operating system is the operating system that
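The two URL formats above can be pictured with a small helper. This is an illustrative sketch, not an Azure API; the `.cloudapp.net` suffix was the platform's hosted-service domain at the time, and the parameter names are my own.

```python
def service_url(environment, deployment_id=None, hosted_service_name=None):
    """Return the public URL for a hosted service deployment (sketch).

    Staging deployments get a generated deployment-id host name;
    production uses the hosted service name.
    """
    if environment == "staging":
        return "http://%s.cloudapp.net" % deployment_id
    return "http://%s.cloudapp.net" % hosted_service_name
```

Because the staging host name is a generated ID, any test or script that hard-codes it breaks on the next deployment; this is why the VIP swap (which moves a deployment under the stable production name) matters later in the presentation.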
  4. 4. runs on the virtual machines (VMs) that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released, or you can perform upgrades manually at a time of your choosing. All role instances defined by your service will run on the guest operating system version that you specify. 3. Don't assume your state is safe. Instances (VMs) are recycled on a, for us, random basis. Windows Azure ensures that the application is always accessible, but locally stored information isn't safe. 4. In-place upgrades. Windows Azure role instances can be easily upgraded. Windows Azure organizes instances of your roles into virtual groupings called upgrade domains. When you upgrade one or more roles within your service in place, Windows Azure upgrades sets of role instances according to the upgrade domain to which they belong. Windows Azure upgrades one domain at a time: stopping the instances running within the upgrade domain, upgrading them, bringing them back online, and moving on to the next domain. By stopping only the instances running within the current upgrade domain, Windows Azure ensures that an upgrade takes place with the least possible impact on the running service. 5. Clear environment costs. Azure applications are developed locally. It is also possible to run the Azure application in this local environment by using emulators. The Windows Azure compute emulator enables you to run, test, debug, and fine-tune your application before you deploy it as a hosted service to Windows Azure. The Windows Azure storage emulator provides local instances of the Blob, Queue, and Table services that are available in Windows Azure. If you are building an application that uses storage services, you can test locally by using the storage emulator. For deploying the Azure application to the Azure platform, a package and a configuration file need to be created.
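The guest OS choice and instance count live in the service configuration file. A minimal sketch, with illustrative names, of what such a `.cscfg` looks like (the `osFamily`/`osVersion` attributes are from the Azure SDK of that era; a fixed `osVersion` value pins the image, `*` opts in to automatic monthly upgrades):

```xml
<!-- Minimal ServiceConfiguration sketch; service and role names are illustrative. -->
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="1" osVersion="*">
  <!-- osVersion="*" means every instance follows automatic guest OS upgrades;
       a fixed version string builds every instance from that same image. -->
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```

This one file is what makes test, acceptance and production environments identical: every instance in every environment is built from the image this configuration names.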
The package contains all the files and the configuration information about the guest OS it needs, plus other configuration information. Environment configuration: to start developing Azure applications, and to run the application locally or deploy and run it on the Azure platform, you need: 1. a version of Visual Studio 2010 2. the Azure SDK 3. a Windows Azure subscription set up. For the demos you need more (for a Windows 7 environment): - TFS Basic - Microsoft Test and Lab Manager - build service, controller and agent - test controller and test agent - Visual Studio SP1 - Feature Pack 1 and 2 - Windows Virtual PC - an image with the test agent configured - PowerShell - …
  5. 5. ALM 4 Azure. The main goal when configuring the technical tool support for your processes is that it must support the same goals. For example, you execute an agile process with your development team, so your team is flexible and efficient and can deliver new functionality to the business at a predefined quality, repeatably and fast. The tools you are using must support and drive this goal. When your process of delivering new functionality takes days to deploy and configure, then even when you are capable of realizing this functionality, your team still can't meet the goals. Let's assume we are executing an agile-like process and have a risk-driven mindset. We can write down several goals the tools must support. Every team has its own goals, but the ones written down in this list are very common. Every change in technical tool support must support and drive towards these goals. No team is the same. This list of scenarios isn't a maturity-level kind of list; they are more like advancement levels of tool support. Not every team wants to go through the knowledge gathering and the hardware and software investment necessary to implement a specific scenario. It is definitely a balance between effort, money and benefit. 1: Engineering only. The developer-only scenario is for really small teams: one developer, who also does the testing. Most exercises and hands-on labs make use of this scenario.
  6. 6. The engineers create functionality in Visual Studio 2010. The source code is checked in to Team Foundation Server. Engineers can make use of work items in TFS, but this isn't necessary. Source code is compiled and unit tests, if any, are executed. Other code quality checks can be performed; implementation checks with the layer diagram are interesting but not necessary. There are no real quality gates. Engineers deploy the application from Visual Studio, either by creating a package and configuration file or by using the 'single click' deployment from within Visual Studio. The common deployment flow: 1. Local development in emulator environments. 2. A hybrid of local and Windows Azure, once storage is stable; different engineers can work against the same data source. The compute emulator is very similar to the hosted service environment on Azure, but the storage emulator has some big differences compared with Azure storage. 3. Everything in Windows Azure, in staging. 4. Swap from staging to production. • Debugging is not currently supported in Windows Azure; IntelliTrace is. • Set breakpoints and debug in the local development fabric. • Test initially with development storage, but test with Windows Azure storage to test with large volumes of data whilst still keeping your roles local for debugging. • Once you are happy with the worker/web roles running locally, deploy everything to staging and run tests in this environment. • Once all tests in staging pass, promote everything to production. • Worker roles in the 'staging' project are operational, and as such will process messages from queues etc. You should design for this. • Staging also costs money. • Storage costs are far lower than compute costs; test as much locally as possible.
  7. 7. The developer-only scenario has a lot of benefit, and many organizations start with it because it's painless. But it also has faults: errors can slip in easily and cost time. Pro: Easy installation and configuration Single click deployment from VS2010 Con: No collaboration Easy deployment errors (configuration) What about test and ops? 2: Developer with manual tester. A somewhat bigger team with a specific test role. Developers have their quality gates within the build. Testers analyse the quality of the system and help the team with risk classifications. It can get a bit challenging when entering this scenario: testers and engineers often have different approaches, different methodologies and different goals, and now they have to start working together. Working with work items and the supported process templates will help the adoption; providing the team with the benefits the technical support can give them when working together will also help. But mainly it's a cultural process (see the first ALM slides): people have to work together and take shared responsibility for the success of the system. Engineers and testers work together with work items. In Visual Studio Team System 2010 all test roles are provided with clear and better support within the application lifecycle. Testers no longer use their own separate technical tools, but use integrated tools that are also used by architects and developers, effectively tearing down the wall between developers and testers. But good tools are not enough. A clear separation of roles, tasks, and authorizations is also necessary. Finally, and most importantly, a structured approach determines how successful you are with your test strategy, for example the role of the tester and the usage of work items in collaboration with engineers.
During the planning phase of the project, also called iteration 0 [first blue piece], user stories are collected, brainstormed and defined. In VSTS this information is collected in the work item type 'user story'. During the planning of the iteration the team starts to break down the user stories [those selected for that iteration] into implementation tasks. Within VSTS this is done in the implementation tab of the user story work item; the new 2010 functionality of hierarchies between work items is used for this. More reading: Testing-with-VSTS-2010-and-TMap-Part-01-User-stories.aspx More reading: the-Application-Lifecycle-with-Visual-Studio-2010-Test-Edition.aspx
  8. 8. As in scenario 1, the engineers create functionality in Visual Studio 2010 and the source code is checked in to Team Foundation Server. The test role is added to the team: testers specify and execute manual test cases in Microsoft Test Manager. Same as scenario 1. Same as scenario 1. Tests are executed against the Azure staging environment. Bugs are filed in TFS. Using work items together with engineers is a must, starting with bugs, followed by user stories, test cases and tasks. Microsoft Test Manager 2010 is for testers what Visual Studio is for developers. That is to say, where Visual Studio is an IDE, an integrated development environment, Test Manager is an ITE, an integrated test environment. This is the interface a tester will use to create test cases, organize test plans, track test results, and file bugs when defects are found. Test Manager is integrated with Team Foundation Server, and is designed to improve the productivity of testers. While I am not going to do a deep dive into all that Test Manager can do, it is important to understand how it integrates with the Visual Studio agents to make the most of test case execution, and to ensure that when a tester files a bug, it is actionable with very little work on the tester's part. Test cases are work items with a specific tab where test steps can be defined. These test steps can only be edited from within MTM. You can create test cases for your manual tests, with both action and validation test steps, by using Visual Studio 2010 Ultimate or Visual Studio Test Professional. You can add test cases to your test plan using Microsoft Test Manager. Where to execute tests? The challenge of this scenario is that the staging environment usage costs money, and with some serious testing this will grow every sprint. By balancing what is tested where, these costs can be minimized and kept in a comfortable range. Executing all the tests in the staging environment isn't an option; it is too expensive.
There are two other environments where tests can be executed. One is the environment the tester uses to specify the tests, the other is the build environment. With Azure, the challenge of running an Azure application outside of Azure is the availability of the compute emulator. And since the build environment is a server environment, it isn't the recommended place to execute manual tests; only automated tests, like unit tests, are interesting to execute in the build. Balancing the testing effort over the environments is challenging. What is tested at developer level, with unit tests, isn't useful to test again during functional testing. And what should be tested in the staging environment when all functional and system tests are already executed in the tester's environment?
  9. 9. Execute tests in the test environment by using the compute emulator and CSRun. To execute tests on a tester's environment without Visual Studio 2010 installed, you need to set up and install several things. The easiest way is to let each environment run its own compute emulator and version of the cloud application under test, and make use of Azure storage for data. You need to install the Azure SDK and have IIS 7.0 available on the test machine (2). With CSRun.exe, testers can launch the cloud application within their own compute emulator using the CSX folder and the CSCFG file (1). The only challenge left is the port number (3) created by the CSRun.exe command. Microsoft Test Manager's record and playback capability will be useless when this one changes over time, and it will change: test cases would have to be rewritten and re-executed. This change of URL makes the use of shared steps (4) in Microsoft Test Manager a must. You can re-record them without breaking the record and playback notion of the previously executed test cases. Shared steps will also prove their use when you want to execute a test case in staging and production after you have recorded it in the emulator. Data from the Azure instance while executing a test: a challenge with testing cloud systems is that instances recycle on a regular basis, so it's not certain that the environment where the tests were executed is still the same when the engineer tries to reproduce and resolve the bug. So, when an engineer wants to find the bug in the system, it is not certain he has access to the log files of the system, for example IIS logs, trace logs etc. When the cloud application has diagnostics enabled, you can create a custom diagnostic adapter which collects information, queries the logs for the test execution time frame and adds these logs to the test result for the engineer.
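The port problem above comes down to never hard-coding the environment-specific base URL into a recorded step. A sketch of the idea, keeping the target configurable; `TEST_TARGET`, `EMULATOR_PORT`, `DEPLOYMENT_ID` and `HOSTED_SERVICE_NAME` are hypothetical settings invented for this example:

```python
import os

def base_url():
    """Resolve the base URL for the current test target (illustrative).

    The emulator port changes on every CSRun launch, so it is read from
    configuration rather than baked into the test; staging and production
    differ only in the host name.
    """
    target = os.environ.get("TEST_TARGET", "emulator")
    if target == "emulator":
        return "http://127.0.0.1:%s" % os.environ.get("EMULATOR_PORT", "81")
    if target == "staging":
        return "http://%s.cloudapp.net" % os.environ["DEPLOYMENT_ID"]
    return "http://%s.cloudapp.net" % os.environ["HOSTED_SERVICE_NAME"]
```

Shared steps play the same role inside MTM: the environment-dependent "open application" step is factored out once, so only that step needs re-recording when the port or host changes.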
  10. 10. See demo WAD MTM Adapter. Microsoft Test Manager capabilities for cloud system testers: Microsoft Test Manager has some very useful features and capabilities for testers of cloud systems. Besides the usage of work items, which helps the testers get the same heartbeat as the engineers, these capabilities help manage the testing effort with test plans, configurations, suites and the bug workflow. Two features are really useful while testing cloud systems: shared steps and the diagnostic adapter extensibility. Shared steps make it easy to handle the different environments; tests are executed first on the compute emulator, second on staging and finally on the production environment, all with different URLs. The diagnostic adapter extensibility makes it easy to collect environment information for bug solving. Two MTM features aren't working for cloud systems in relation to MTM: test impact analysis and IntelliTrace. IntelliTrace does work from within Visual Studio when the deployment is configured to use it, but not from MTM. The developer with manual tester scenario has a lot of benefit. The main benefit is that the system is tested in a well-thought-out manner. That engineers and testers are connected and have the same heartbeat will solve some big project management challenges and save a lot of time. For test execution the biggest challenge is not to test everything on Azure, but to balance it: only platform verification tests on Azure, the other system and functional tests on the compute emulator. This needs some configuration of the test environments. Pro: Easy installation and configuration Single click deployment from VS2010 Testers connected, same heartbeat as dev Proven quality Con: Easy deployment errors (configuration) Time consuming (deploy and test) Not repeatable (annoyed testers) 3: Developer with manual tester and deployment build
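The core of the diagnostic adapter idea is filtering the cloud application's logs down to the test's execution window before attaching them to the test result. The real adapter is an MTM data collector; this is just a generic sketch of that filtering logic, with invented names:

```python
from datetime import datetime

def logs_for_test_run(entries, start, end):
    """Filter (timestamp, message) log entries to a test execution window.

    This mirrors what a custom diagnostic adapter does: query the
    diagnostics logs (IIS logs, trace logs, ...) for the test's time
    frame, so the engineer gets the relevant evidence even after the
    instance that served the test has been recycled.
    """
    return [(ts, msg) for ts, msg in entries if start <= ts <= end]
```

Because instances recycle, capturing this slice at test time is the only reliable way to preserve the environment state the bug was found in.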
  11. 11. To drive forward when the team grows, or when we need to put some more effort into the stability of the system development process, we have to look at the build process on the build server and at deployment. Making the deployment of cloud systems repeatable for different environments will make the whole process more stable; we as a team can deliver functionality in a faster, proven way. When we work agile we want to deliver functionality in a fast, flexible way, and having an automated process in place which supports this will raise our quality bar. Same as scenario 1 for engineers and testers: they specify test cases and write source code, and test cases are executed on the compute emulator. In collaboration with operations, build and deployment scripts are configured. During the build, unit tests are run and deployment packages and configurations are made. As a final step of the build, the cloud system is deployed to the Azure staging environment. Tests are executed against the Azure staging environment. Bugs are filed in TFS. Using work items together with engineers is a must, starting with bugs, followed by user stories, test cases and tasks. Automating deployment: manually deploying Azure systems is error-prone; changing configuration files and connection strings can go wrong, resulting in an unstable deployment with annoyed testers (the system isn't ready for testing) and users (we can't show them anything). There are several different ways to deploy a system to Azure; the PowerShell cmdlets are the easiest to use. In the next demo we use the cmdlets, the build targets and MSBuild to create packages and deployments. You don't want to configure the automatic deployment on a continuous integration build; this definitely won't work. Release builds or sprint review builds, which aren't run that often, can do the deployment.
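The packaging half of that automation boils down to invoking CSPack from the build. A sketch of assembling that command line; the paths are placeholders, and the exact flags should be checked against the Azure SDK's cspack documentation:

```python
def cspack_command(service_definition, output_package):
    """Assemble an illustrative CSPack command line (placeholder paths).

    cspack.exe turns the service definition plus the role binaries into
    the .cspkg package that the PowerShell cmdlets (or the portal) can
    then deploy to staging.
    """
    return ["cspack.exe", service_definition, "/out:%s" % output_package]
```

Scripting this in the build, rather than clicking through Visual Studio, is what makes the "repeatable 'proven' deployments" claim in the next slide's Pro list possible.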
  12. 12. Having automated deployment in place is a boost for the quality of the system delivery process. Although it can be challenging to get the initial configuration right, since a lot of different technologies must be used, the benefit the team gets from it is high. Pro: Easy installation and configuration No click deployment from build Repeatable 'proven' deployments* Testers connected, same heartbeat as dev Proven quality Con: Time consuming testing Application can contain 'annoying' bugs Build workflow knowledge necessary Powershell, ccproj tweaks, target files, certificates 4: Developer with automated regression tests, manual tests and deployment build. With automatic deployment in place we can start to configure automatic testing. You can run automatic UI test cases from within Visual Studio with the compute emulator on the developer environment, so automatic testing could be in place earlier. But running automatic UI tests on a developer's environment, executed by the developer and with the results collected in Visual Studio, is more for bug reproducing and solving than for testing, where you say something about the quality of the system. You want the tester to execute them on a test environment, with the results in the test plans. Automatic testing will speed up system development, and testers and developers have the same heartbeat; but the further you get in a project, the more regression test cases will need to be executed. This execution of regression tests will take more and more time, bringing friction to the same-heartbeat mindset.
  13. 13. Engineers write source code and testers specify and execute test cases on the compute emulator. In collaboration with engineers, manual test cases are automated with CodedUI and associated with test cases. Test case automations are 'tested' (dry run) on the developer's environment. During the build, unit tests are run and deployment packages and configurations are made. As a final step of the build, the cloud system is deployed to the Azure staging environment. Tests are executed against the Azure staging environment. Bugs are filed in TFS. Using work items together with engineers is a must, starting with bugs, followed by user stories, test cases and tasks. Associated test case automations are executed from MTM on the compute emulator. There are different technologies available to create tests within Visual Studio. Two of them are suitable for the automation of manual tests: web tests and CodedUI tests. The main difference between them is that CodedUI really interacts with the UI; it uses the IE DOM for automation. Web tests use HTTP GET and POST to automate the tests. The CodedUI functionality is strongly connected with Microsoft Test Manager's action recordings (created while executing a test case in MTM). CodedUI tests are better suited for functional tests, and web tests are better suited for performance tests and load tests.
  14. 14. Execute tests as soon as possible in the lifecycle. We can divide the different test technologies into development tests, load and performance tests, automated UI tests and manual tests, and create a test-specific subcategory for the automated and manual tests with the categories: functional testing, integration testing, acceptance testing and platform testing (Azure-specific; it answers the question: will it run in the cloud, and are the deployment and configuration correct?). Development tests are executed during development and in the CI build. For load and performance tests we need a full-blown environment; this is easy with Azure, but we do need the complete feature under test available in the cloud. So these kinds of tests, when part of the Definition of Done, will probably be executed at the end of a sprint. Automated UI tests are really valuable; automate as soon as possible. These tests can cover just one feature (functional testing) and can be created and executed in the compute emulator as soon as the feature is implemented. Automated integration tests are harder to execute within a sprint, because they often cover more scenarios; when integration testing is part of the Definition of Done, these should be moved to the undone list (see: Agile Test practices with Microsoft Visual Studio 2010 [TMap and Scrum]). Platform testing focuses on specific things, like whether the deployment and configuration are correct and whether the system runs correctly in the cloud. Often these are very common tests and can be executed within a sprint after the build; automate as soon as possible.
Acceptance testing is often done outside the team by the business users; keeping those tests connected with the team is very useful for bug solving, test coverage and automation. CodedUI tests generated from Microsoft Test Manager's action recordings have all the steps as methods in the CodedUI test. To make the test suitable for execution in different environments (emulator, staging, production) without having to change the code constantly, change the test data parameter settings. Another way to customize the behavior of the CodedUI test is the UIMap editor, which can be found in Feature Pack 2. It helps you tune the search conditions for controls, but, more importantly for Azure applications, it helps you extract methods like 'open application' out of the XML and code generation into a partial class, where you can edit the behavior for all test methods that use this method.
  15. 15. There are multiple ways to execute your test automation effort: you can run tests from and on Microsoft Test Manager, from and on Visual Studio, during the build on the build server, or during the build on test agents configured with test controllers, or … many flavors. So, where should you execute your automated tests to test an Azure application, and how should you configure your test infrastructure? One thing to keep in mind when making this decision: testing on Azure costs money. You can configure VS2010, MTM or the build server to execute the automated tests against an Azure deployment, but it's cheaper to run most of them against an emulator deployment. For sure you need to balance this decision; there are tests which have to run against the Azure deployment, so-called platform verification tests. They verify that the app runs correctly in the cloud and that the app is configured correctly in the cloud. All the other tests, functional, system etc., can be executed on an emulator deployment. Both test executions (emulator deployment and Azure staging/production deployment) can be configured to run from VS2010, MTM or the build server, with or without the use of test agents. Each execution platform has its pros and cons. For example, running from VS2010 is more of a developer test, which verifies a bug fix or repro. Test results aren't collected in TFS or Test Manager, and there is no connection with linked user stories. It's an easy way to dry-run your tests, especially because the emulators are already in place and loaded by VS2010; no additional actions need to be taken. Execution on the build server is kind of strange: executing manual tests on a server. It also gets challenging when you want to load the emulator during the build to run tests against. Also, there will be no collection of test results in Test Manager (no reporting on test points).
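The balancing rule described above, platform verification tests against the paid Azure deployment and everything else against the emulator, can be written down as a simple routing table. Category names here are illustrative, not an MTM concept:

```python
# Illustrative routing of test categories to execution targets, following
# the rule that only platform verification (and full-environment load)
# tests must hit the paid Azure deployment; the rest can run against the
# local compute emulator.
TEST_TARGETS = {
    "unit": "build server",
    "functional": "compute emulator",
    "system": "compute emulator",
    "platform_verification": "azure staging",
    "load": "azure staging",
}

def target_for(category):
    """Default unknown categories to the cheapest target, the emulator."""
    return TEST_TARGETS.get(category, "compute emulator")
```

Making this routing explicit keeps the staging compute bill predictable as the regression suite grows sprint over sprint.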
  16. 16. The preferred way of executing automated tests is from within Microsoft Test Manager, with associated automation in manual test cases. Another benefit is that you can add scripts and deployment actions which run before and after an automated test.

Flavor E: Execution from MTM during the build
Purpose: part of the BVT. Preferred configuration above flavor C. Flavors D and E can be configured together.
Triggered by: build.
Information:
Configure a test controller (register it with a project collection).
Configure test agents on clients (interactive mode; can be the same machine as MTM).
Configure Lab Center in MTM to use the test controller and create a test 'agent' environment.
Associate the CodedUI test with a Test Case work item from VS.
Create a build task to run a TCM or MSTEST task for the test plan (see "Run Test Cases with Automation from the Command Line Using Tcm"); runs distributed over test environments.
Pro:
Tests can be configured to run on differently configured environments.
Test results in MTM and TFS.
Triggered by the build.
Test settings from MTM.
Con:
Hard to configure.
Maintenance of TCM commands in the build.

Flavor D: Execution from Microsoft Test Manager
Purpose: part of the regression tests (a different type of test than the BVT).
Triggered by: MTM user, right mouse click on a test case, run.
Information:
Configure a test controller (register it with a project collection).
Configure test agents on clients (interactive mode).
Configure Lab Center in MTM to use the test controller and create a physical test 'agent' environment.
Associate the CodedUI test with a Test Case work item from VS; runs distributed over test environments.
Pro:
Test results in MTM.
Test settings from MTM.
Full control by the tester.
Con:
The test controller needs to be configured with a project collection (one controller per collection).
Manually triggered by the tester (or pro).
Hard to configure.
Hard to see which test cases are automated.
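For flavor E, the build task essentially composes a tcm.exe command line. The sketch below (Python used only to build the string; the option names follow the TFS 2010 "tcm run /create" syntax, while the server URL, project name and ids are placeholders) shows the shape of such a call:

```python
# Sketch: composing the tcm.exe call a build task could issue to run a
# test plan's automated cases. All concrete values are placeholders.
def build_tcm_run_command(collection, project, plan_id, suite_id, config_id, title):
    """Compose a tcm.exe 'run /create' command line (TFS 2010 style)."""
    return (
        f'tcm.exe run /create /title:"{title}" '
        f"/planid:{plan_id} /suiteid:{suite_id} /configid:{config_id} "
        f'/collection:{collection} /teamproject:"{project}"'
    )

cmd = build_tcm_run_command(
    "http://tfsserver:8080/tfs/DefaultCollection",  # placeholder collection URL
    "AzureApp", 7, 12, 2, "BVT after staging deployment")
print(cmd)
```

Keeping this composition in one place eases the "maintenance of TCM commands in the build" con mentioned above, since plan, suite and configuration ids change as the test plan evolves.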
  17. 17. Pro:
No-click deployment from the build.
Repeatable 'proven' deployments.
Testers connected, same heartbeat as dev.
Proven quality.
Automated BVT on different environments.
Comfortable acceptance testing.
Done done.
Con:
Build workflow knowledge necessary (PowerShell, ccproj tweaks, target files, certificates).
Test infrastructure knowledge necessary.
Balanced thinking about test automation needed.

5: Developer, Automated Tests, Build, Deploy, Acceptance Test and Operations
The final scenario adds acceptance testing and operations to the process. Acceptance testing is often done by the business users and is often very disconnected from the team. In the previous scenarios there was a lot of focus on setting up environments so testers won't get annoyed that the test environment isn't ready. This is even more important here: annoyed business users who can't test the system are not good for adoption. One big benefit of Azure is that all environments are the same, with the same guest OS. So deployment packages and configuration files that work in one environment will also work in the other environments. Operations needs to provide the business with valuable information about how the system is used, so the business can make decisions about the project portfolio. For cloud applications the monetization of the usage is also interesting.
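As a purely illustrative sketch of the kind of usage information operations could feed back to the business (the log format and field names are invented for this example, not taken from Azure diagnostics):

```python
# Sketch: aggregating per-feature usage from a request log, so the business
# can see what is actually used when making portfolio or monetization decisions.
from collections import Counter

def summarize_usage(request_log):
    """Count requests per feature."""
    return Counter(entry["feature"] for entry in request_log)

log = [
    {"feature": "expense-entry", "user": "a"},
    {"feature": "expense-entry", "user": "b"},
    {"feature": "reports", "user": "a"},
]
print(summarize_usage(log).most_common(1))
```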
  18. 18. Team development.
The team implements the requested features, specifies test cases and determines operational SLA and usage parameters. On local environments ('compute and storage emulators'): execution of unit tests, dry runs of CodedUI tests (customize the code to handle different environments), association of CodedUI tests with an MTLM test case, and execution of the automated test cases from MTLM (making use of CSRun). Switch from emulator storage to Azure storage as soon as possible, due to environment differences and to share the same test/development data storage within the team (green line). Engineering and design should also focus on tracing and diagnostics; this is important during testing and operation of the cloud application.

Build, unit test, deploy, UI test flow, manual test.
During a build (not the CI build, but for example a sprint review build) the application is deployed automatically to the staging environment: first compile, run the unit tests, and create the deployment package and configuration files for the different environments. After deployment, automated platform/staging tests are run. These are CodedUI tests which verify the installation and stability on the Azure environment. The test infrastructure can be configured to run the tests from different environments, to distribute the tests for time saving or to test the Azure application with different client configurations.
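As a hedged illustration of those per-environment configuration files, a ServiceConfiguration.cscfg for the test environment might differ from the production one only in its storage connection string (service name, role name, account name and key below are all placeholders, not real values):

```xml
<?xml version="1.0"?>
<!-- Illustrative ServiceConfiguration.cscfg fragment; all values are placeholders -->
<ServiceConfiguration serviceName="MyCloudApp"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- Typically the only value that changes between the test and production files -->
      <Setting name="DataConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=teststorage;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```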
The tests are executed during the build, with collection of the results in MTLM by using TCM.exe. During the build, 'Build Verification Tests' are executed, and after deployment 'Environment Verification Tests'; when these are successful, you could VIP-swap the cloud application from staging to production for additional manual testing or for the sprint review (you can use the Management API for the VIP swap, or CSManage.exe).

Release drop.
The package created during the build is reused in another Azure subscription; for security, the keys in this environment aren't used by testers or developers.
"Adatum uses the same package file to deploy to the test and production environments, but they do modify the configuration file. For the aExpense application, the key difference between the contents of the test and production configuration files is the storage connection strings. This information is unique to each Windows Azure subscription and uses randomly generated access keys. Only the two key people in the operations department have access to the storage access keys for the production environment, which makes it impossible for anyone else to use production storage during testing accidentally." (from: Moving Applications to the Cloud on the Microsoft Azure Platform)
The business users execute their acceptance tests against the staging environment of the production subscription. By using MTLM they can execute manual tests, automated tests and exploratory tests while still being connected with the TFS repository. This still gives the capability to provide very rich bug reports to the team. When acceptance testing is done, the Azure application is manually swapped to production.

Operations.
(See this PDF: Monitoring and Diagnostic Guidance for Windows Azure-hosted Applications.)
The goal of application monitoring is to operationally answer a
  19. 19. simple question: is the application running efficiently, within its defined SLA parameters and without errors? If the answer to this question is no, then the operations team needs to be made aware of this condition as soon as possible. Effective placement of monitoring on critical application and system breakpoints will help manage the hosted solutions. This document is intended for development and operations teams that need to monitor applications hosted on the Windows Azure Services Platform, enabling incident, problem and knowledge management. The same sources are used for the MTLM Azure diagnostic data collectors.

Two more interesting scenarios.
Not specifically belonging to the ALM 4 Azure story, but they give some food for thought. TFS on Azure was announced at PDC 2010... ALM infrastructure on Azure... the opportunities are endless.

Thanks for reading, all comments are welcome. This is work in progress....