Modern developer workflow


Describes a path to implement Agile in TFS that I use when discussing the approach with teams and decision makers.

  • Our vision of a dynamic IT organization is that the lifecycles of the PMO, the App Dev team and the Ops team are tightly connected. The use of shared processes and models enables an agile, dynamic organization with continuous improvement.

    The PMO can evaluate current and future investments using up-to-date data on projects, resources and services, combined from across IT. Unified schemas have been extended beyond resources, tasks and issues to allow tracking of all kinds of assets across IT. A unified data and reporting structure means that data can be combined and analyzed across disciplines. For example, questions can be answered such as: “How many maintenance hours does this service cost us across dev and ops? Which development projects caused the most disruption at deployment? What do they have in common? How are they different from successfully deployed projects?” This near-real-time data can feed portfolio planning and track success against business intent. Management at all levels will be able to use insights from real data to drive decisions about application consolidation, maintenance and renewal.

    Business drivers and IT can together define, refine and track goals for projects in the form of requirements, KPIs and SLAs. These are reflected in the tools, so that goals can be verified, tested and instrumented during development and monitored during operations. Business customers can trace portfolio decisions downstream, and trace operational KPIs and quality indicators back to the original investment decisions.

    Program managers, development and operations teams can drive estimation models from a library of historical data across projects. The combined BI warehouses will track actual quality, capacity use and performance across prior projects, aggregatable at any level. This allows estimation models to draw on historical data and its variation quite precisely to drive future project estimation.
    A common model will allow architects and developers to design (and IT Pros to extend) applications and services for management, deployment, testing, security and performance. Distributed applications can be packaged and transferred with the metadata necessary for deployment. Levels of the model will transform from the logical design down to the specific details of the data center. SLAs and reporting metrics will be captured in the model. The base services of authoring, modeling, storing, implementing and validating are pluggable, so that new sets of tools to design new qualities into applications and services can be created.

    A common authoring platform and toolset will allow domain-specific models to be defined that refer to the base application models: for example a health model, process model, config model, best-practices models, etc. (This is shown in the diagram below.) The basic authoring platform allows a plug-in model for third-party tools and new DSLs. For operations, System Center provides engines that sit on top of the CMDB (which contains these models) and deliver scenarios such as deployment, configuration, monitoring, and performance and capacity management. For development, the software factory runtime allows adding new software factories that use these extensions to support architecture, dev and test activities.

    An example of using the common authoring and data across the multiple engines is health modeling. An architect can create a health model using Visual Studio designers and a modeling language that has domain-specific extensions to SML. The architect can store the health model in the CMDB, validating it against the policies reflected there. A developer can implement the health model in VS using a software factory that guides him through the process and helps him validate that the implementation is correct and complete. The developer can easily package the necessary data to hand this aspect of the application over to operations.
    The systems administrator can then easily consume and extend the health model, hooking it up to his monitoring tools. The model can capture current and alternative future states of the application portfolio and data center to enable impact analysis and architectural what-if scenarios. Application planning can factor in both the development and operational aspects of projects, with full knowledge of the future state of the infrastructure and the ability to specify future-state changes against the plans of record. An architect can analyze the effect of desired changes (increased load, changed capabilities, changed policy, etc.) against models of the datacenter as it is and as it is expected to be based on planned changes. This is made possible by modeling tools in VS opening and running validation against data from the CMDB. The PMO can simulate the complex effects of changes as they impact, and are impacted by, schedule and resource dependencies.

    Tighter integration of workflow and agents will allow automated diagnostics. It will be possible to have lean agents always running on production servers that, based on heuristics, wake up when needed to capture deeper diagnostics without operator intervention. Rules can be used for automatic trace routing to development. Similar automation can make patches available to operations, with notification.

    Virtual machines will enable automated test, staging and deployment. Production configurations can be captured in virtual machines, whether they are managed through virtualization or rolled up from physical servers (P2V). These production configurations can form a test library available to the app dev team. Build automation and test case management will handle provisioning the VMs from application models, deploying the software under test to the VMs, and executing the tests. The tested VMs can then be delivered to operations, with application models, for a push into Operations.
  • Visual Studio 2012 helps you create solutions that take advantage of the platforms your users prefer while integrating with your core platform technologies. With Team Foundation Server/Service you get best-in-class application lifecycle management tools that empower your team to deliver compelling applications that delight your end users.
  • Traceability is much wider in scope: it starts from requirements and finishes at the deployed features of production systems. From Wikipedia: «Traceability is the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable», and for software it «refers to the ability to link product documentation requirements back to stakeholders' rationales and forward to corresponding design artifacts, code, and test cases». The slide illustrates a portion of this in the way we homogeneously identify process and artifacts. The same unique version identifier is applied to: builds, assemblies (DLLs), installer packages (MSIs) and deployed products (Control Panel\...\Programs and Features). From the build, TFS can backtrack to source code changes and to Work Items; a Sprint Backlog Item is just one Work Item type. Think of a Bug: we may trace in which version a bug fix is present.
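A minimal sketch of that idea: one version identifier, computed once per build, is stamped onto every artifact so that a deployed product can be traced back to its build. The script, variable names and naming conventions below are illustrative assumptions, not the deck's actual build process:

```shell
#!/bin/sh
# Hypothetical build step: derive a single unique version identifier
# and reuse it for the build label, the assembly version and the MSI
# name, so every artifact traces back to the same TFS build.
MAJOR=1
MINOR=0
BUILD=111        # e.g. an incrementing TFS build counter (assumption)
REVISION=2

VERSION="$MAJOR.$MINOR.$BUILD.$REVISION"

BUILD_LABEL="MyProduct_$VERSION"     # TFS build name (assumed convention)
MSI_NAME="MyProduct-$VERSION.msi"    # installer package file name

echo "$VERSION"
echo "$BUILD_LABEL"
echo "$MSI_NAME"
```

With the same identifier on the build, the DLLs and the MSI, TFS can walk from a version seen in Programs and Features back to the changesets and Work Items behind that build.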
  • Low level hook
  • Modern developer workflow

    1.
    2. © Aaron Bjork
    3.
    4. Product Backlog → Sprint Backlog → Sprint → Working increment of the software
    5. Source: VersionOne, State of Agile Development Survey Results 2011. Two-thirds of respondents work at companies that have adopted agile across 3 or more teams. Scrum or Scrum variants continue to make up more than two-thirds of the methodologies being used, while Kanban has entered the scene this year as a meager player. The only category that saw growth this year was Custom Hybrids (9%, up from 5%).
    6. [Diagram: a CMDB at the center connecting TFS (ALM), Project Server (PMO) and Ops; shared data includes metrics, work products, models, policies and compliance; flowing out are datacenter models, automated diagnostics, management packs, policy templates, capacity models and tested, configured VMs]
    7.
    8. Team Foundation Server / Team Foundation Service
    9. .1 Feb 11
    10. Source: InCycle Software
    11.
    12. [Diagram: git-tf bridge between a local git repo and TFS: git commit, git pull, git push on the git side; tf get, tf checkin on the TFS side; git tf pull and git tf checkin across the bridge]
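Slide 12's diagram shows the git-tf bridge between a local git repository and TFS. A sketch of that round trip follows; the local half uses plain git, while the git-tf commands, which need a live TFS server, are left as comments with an assumed collection URL:

```shell
#!/bin/sh
# git-tf bridge workflow sketch. The local half is plain git; the
# commented git-tf commands exchange commits with a TFS server.
set -e
repo=$(mktemp -d)
cd "$repo"

# One-time setup against TFS (server URL and path are placeholders):
# git tf clone http://tfs:8080/tfs/DefaultCollection $/Project/Main

git init -q .
echo "hello" > readme.txt
git add readme.txt
git -c user.name=dev -c user.email=dev@example.com commit -q -m "local work"

# git tf pull      # fetch the latest TFS changesets and merge them
# git tf checkin   # turn local commits into a TFS changeset

git rev-list --count HEAD    # one local commit so far
```

The point of the bridge is that developers keep the fast local git loop (commit, branch, merge) while the team's canonical history stays in TFS version control.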
    13. [Diagram: basic (two-branch) release branching with an emergency hotfix, showing branch (B), forward integration (FI) and reverse integration (RI) operations. Read The Free Manual]
    14. Source: Jez Humble © 2010; source: unknown
    15. Red → Green → Refactor
    16. Dev, Operations, Customer
    17. Read The Free Manual
    18. Source: Applied Software Measurement, Capers Jones © 1996
    19. [Diagram: the test pyramid (Unit at the base, then Service / Business Layer / Business Logic, UI at the top); lower layers are more numerous, solid and fast, while the top is brittle, expensive and time consuming. Source: Mike Cohn © 2010]
    20. GUI Test, End-to-end Test, Workflow Test, Integration Test, Business Logic Test, Unit Test
    21. .NET: MSTest, NUnit, xUnit; C++: MSTest, gtest, xUnit++; JavaScript: QUnit, Jasmine; BDD: MSpec, SpecFlow
    22. [TestClass]
        public class TestStockAnalyzer
        {
            [TestMethod]
            public void TestContosoStockPrice()
            {
                // Arrange:
                // Create the fake stockFeed:
                IStockFeed stockFeed = new StockAnalysis.Fakes.StubIStockFeed() // Generated by Fakes.
                {
                    // Define each method:
                    // Name is original name + parameter types:
                    GetSharePriceString = (company) => { return 1234; }
                };
                // In the completed application, stockFeed would be a real one:
                var componentUnderTest = new StockAnalyzer(stockFeed);

                // Act:
                int actualValue = componentUnderTest.GetContosoPrice();

                // Assert:
                Assert.AreEqual(1234, actualValue);
            }
            // ...
        }
    23. Step 1: Create your scenario. Step 2: Write that scenario in English. Step 3: Translate the English to code. Step 4: Create code so it works. Step 5: Run your test for feedback while you code. Step 6: Approve the result so it continues to work. Step 7: Change the requirement. Step 8: See the new solution. Step 9: Re-approve so it continues to work.
    24. [Diagram: Coded UI test architecture: Test Runner on top of Coded UI Code/XML, Recorder/Playback & API, a Technology Abstraction Layer (TAL), and low-level hooks for MSAA/UIA, Web and third-party technologies]
    25. ❶ ❷ ❸ ❹ ❺
    26. Mark each step as passed / failed; file an information-rich bug
    27.
    28. (1) Get source; (2) compile projects; (3) copy build to running environment; (4) run deployment scripts for each machine; (5) create environment snapshot; (6) execute automated tests; (7) send test results; (8) publish results to Team Foundation Server. [Diagram: a virtual environment with a VMM agent on the VM host and test agents on a Web Server (VM) and a Database Server (VM)]
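The eight steps on slide 28 can be sketched as an orchestration script. All step bodies here are stubs that only log; a real lab-management build would call out to the build, virtualization and test tooling (the commands named in comments are placeholders, not actual tool invocations):

```shell
#!/bin/sh
# Stubbed orchestration of the slide's 8 lab-workflow steps.
# Each step only logs and counts; real steps would shell out to
# source control, the compiler, VMM and the test agents.
STEPS_RUN=0
step() {
    STEPS_RUN=$((STEPS_RUN + 1))
    echo "step $1: $2"
}

step 1 "get source"                          # e.g. tf get
step 2 "compile projects"                    # e.g. msbuild Solution.sln
step 3 "copy build to running environment"
step 4 "run deployment scripts for each machine"
step 5 "create environment snapshot"         # VMM checkpoint of the VMs
step 6 "execute automated tests"             # test agents inside the VMs
step 7 "send test results"
step 8 "publish results to Team Foundation Server"
```

The snapshot in step 5 is what makes a failed run reproducible later: the team can restore the exact VM state that the tests saw.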
    29. on-production-server.aspx
    30. Closing the Feedback loop
    31. IDC: IT PPM Market Landscape, December 2010; Evaluating VS2010
    32.