This document discusses how Application Lifecycle Management (ALM) processes and tools can be used optimally when developing applications for the Microsoft Azure cloud platform. It describes how Visual Studio 2010 and Team Foundation Server 2010 can be used together with Azure to improve productivity, quality, and collaboration during design, development, testing and deployment of cloud applications. Specific ALM scenarios for Azure are presented that range from individual developers to larger teams incorporating development, testing and operations roles.
Computer Aided Applications Design (CAAD) installs a near-real-time method of authoring business software applications. It differs from previous systems and methods (such as Rapid Applications Development, Agile and Workflow) by morphing the role of project manager, business analyst and developer into a single role competency. This is made possible by a new ‘see-no-code’ form of apps design and deployment tooling that can de-skill the life-cycle of applications development formed around a unifying tool-kit and common skills competency.
This 150-page report from PSFK Labs describes 16 trends within 4 macro themes related to the future of work. It identifies 64 examples of these trends being implemented, illustrates 12 future work concepts, and provides reactions from executives, research, and 80 recommended next steps. The report aims to help companies understand how work is changing and evolve accordingly.
Ethos Enterprise Portal Product Features Rev1 - Bassam AlHakim
The document discusses features of Liferay's web content management, portal, and office collaboration software. It provides over 60 out-of-the-box tools, allows single-click configuration, and integrates with existing systems through its SOA framework. Key benefits include low total cost of ownership, high return on investment, and increased flexibility, productivity, and security for employees, teams, and companies.
This document discusses the challenges of estimating projects for cloud computing applications. It notes that cloud computing is still a new technology with different characteristics than traditional application development. Key challenges for estimation include lack of experience with cloud technologies, new development approaches like agile methodology, and differences in database technologies which are often non-relational. The document provides an overview of cloud computing models and types of cloud application development to provide context on where estimation difficulties may occur.
Our vision is clear. We are here to change the way enterprise software is acquired, hosted and maintained. Today’s technology offerings are evolving to meet changing customer requirements and market demographics. Enterprise software is one of the last technologies prone to long evaluation cycles, huge deployment costs and risky implementations. While most software vendors provide freemium purchase models, ERP and other applications are woefully out-of-date. Business moves too quickly. And, a new generation of professionals views the RFP selection process as a relic from another era. It doesn’t need to be that way.
Once the software is installed, its capabilities should be boundless. While SaaS offerings have changed the financial dynamic of purchasing software it has created compromises in customization and extensibility. Our vision is different. We foresee the need to add operational and performance insight, a vast array of tools to speed implementation and enhance usability, a catalog of complementary software to meet the unique needs of businesses through an app store concept, and a social fabric to truly enrich application functionality. With so much innovation creating new frontiers, why confine your enterprise applications to yesterday’s traditions?
EMA's perspective on enabling development and QA teams with high quality tools that deliver visibility to WMQ messages. Nastel's "freemium" AutoPilot® On-demand for WebSphere MQ gives these teams access to a production-grade MQ diagnostics solution using a web browser, and without impacting production systems.
Introduction to CAAD Codeless Applications Development Methodology - Newton Day Uploads
This is an article I produced previously for Encanvas that maps out the CAAD methodology for codeless software development. It's a comprehensive methodology that demonstrates I think that analysts authoring situational applications still need skills and methods. Will the day come when users do all of this themselves? I'm big on the idea of humanizing IT so I kinda hope so, but realistically we have a long way to go before then.
Microsoft SharePoint 2013 contains significant improvements in key areas such as mobility, productivity, social capabilities, search, and websites. Some of the major new features include improved support for mobile devices like iPad, a more robust social networking platform integrated with Yammer, more powerful search capabilities, and enhanced tools for collaboration and productivity. The update also includes an app store and improved digital asset management for websites. Microsoft aims to position SharePoint 2013 as a leader in enterprise content management and web content management with these updates.
This document discusses integrating user-centered design (UCD) and software engineering (SE) processes. It proposes combining the strengths of both approaches by using a single artifact, like a storyboard, to focus on client requirements, design the user interface, and shorten test cycles. The authors describe their experience developing a hospital administration software using an integrated UCD-SE process that employed a storyboard to capture requirements and verify functionality throughout development. They argue this approach can develop software faster while continually aligning with user needs.
Red Hat SOA: The complete guide provides an introduction to Red Hat's approach to service-oriented architecture (SOA). It explains that Red Hat believes SOA should be simple, open, and affordable. It delivers open source engines, frameworks, stacks, and components to help organizations realize the benefits of SOA. Red Hat subscriptions also provide enterprise-class support while avoiding expensive proprietary licensing fees. The guide outlines how Red Hat works with customers and open source communities to drive innovation and ensure reliable, relevant solutions.
DESIGN OF A MULTI-AGENT SYSTEM ARCHITECTURE FOR THE SCRUM METHODOLOGY - ijseajournal
The objective of this paper is to design a multi-agent system architecture for the Scrum methodology. Scrum is an iterative, incremental framework for software development which is flexible, adaptable and highly productive. An agent is a system situated within and a part of an environment that senses the environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future (Franklin and Graesser, 1996). To our knowledge, this is the first attempt to include software agents in the Scrum framework. Furthermore, our design covers all the stages of software development; alternative approaches were restricted to the analysis and design phases. This Multi-Agent System (MAS) Architecture for Scrum acts as a design blueprint and a baseline architecture that can be realised into a physical implementation by using an appropriate agent development framework. The development of an experimental prototype for the proposed MAS Architecture is in progress. It is expected that this tool will provide support to the development team, who will no longer be expected to report, update and manage non-core activities daily.
IVCi is a leading provider of collaboration solutions designed to bring people together, no matter where they are located or what technology they have access to. Our mission is to enable our customers to improve their business and their bottom line by unleashing the collective power of their people through collaboration.
Teampark and SharePoint 2010 social collaboration - Albert Hoitingh
The document discusses social collaboration using SharePoint. It defines social as a gradient ranging from implicit networks to fully crowd-sourced collaboration. It highlights how SharePoint 2010 enables social connections through profiles, presence information, notes, insights into organizations, and social networks. It allows for social content like blogs and wikis, social feedback through ratings and tagging, and social search based on relevance and social distance. The document presents TeamPark as a trajectory for SharePoint implementation with four phases - awareness, strategy, implementation, and achieving an active collaboration platform. It notes that in reality, multiple implementations and platforms can be active simultaneously while awareness and strategy phases may be discounted.
Microsoft Windows Azure - SharpCloud Manufacturing Triples Productivity Case ... - Microsoft Private Cloud
1) Software startup sharpcloud developed a social networking-based tool for corporate strategy planning but needed global scalability beyond its means.
2) By developing its tool on the Windows Azure platform, sharpcloud gained the infrastructure needed to scale worldwide while only paying for resources used.
3) This allowed sharpcloud to triple productivity, save $500,000 annually, and gain credibility with major customers like Fujitsu.
This document provides a 10-step guide for planning, building, deploying, and managing a service-oriented architecture (SOA). The steps include: thinking big but starting small with initial projects; collaborating with business stakeholders to map and rationalize key business processes; surveying existing technologies and applications; connecting the first services by identifying redundancy and building shared services; choosing and deploying a registry to publish services; and more steps related to governance, security, messaging infrastructure, service management, and orchestration. The guide emphasizes taking an iterative approach focused on business processes and using existing technologies when possible.
This portfolio showcases the interior design work of Temenuzhka Zaharieva, an interior designer based in Bulgaria. She provides services such as furniture design, interior design, and solving space problems for clients. Examples are given of projects before and after her design work through photos in her portfolio.
The document discusses how to work agilely with Visual Studio 2010 by using its process templates. It describes planning a project with release and sprint planning meetings to estimate and prioritize a product backlog of user stories. Sprints add product backlog items to a sprint backlog with tasks and estimates. Daily scrums track progress, and sprint reviews demo delivered value while retrospectives improve agile practices like testing early, continuous integration, and refactoring.
TMap® meets Visual Studio®, München - http://sogeti.de/536.html
The demos can be found on YouTube:
01: http://www.youtube.com/watch?v=mVaRJes_Qaw
02: http://www.youtube.com/watch?v=3KR5Sxile14
03: http://www.youtube.com/watch?v=39xVnl02tHQ
04: http://www.youtube.com/watch?v=TkdVoXJI8KU
05: http://www.youtube.com/watch?v=V1RiN3EDcw4
This document discusses applying automated application lifecycle management (ALM) practices to Azure cloud development. It outlines 5 scenarios with increasing levels of automation: 1) developers only, 2) adding manual testing, 3) adding automated deployment to a staging environment during builds, 4) adding automated testing during builds, and 5) fully automated testing, building, deployment, and acceptance testing integrated with operations. The document demonstrates configuring automated deployment and testing with Microsoft tools like Visual Studio, Test Manager, and PowerShell for Azure. While increasing automation brings benefits, it also requires more complex build workflows and management of certificates and configurations.
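The five automation scenarios above can be sketched as a pipeline whose enabled stages grow with each level. This is a minimal illustrative sketch; the stage names, scenario mapping, and `run_pipeline` function are assumptions for exposition, not the Microsoft tooling the document describes.

```python
# Sketch of the five ALM automation levels: each level enables more
# pipeline stages, from build-only up to full acceptance testing.
STAGES = ["build", "deploy_to_staging", "automated_tests", "acceptance_tests"]

SCENARIOS = {
    1: ["build"],                                      # developers only
    2: ["build"],                                      # manual testing is outside the pipeline
    3: ["build", "deploy_to_staging"],                 # automated deployment during builds
    4: ["build", "deploy_to_staging", "automated_tests"],
    5: ["build", "deploy_to_staging", "automated_tests", "acceptance_tests"],
}

def run_pipeline(level):
    """Run the stages enabled for the given automation level, in order."""
    executed = []
    for stage in STAGES:
        if stage in SCENARIOS[level]:
            executed.append(stage)   # stand-in for the real build/deploy/test step
    return executed
```

The ordering in `STAGES` reflects the document's point that higher automation levels compose onto the lower ones rather than replacing them.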
Le cloud vu par des experts - 9 POV - curation par Loic Simon - Club Cloud des Partenaires - Club Alliances
Nine expert viewpoints on the cloud: a selection of articles drawn from the monitoring and curation carried out by Loic Simon on behalf of the members of the Club Cloud des Partenaires.
The document discusses the differences between Agile and Scrum methodologies for software development. It states that Agile is a broader framework that contains basic principles adopted by different methods, including Scrum. Scrum is described as a more independent methodology focused on project efficiency. The document then provides more details on the Scrum methodology, describing elements like Sprints (iterative development cycles of 1-4 weeks), daily stand-up meetings, and product backlogs to plan work. It notes that while Scrum is very popular, it can face scaling challenges with very large teams. Dividing teams into multiple Scrum of Scrums is proposed as a potential solution to address those challenges.
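The Scrum planning elements summarized above (a prioritized product backlog feeding a capacity-bounded sprint backlog) can be sketched in a few lines. All names here (`BacklogItem`, `plan_sprint`, the sample stories and point values) are illustrative assumptions, not from any Scrum tool.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    priority: int      # lower number = higher priority
    points: int        # effort estimate

def plan_sprint(product_backlog, capacity):
    """Pull the highest-priority items that fit within the team's capacity."""
    sprint_backlog = []
    remaining = capacity
    for item in sorted(product_backlog, key=lambda i: i.priority):
        if item.points <= remaining:
            sprint_backlog.append(item)
            remaining -= item.points
    return sprint_backlog

backlog = [
    BacklogItem("User login", 1, 5),
    BacklogItem("Reporting dashboard", 3, 8),
    BacklogItem("Password reset", 2, 3),
]
sprint = plan_sprint(backlog, capacity=8)
# Picks "User login" (5 pts) and "Password reset" (3 pts); the dashboard waits.
```

The scaling challenge the document mentions shows up here too: with very large teams, a single capacity-bounded backlog becomes a bottleneck, which is why the Scrum of Scrums split is proposed.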
Different Methodologies Used By Programming Teams - Nicole Gomez
The document discusses different programming team methodologies including:
- System development life cycle (SDLC), which is used for large projects and includes waterfall models. It takes time but ensures high quality.
- Agile methodology, designed for small projects, combines methods for faster development that changes with customer needs.
- Extreme programming allows close communication between developers and customers so the software can change rapidly based on customer feedback.
Overall agile methodologies seem to have advantages over SDLC and extreme programming by allowing faster development that can change with customer desires.
Tailoring A Clouded Data Security Life Cycle Essay - Marisela Stone
The document discusses the pros and cons of using an agile methodology for software development projects. It begins by stating that there are many different software development methodologies to choose from, each with their own advantages and disadvantages. It goes on to specifically examine the pros and cons of the agile methodology. Some benefits mentioned are its ability to adapt to changing requirements and provide working software frequently. Potential downsides include higher initial costs and more complex planning. The document concludes by noting agile may be best suited for environments where requirements are uncertain or likely to change.
This document provides an overview of MLOps (Machine Learning Operations) including:
- What MLOps is and why it is needed to automate and scale machine learning models in production environments.
- Common bottlenecks like siloed teams and tools that limit organizations' machine learning abilities.
- The typical 7 steps in the MLOps process including data preparation, experiments, model validation, deployment, monitoring, and retraining.
- How MLOps software can help organizations unlock business potential by accelerating time to production, improving collaboration, and optimizing model performance and governance over the long term.
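The MLOps steps listed above form a loop rather than a straight line: a model that fails validation or degrades in monitoring re-enters training. A minimal sketch of that cycle follows; the function names, stand-in metrics, and threshold are assumptions for illustration, not from the source document.

```python
# Illustrative MLOps loop: prepare -> train -> validate -> deploy,
# with retraining re-entering the cycle when validation fails.

def prepare_data(raw):
    return [x for x in raw if x is not None]          # drop missing values

def train(data):
    return {"model": "v1", "trained_on": len(data)}   # stand-in for real training

def validate(model, threshold=0.9):
    accuracy = 0.95                                    # stand-in evaluation metric
    return accuracy >= threshold

def mlops_cycle(raw_data):
    """One pass through the core steps; a failed validation triggers retraining."""
    data = prepare_data(raw_data)
    model = train(data)
    if not validate(model):
        return mlops_cycle(raw_data)                   # retrain on failure
    return {"deployed": True, **model}

result = mlops_cycle([1, None, 2, 3])
# -> {'deployed': True, 'model': 'v1', 'trained_on': 3}
```

In a real system the monitoring step would feed drift metrics back into this loop, which is the automation the document argues siloed teams and tools cannot sustain manually.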
Business Need And Current Situation Essay - Jill Lyons
The document discusses Siltronica's move from the traditional Waterfall methodology to an Agile approach like Scrum for software development. It explains that Agile is preferable in most situations as it allows for faster, incremental delivery of value to stakeholders and greater flexibility to respond to changing business needs. It also briefly mentions that Siltronica began offshoring some IT capabilities to other countries in the early 2000s.
Digital transformation requires organizations to be agile and responsive to changing business needs. Large organizations can adopt agile practices like Microsoft has done by implementing frequent feedback loops and updates. Adopting a hybrid multi-cloud strategy allows organizations to have flexibility, choice, and consistency across environments which provides agility and responsiveness needed for digital transformation. Agile is a journey that all organizations are on to continuously innovate, adapt processes and culture, and deliver value to customers.
Selection And Implementation Of An Enterprise Maturity... - Jenny Calhoon
The passage discusses documentation in agile software development processes. While documentation is considered important, traditional agile processes provide little internal documentation and rely heavily on verbal communication. This can lead to lapses in memory over time and make it harder to understand design rationale, especially with team turnover. The main objective of documentation is to instruct those maintaining or upgrading the system about its structure, functionality, operation, and design. Documentation is important for stakeholders like users, testers, and project managers as well.
UCD specialists may initially be concerned about how user experience design fits within agile software development processes. This article examines the experiences of three UCD specialists working on their first agile projects. It compares agile literature to these case studies on topics like justifying the need for UCD, understanding users, UI design, and usability evaluation. The goal is to paint a picture of how UCD and agile practices can successfully coexist within development teams.
DSG Best Practice Guide for NetSuite Implementation Success - Bootstrap Marketing
This document provides best practices for implementing NetSuite's cloud ERP solution based on the experience of Demand Solutions Group (DSG), the 2013 NetSuite Worldwide Solution Provider Partner of the Year. It outlines five things to implement, five pitfalls to avoid, and five things to consider. The key lessons are that an implementation requires both NetSuite's software and an experienced partner like DSG that takes a business-first approach, has proven methodologies, and can build an effective project team. The goal is to select the right functionality and processes, avoid simply copying existing systems, and optimize the solution over time to meet evolving business needs.
The document discusses agile project management and various agile methodologies. It begins with explaining the need for agile approaches due to changes in how work gets done. It then provides background on agile origins and principles. Specific methodologies like Scrum are outlined, including Scrum roles, events, and processes. Other agile methods like XP, Crystal, FDD are also referenced. The document aims to introduce agile concepts at a high-level.
Integration Of UX Practices And Agile MethodologyAdvance Agility
UX stands for User Experience which is a concept in design that is used by the design teams to create products that are relevant and user for the end customers. UX is an essential part of the process of product making as it focuses at customer satisfaction. This is also important as a positive user experience results in customer retention and also publicity for prospective clients for the brands. It is the responsibility of a UX designer to identify customer requirements through coordination. UX designers aim to make everyday products, technologies or services accessible, simple and user-friendly. The UX design process requires thoughtful planning, in-depth discussions and true feedback.
DevOps is an approach that promotes collaboration between development and IT operations teams. It aims to improve communication and collaboration across the software development lifecycle from design through deployment to support. DevOps values many of the same principles as Agile such as collaboration between cross-functional teams and an emphasis on automating processes to improve delivery of working software. It can be considered an evolution of Agile principles that covers the entire service lifecycle.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document discusses applying agile software development methodology in a dynamic business environment. It begins by defining the traditional software development life cycle and some common development methodologies. It then discusses the principles of agile development, focusing on the Agile Manifesto and Scrum methodology. Some key benefits of agile development discussed include continuous customer feedback, developing products faster through iterative releases, managing change through prioritized backlogs, and continuous risk management through short iterations. Overall, the document argues that agile methods allow for more flexibility and rapid response to changes that are needed in dynamic business environments.
DevOps aims to break down silos between development and operations teams through increased collaboration. It promotes automating processes across the entire application lifecycle. Achieving a DevOps "nirvana" requires a change in organizational culture, including tearing down silos, implementing transparency, and gaining executive support for continuous improvement. Barriers include resistance to changed roles and security concerns, but benefits include reduced costs, faster delivery, and improved collaboration.
In this whitepaper, - Ten Benefits of Integrated ALM - you will find the following ten compelling reasons why an organization must integrate these multiple lifecycle tools for an optimum single repository application development environment.
Platforms and Microservices - Is There a Middle Ground for Engineers and Tech...Dialexa
Your technology strategy is the key to executing successful digital transformation. But if you talk to engineers and strategists, there are opposing views on the best way to leverage technology.
While engineers might push for a pure microservices architecture, strategists may take a step back and consider the long-term implications of that decision on the enterprise. Is there a middle ground?
Our own VP of Engineering, Samer Fallouh, and Head of Technology Strategy, Russell Villemez, discussed this topic to see if there was some middle ground to drive innovation more effectively.
Full write-up: https://by.dialexa.com/platforms-and-microservices-is-there-a-middle-ground-for-engineers-and-tech-strategists
The document provides an overview of agile software development. It defines agile development as a collaborative approach where requirements and solutions evolve through self-organizing cross-functional teams. The document outlines several agile methodologies introduced in the Agile Manifesto in 2001 including Scrum, Extreme Programming, Crystal, FDD, and DSDM. It also discusses lean practices as part of the agile development approach and compares agile to traditional waterfall models. Finally, it covers advantages and disadvantages of the agile model and considerations for when it is best applied.
Teams need to move fast, every action which results in wait time must be minimized to zero. Teams need to move flexible, context changes must be easy adoptable by the team and the system they realize. Using Azure for their Environment and ALM needs helps them fulfill this need.
The document discusses various techniques for continuous feedback in agile development including storyboarding, prototyping, code reviews, user testing, and integrating development and operations. It provides details on using tools like PowerPoint for storyboarding, feedback apps to collect user input, and IntelliTrace to help debug production issues. The document also demonstrates how these techniques can help validate business needs, support teams, and elicit actionable feedback throughout the development lifecycle.
The document discusses setting up test infrastructure for different types of testing. It covers setting up environments for unit testing, functional testing, acceptance testing, and load testing. Specific topics covered include configuring build and test agents, using virtual machines to execute tests in different environments, and using cloud services like Azure to generate load for load testing.
The document discusses test controlling and tracking topics including managing the test process, infrastructure, and products. It provides an overview of different reporting options for software quality including out of the box and customizable SQL Server and Excel reports. Specific reporting capabilities are demonstrated such as viewing test and code coverage results by build, tracking bugs by user story, and creating and resolving bugs. Custom report creation using relational databases and data warehouses is also covered.
During the specification phase of testing, required tests and starting points are specified to prepare for quickly executing tests when developers deliver the test object. The execution phase then obtains insight into quality through agreed upon tests. Different types of testing include acceptance, unit, functional, exploratory, and performance/load testing which validate both business needs and implementation and help both the product and team.
The document discusses test planning and outlines the key phases and activities in a test planning process. It emphasizes that an important part of planning is creating a test plan that is derived from an overall master test plan. The planning phase involves determining what will be tested based on business needs and risks, and managing the test process and different test types. It stresses the importance of coordination across test levels, phases, and types using a master test plan to avoid duplicative testing.
1) Complex software is everywhere and software development is difficult, time-consuming, and expensive.
2) There are often large gaps in software development processes which creates risks like inconsistent processes, lack of productivity reporting, and unpredictable development.
3) Visual Studio 2012 aims to address issues in software development through features like integrated testing tools, storyboarding for early feedback, load testing, and monitoring of applications in production.
Collaboration tools and practices are critical for firms to effectively execute business strategies and create custom applications. Testing early and often through techniques like test-driven development, acceptance test-driven development, continuous integration, refactoring, pair programming and exploratory testing helps ensure collaboration results in high quality software. Adopting agile principles like emergent design, flexibility, and practices such as scrum and feature-driven development enables collaboration magic.
Test Tooling in Visual Studio 2012 an overviewClemens Reijnen
The document discusses different types of software testing including unit tests, functional tests, load tests, exploratory tests, and user acceptance tests. It provides examples of each type of test and explains when they are used in the development process. The document emphasizes that each type of test supports the previous tests and all test types can be supported by Visual Studio 2012.
Agile teams find it hard to get the testing effort in sync with the other development activities. Not only development tests are executed during sprints, all other kind of testing activities are part of done. This session will give guidance how Microsoft Visual Studio ALM tools can support agile teams. How to run sprints and get testing done in a sprint.
TFS11 on Azure allows running real projects with multiple teams using Agile methodologies. It supports team and product backlogs to manage work at both the team and overall project level. Builds can be run either locally or on the Azure cloud build server. Testing can be planned, exploratory, or for bugs tracked in a backlog. Sprint reviews provide opportunities for feedback.
The document discusses features of Team Foundation Service including:
1. Authentication will transition from using Windows Live ID to supporting corporate identities through Active Directory Federation Services and other login options like Google, Yahoo, Facebook.
2. It allows users to collaborate on team projects, add backlog items to define work, plan sprints to select items for a team to work on, and run the sprint to select tasks and execute work.
3. Additional features include source control, build management, testing, reporting and integrating with SharePoint.
Coded UI - Test automation Practices from the FieldClemens Reijnen
CodedUI tests within Visual Studio makes it easy for developers together with tester to create, fully-automated, functional user interface tests. These tests alert the team in an, easy to execute, automated way about regressions. CodedUI tests are easy to create for different UI technologies. But, all kinds of test automation needs an investment. To get a good return on this test automation investment you need to create CodedUI tests in a robust manner which can sustain changes to your application over time.
In this session you will see how maintainable CodedUI tests can be created and how the test infrastructure needs to be configured for efficient execution.
The document provides an agenda for Day 1 of a testing practices course using ALM tools. The day will include introductions, setting up environments, an overview of application lifecycle management and Visual Studio, test planning, test case management, test execution, and bug tracking. Hands-on labs are scheduled throughout the day to provide practical experience with the tools.
This document outlines the agenda for a day 2 MTLM training. The topics covered include recapping test case planning, management, and execution. Creating basic CodedUI tests from test cases and manual recordings will be demonstrated. Customizing the UIMap and code for optimization is also on the agenda. Data driven tests and assertions will be discussed. Troubleshooting CodedUI, common practices and questions will be addressed. Configuring builds to execute CodedUI tests from Visual Studio and Microsoft Test Manager will be shown. Associating automation with test cases and executing from MTM is included. Additional topics are MTLM, Scrum methodologies, lab management, test analytics, and using MTLM with Azure projects.
This guide is an extract from the two and three day course provided by me. It spans the complete testing lifecycle and the tool usages. It will look at the infrastructure implications and testing practices in formal and in agile teams. But, the main focus stays on the usages of the Microsoft Visual Studio testing tools, the knowledge you need to get starting with it, the practices you must have to work with it in real live and how you can bend the tools, with extensibility and normal use to your team needs.
The course and this guide is work in progress. It is not a testing training (I expect you already have testing knowledge), if you need that test process information I refer to the TMap website from Sogeti where you can find tons of information. This training guide follows the TMap testing lifecycle.
BETA work in progress, I add every training new material and tune current material.
This document discusses using Scrum and Visual Studio 2010 for agile software development. It provides an overview of how to plan a Scrum project using Visual Studio templates, including organizing product backlogs, sprints, daily scrums, and sprint reviews. It also lists common agile practices like test-driven development, continuous integration, and refactoring that can be applied.
Talk Through Sogeti ALM 4 Azure
1. INCREASE PRODUCTIVITY AND
SOFTWARE QUALITY WITH AZURE AND
VS2010 ALM
Faster, better, with higher quality: design, develop, build, test and deploy Azure cloud
applications with Application Lifecycle Management.
AZURE PROVIDES NEW OPPORTUNITIES FOR BUSINESSES. MANY ORGANISATIONS ARE STARTING TO
DEVELOP CLOUD APPLICATIONS. MICROSOFT VISUAL STUDIO 2010 AND TEAM FOUNDATION SERVER
2010 ARE THE APPLICATION LIFECYCLE MANAGEMENT TOOLS TO DEVELOP AZURE APPLICATIONS.
THIS PAPER DISCUSSES HOW APPLICATION LIFECYCLE MANAGEMENT PROCESSES AND TOOLS CAN BE
USED IN AN OPTIMAL WAY, BY MAKING USE OF THE KEY CHARACTERISTICS OF THE AZURE PLATFORM,
TO RAISE THE PRODUCTIVITY AND QUALITY OF SYSTEMS DEVELOPED FOR THE CLOUD.
ALM 4 Azure
ALM 4 Azure is a presentation covering different levels of ALM automation for
Azure cloud services development.
An important notice: the topic covers modern software development tools
and practices which you need to know while developing systems that run
on the Azure cloud (4 Azure). This is something completely different from,
but related to, ALM on Azure, where we use tools out of the cloud (SaaS)
to execute modern system development. You can use them together 'on'
and 'for' Azure, but this presentation only covers the 'for' Azure side: how do we
need to execute our practices and use our on-premise tools for cloud
services?
What is Application Lifecycle Management, why do we want it, and what are
the goals we pursue?
What are the specific characteristics of Azure and cloud computing? Where
can these characteristics help us reach the goal, and where do they give us
some challenges?
The main topics on the agenda are five different ALM 4 Azure scenarios, with
the supporting technologies explained.
2. Application Lifecycle Management
All tool vendors and methodologies have their own definition of Application
Lifecycle Management.
What can we learn from these definitions?
A wide variety in scope. ITIL is focused on the operational side of ALM, the
Wiki and Forrester descriptions are more focused on the Software
Development Lifecycle (SDLC), and Microsoft takes a bigger scope with
business, development and operations, although the tooling and the
assessment are focused on SDLC. Borland also talks about a wider scope,
when you look at its RUP-like model. But its main plus is the focus on
"many processes and many tools", so it should fit more than one
environment.
Besides this difference in scope, everybody agrees on terms like measurable,
predictable, traceable, manageable, monitored, and so on. It smells like "in
control".
We use this image when defining ALM, and it is about being "in control", but it is
even more about communication. It is about how the different ALM roles,
who are all responsible and accountable for the success of a software
development project, communicate in a seamless manner.
When talking about Application Lifecycle Management (ALM), terms like
accountability, governance and compliance are used. All of them refer back
to "working together": how do we work together during the application
lifecycle? ALM is not about tools, it's about working together. Working
together seamlessly and flexibly while staying in control, being measurable
and responsible. All the roles in the application lifecycle have a part in this
collaboration effort. Tools can help, but they aren't the core driver.
There are lots of examples of wrongly interpreted business requirements,
miscommunication between development and test, applications that
won't run in production, and operations teams that don't understand the
applications. All of them result in more work, more faults, more costs, or even
only costs and no application because the project was unplugged. Most of these
project failures and extra costs come down to a slip in communication.
Having a strategy for how people collaborate within the lifecycle is one
important piece of Application Lifecycle Management. Most organizational
units already have some kind of process or methodology in place. Having an
approach for sharing information and ideas from each role's own point
of view with the other roles is a key success factor for lowering
costs and raising the business value of IT investments.
Tools can help with this goal. Having gear in place which supports and
stimulates collaboration is a driver for successful Application Lifecycle
Management. But without a plan for how people should collaborate and
communicate, tools are useless.
Creating new modes of collaboration supported by technology can only be
done by addressing the human aspect. More specifically, we need to address
some of the worries and obstacles people encounter when collaborating
using technology.
The three most important concerns are:
Trust. Trust is a condition for social interaction. People will only work
with people, companies, tools and information they know they can
trust. Before we can expect collaboration to take off online, there must
be a way for people to get this “trust.” And a topic closely associated
with trust when it refers to people is Identity.
Collaborative culture. If one individual is the greatest collaborator in
the world, he or she is probably not getting anywhere. Only when all
people involved are part of the same collaborative culture will new
levels of creativity and productivity be reached. A collaborative culture
consists of many things, including:
o Collaborative leadership;
o Shared goals;
o Shared model of the truth; and
o Rules or norms.
Reward. Changing the way people work takes effort, so it must be
clear for the parties involved what they will gain, at a personal level,
from collaborating in a new way. Surprisingly, a "reward" for successful
collaboration is most often of a non-financial nature.
When working together with work packages, seamless communication
needs to address these challenges.
All the different roles in the Application Lifecycle create artifacts, products.
These products also need to work together; they need to fit, as a single point of
truth. Requirements, designs, the business case, the test cases, the source
files and the operational information all need to work together as one
consistent product. When one gets out of sync, the involved roles
should get a notification. Tools can help with this.
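The single-point-of-truth idea can be sketched as a small model. The names below are hypothetical, not a real ALM server API; a tool like Team Foundation Server tracks such links through work items, but the underlying check is the same: each artifact records which version of the requirement it was derived from, and anything that has fallen behind triggers a notification.

```python
# Toy model of "single point of truth" drift detection.
# All names here are illustrative; real ALM tooling does this
# through linked work items, not through this sketch.

class Artifact:
    def __init__(self, name, derived_from_version):
        self.name = name
        self.derived_from_version = derived_from_version

def out_of_sync(requirement_version, artifacts):
    """Return the artifacts built against an older requirement version."""
    return [a.name for a in artifacts
            if a.derived_from_version < requirement_version]

artifacts = [
    Artifact("design.doc", derived_from_version=3),
    Artifact("testcases.xls", derived_from_version=2),
    Artifact("orders.sln", derived_from_version=3),
]

# The requirement is now at version 3: the test cases lag behind,
# so their owner should get a notification.
stale = out_of_sync(3, artifacts)
```

The point of the sketch is the direction of the check: consistency is verified against the requirement, the single source of truth, rather than artifacts being compared pairwise.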
The Visual Studio 2010 family is made up of a central team server and a
small selection of client-side tools. The team server, Team Foundation
Server 2010, is the backbone of application lifecycle management,
providing capabilities for source control management (SCM), build
automation, work item tracking and reporting. In this release Microsoft
expanded the capabilities of Team Foundation Server by adding a true test
case management system and extended it with Lab Management 2010, a
set of capabilities designed to better integrate both physical and virtual labs
into the development process.
On the client side, developers can choose between Visual Studio
2010 Professional, Premium or Ultimate. For testers and business analysts
there is Test Professional, a new integrated test environment designed with
manual testers in mind.
For those people who participate in the development efforts, but for whom
Visual Studio, the IDE, is not appropriate, including Java developers,
project managers and stakeholders, the Team Foundation Server
extensibility model enables alternative interfaces. These include both Team
Explorer, a standalone tool built with the Visual Studio shell, and Team
Web Access. These tools enable anyone to work directly with Team
Foundation Server. And there are cross-product integration capabilities with
Microsoft Office® and Microsoft Expression, and with SharePoint Server through
the new SharePoint dashboards.
Azure
Windows Azure™ is a cloud services operating system that serves as the
development, service hosting and service management environment for the
Windows Azure platform. Windows Azure provides developers with on-
demand compute and storage to host, scale, and manage web applications
on the internet through Microsoft® datacenters.
Windows Azure has several unique characteristics as a platform.
1. Hosted services allow deploying to two identical but independent
environments: the staging environment and the so-called production
environment.
When you deploy a service you can choose to deploy to either the staging
environment or the production environment. A service deployed to the
staging environment is assigned a URL with the following format:
{deploymentid}.cloudapp.net. A service deployed to the production
environment is assigned a URL with the following format:
{hostedservicename}.cloudapp.net. The staging environment is useful as a
test bed for your service prior to going live with it. In addition, when you are
ready to go live, it is faster to swap VIPs (virtual IP addresses) to move your
service to the production environment than to deploy it there directly.
http://msdn.microsoft.com/en-us/library/gg433118.aspx
2. Guest OS versions are identical for every instance. In the configuration of
the hosted service the OS version is set, and every instance is built from
this same image. This results in a situation that is very hard to accomplish
on-premises: test, acceptance and production environments are
identical.
The Windows Azure guest operating system is the operating system that
runs on the virtual machines (VMs) that host your service. The guest
operating system is updated monthly. You can choose to upgrade the guest
OS for your service automatically each time an update is released, or you can
perform upgrades manually at a time of your choosing. All role instances
defined by your service will run on the guest operating system version that
you specify.
http://msdn.microsoft.com/en-us/library/ff729422.aspx
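The guest OS choice described above is made declaratively in the service
configuration. A sketch of what this can look like (role name, instance count
and the pinned version string are illustrative; see the MSDN link above for
the actual list of guest OS versions):

```xml
<!-- ServiceConfiguration.cscfg: every role instance is built from the same
     guest OS image. osVersion="*" opts in to automatic monthly guest OS
     upgrades; a fixed version string (the one below is only an example)
     pins the OS for manual upgrades at a time of your choosing. -->
<ServiceConfiguration serviceName="MyHostedService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="1" osVersion="*">
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```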
3. Don’t assume your state is safe. Instances (VMs) are recycled at
moments that are, from our perspective, random. Windows Azure ensures
that the application stays accessible, but locally stored information is not
preserved.
4. In-place upgrades. Windows Azure role instances can be upgraded easily.
Windows Azure organizes instances of your roles into virtual groupings
called upgrade domains. When you upgrade one or more roles within your
service in-place, Windows Azure upgrades sets of role instances according to
the upgrade domain to which they belong. Windows Azure upgrades one
domain at a time, stopping the instances running within the upgrade
domain, upgrading them, bringing them back online, and moving on to the
next domain. By stopping only the instances running within the current
upgrade domain, Windows Azure ensures that an upgrade takes place with
the least possible impact to the running service.
http://msdn.microsoft.com/en-us/library/ee517255.aspx
5. Clear environment costs.
Azure applications are developed locally. It is also possible to run the Azure
application in this local environment by using emulators.
The Windows Azure compute emulator enables you to run, test, debug, and
fine-tune your application before you deploy it as a hosted service to
Windows Azure.
http://msdn.microsoft.com/en-us/library/gg432968.aspx
The Windows Azure storage emulator provides local instances of the Blob,
Queue, and Table services that are available in Windows Azure. If you
are building an application that uses storage services, you can test locally by
using the storage emulator.
http://msdn.microsoft.com/en-us/library/gg432983.aspx
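Switching between the storage emulator and real Windows Azure storage is
typically just a configuration change. A sketch (the setting name
`DataConnectionString`, the account name and the key are illustrative
placeholders, not from this document):

```xml
<!-- Local testing against the storage emulator: -->
<Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />

<!-- The same setting pointed at a real Windows Azure storage account
     (account name and key are placeholders): -->
<Setting name="DataConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..." />
```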
When deploying the Azure application to the Azure platform, a package and
a configuration file need to be created. The package contains all the files,
and the configuration file holds information such as the guest OS the
service needs and other configuration settings.
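Creating the package and configuration file can be done from Visual Studio
or from the command line with CSPack, which ships with the Azure SDK. A
hedged sketch; the role and file names below are illustrative:

```
REM Package the service definition plus role binaries into a .cspkg,
REM and generate a matching .cscfg skeleton (names are illustrative):
cspack ServiceDefinition.csdef /role:WebRole1;WebRole1\bin /out:MyService.cspkg /generateConfigurationFile:ServiceConfiguration.cscfg
```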
Environment configuration.
To start developing Azure applications, and to run the application locally or
deploy and run it on the Azure platform, you need the following.
1. A version of Visual Studio 2010
2. The Azure SDK
http://www.microsoft.com/windowsazure/getstarted
3. A Windows Azure subscription
For the demos… you need more (for a Windows 7 environment):
- TFS Basic
- Microsoft Test and Lab Manager
- Build Service, controller and agent
- Test controller and test agent
- Visual Studio SP1
- Feature Pack 1 and 2
- Windows Virtual PC
- Image with test agent configured
- Powershell
- …
ALM 4 Azure
The main goal when configuring the technical tool support for your
processes is that it must support the same goals as those processes.
For example, say you execute an agile process with your development
team: the team is flexible and efficient, and can deliver new functionality
to the business at a predefined quality, repeatably and fast. The tools you
use must support and drive this goal. When your process of deploying and
configuring new functionality takes days, then even when you are capable
of realizing that functionality, your team still can’t meet the goals.
Let’s assume we execute an agile-like process and have a risk-driven
mindset. We can write down several goals the tools must support; every
team has its own goals, but the ones in this list are very common.
Every change in the technical tool support must support and drive towards
these goals.
No two teams are the same. This list of scenarios isn’t a maturity model; it
describes advancing levels of tool support.
Not every team wants to go through the knowledge gathering and the
hardware and software investment necessary to implement a specific
scenario. It is definitely a balance between effort, money and benefit.
1: Engineering only
The developer-only scenario is for really small teams: a single developer
who also does the testing. Most exercises and hands-on labs make use of
this scenario.
The engineers create functionality in Visual Studio 2010. The
source code is checked in to Team Foundation Server. Engineers can
make use of work items in TFS, but this isn’t necessary.
Source code is compiled and unit tests, if any, are executed. Other
code quality checks can be performed. Implementation checks with
the layer diagram are interesting but not necessary.
There are no real quality gates.
Engineers deploy the application from Visual Studio, either by
creating a package and configuration file or by using the ‘single
click’ deployment from within Visual Studio.
Soon on: http://www.youtube.com/user/clemensreijnen
A common deployment flow.
1. Local development in emulator environments.
2. Hybrid of local and Windows Azure, when storage is stable. Different
engineers can work against the same data source. The compute
emulator is almost identical to the hosted service environment on
Azure, but the storage emulator has some big differences compared
with Azure storage.
3. Everything in Windows Azure in staging.
4. Swap from Staging to Production.
• Debugging is not currently supported in Windows Azure; IntelliTrace is.
• Set breakpoints & debug in Local Development Fabric.
• Test initially with development storage, but test with Windows Azure
storage to test with large volumes of data whilst still keeping your
roles local for debugging
• Once you are happy with the Worker/Web Roles running locally deploy
everything to Staging and run tests in this environment
• Once all tests in staging pass, promote everything to production
• Worker roles in the “Staging” deployment are operational, and as such
will process messages from queues etc. You should design for this.
• Staging also costs money.
• Storage costs are far lower than compute costs. Test as much locally as
possible.
The developer-only scenario has a lot of benefit, and many organizations
start with it because it is painless. But it also has faults: errors can slip in
easily, and finding them is time-consuming.
Pro:
Easy installation and configuration
Single click deployment from VS2010
Con:
No collaboration
Easy deployment errors (configuration)
What about test and ops
2: Developer with manual tester
A somewhat bigger team with a specific test role. Developers have their
quality gates within the build. Testers analyze the quality of the system and
help the team with risk classifications.
It can get a bit challenging when entering this scenario. Testers and
engineers often have different approaches, different methodologies and
different goals, and now they have to start working together. Working with
work items and supported process templates will help the adoption, and so
will providing the team with the benefits the technical support can give
them when working together. But mainly it’s a cultural process (see the
first ALM slides): people have to work together and take shared
responsibility for the success of the system.
Engineers and testers work together with work items
In Visual Studio Team System 2010 all test roles are provided with clear and
better support within the application lifecycle. Testers no longer use their
own separate technical tools, but use integrated tools that are also used by
architects and developers, effectively tearing down the wall between
developers and testers.
But good tools are not enough. A clear separation of roles, tasks, and
authorizations is also necessary. Finally, and most importantly, a structured
approach determines how successful you are with your test strategy.
Take, for example, the role of the tester and the use of work items in
collaboration with engineers.
During the planning phase of the project, also called iteration 0 [first blue
piece], user stories are collected / brainstormed / defined; in VSTS this
information is captured in the ‘user story’ work item type.
During the planning of an iteration the team breaks down the user stories
[those selected for that iteration] into implementation tasks.
Within VSTS this is done in the implementation tab of the user story work
item, using the new 2010 functionality of hierarchies between work items.
More reading: http://www.clemensreijnen.nl/post/2009/09/03/Agile-
Testing-with-VSTS-2010-and-TMap-Part-01-User-stories.aspx
More reading: http://www.clemensreijnen.nl/post/2009/04/21/Testing-in-
the-Application-Lifecycle-with-Visual-Studio-2010-Test-Edition.aspx
As in scenario 1, the engineers create functionality in Visual
Studio 2010 and the source code is checked in to Team Foundation
Server. The test role is added to the team: testers specify and
execute manual test cases in Microsoft Test Manager.
Same as scenario 1
Same as scenario 1
Tests are executed against the Azure staging environment. Bugs
are filed in TFS. Using work items together with engineers is a
must, starting with bugs, followed by user stories, test cases and
tasks.
Microsoft Test Manager 2010 is for testers what Visual Studio is for
developers. That is to say, where Visual Studio is an IDE – an integrated
development environment, Test Manager is an ITE – an integrated test
environment. This is the interface that a tester will use to create test cases,
organize test plans, track test results, and file bugs when defects are found.
Test Manager is integrated with Team Foundation Server, and is designed to
improve the productivity of testers. While I am not going to do a deep-dive
of all that Test Manager can do, it is important to understand how it
integrates with the Visual Studio Agents to make the most of test case
execution, and ensure that when a tester files a bug, it is actionable with
very little work on the tester’s part.
Test cases are work items with a specific tab where test steps can be
defined. These test steps can only be edited from within MTM.
You can create test cases for your manual tests with both action and
validation test steps by using Visual Studio 2010 Ultimate or Visual Studio
Test Professional. You can add test cases to your test plan using Microsoft
Test Manager.
More information:
http://msdn.microsoft.com/en-us/library/dd380712.aspx
http://msdn.microsoft.com/en-us/library/dd286729.aspx
Where to execute tests?
The challenge of this scenario is that use of the staging environment costs
money, and with some serious testing those costs grow every sprint. By
balancing what is tested where, these costs can be minimized and kept in a
comfortable range.
Executing all the tests in the staging environment isn’t an option; it is too
expensive. There are two other environments in which to execute tests:
the environment the tester uses to specify the tests, and the build
environment. With Azure, the challenge of running an Azure application
outside of Azure comes down to the availability of the compute emulator.
Since the build environment is a server environment, it isn’t the
recommended place to execute manual tests; only automated tests, like
unit tests, are interesting to execute in the build.
Balancing the testing effort over the environments is challenging. What is
tested at developer level, with unit tests, isn’t useful to test again during
functional testing. And what should still be tested in the staging
environment when all functional and system tests have already been
executed in the tester’s environment?
Execute tests in the test environment by using the compute emulator and CSRun.
To execute tests on a tester’s environment without Visual Studio 2010
installed, you need to set up and install several things. The easiest way is to
let the environments run their own compute emulator and version of the
cloud application under test, and make use of Azure storage for data.
You need to install the Azure SDK and have IIS 7.0 available on the test
machine (2). With CSRun.exe testers can launch the cloud application in
their own compute emulator, using the CSX folder and the CSCFG file (1).
The only challenge left is the port number (3) created by the CSRun.exe
command. Microsoft Test Manager’s record-and-playback capability
becomes useless when this port changes over time, and it will change: test
cases would have to be re-recorded and executed. This change of URL
makes the use of shared steps (4) in Microsoft Test Manager a must. You
can re-record shared steps without breaking the record-and-playback of
previously executed test cases. Shared steps also prove their use when you
want to execute a test case in staging and production after you have
recorded it in the emulator.
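The CSRun-based setup described above (CSX folder plus CSCFG file) can be
sketched as a pair of commands; the file names below are illustrative:

```
REM Start the cloud application in the local compute emulator from the
REM build output (CSX directory) and its configuration file:
csrun /run:MyService.csx;ServiceConfiguration.cscfg

REM The emulator reports the assigned endpoint (for example a port such
REM as 81); this port can differ per run, which is why shared steps that
REM open the application should be re-recorded per environment.

REM Clean up all deployments in the emulator afterwards:
csrun /removeAll
```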
Soon on: http://www.youtube.com/user/clemensreijnen
Collect data from the Azure instance while executing a test
A challenge with testing cloud systems is that instances recycle on a regular
basis; it is not certain that the environment on which the tests were
executed is still the same when the engineer tries to reproduce and resolve
the bug. So, when an engineer wants to find the bug in the system, it is not
certain that he has access to the log files of the system, for example IIS
logs, trace logs, etc.
When the cloud application has diagnostics enabled, you can create a
custom diagnostic adapter which collects information, queries the logs for
the test execution time frame, and adds these logs to the test result for the
engineer.
See: http://www.clemensreijnen.nl/post/2011/01/05/MTM-Azure-Data-
Diagnostic-Adapter-a-nice-ALM-for-Cloud-scenario-e280a6.aspx
See demo WAD MTM Adapter:
http://www.youtube.com/watch?v=xKxFtfKh6yo
Microsoft Test Manager capabilities for cloud system testers.
Microsoft Test Manager has some very useful features and capabilities for
testers of cloud systems. Besides the use of work items, which helps the
testers get the same heartbeat as engineers, these capabilities help
manage the testing effort with test plans, configurations, suites and the
bug workflow.
Two features are really useful when testing cloud systems: shared steps
and the diagnostic adapter extensibility. Shared steps make it easy to
handle the different environments: tests are executed first on the compute
emulator, then on staging and finally on the production environment, all
with different URLs. The diagnostic adapter extensibility makes it easy to
collect environment information for bug solving.
Two MTM features don’t work for cloud systems in relation to MTM:
Test Impact Analysis and IntelliTrace. IntelliTrace does work from within
Visual Studio when the deployment is configured to use it, but not from
MTM.
The developer-with-manual-tester scenario has a lot of benefit. The main
benefit is that the system is tested in a well-thought-out manner. That
engineers and testers are connected and have the same heartbeat will
solve some big project management challenges and save a lot of time.
For test execution the biggest challenge is not to test everything on Azure,
but to balance it: only platform verification tests on Azure, and the other
system and functional tests on the compute emulator. This needs some
configuration on the test environments.
Pro:
Easy installation and configuration
Single click deployment from VS2010
Testers connected, same heartbeat as dev
Proven quality
Con:
Easy deployment errors (configuration)
Time consuming (deploy and test)
Not repeatable (annoyed testers)
Testers connected
3: Developer with manual tester and
deployment build
To drive forward when the team grows, or when we need to put some
more effort into the stability of the system development process, we have
to look at the build process on the build server and at the deployment.
Making the deployment of cloud systems repeatable for different
environments makes the whole process more stable, and the team can
deliver functionality in a faster, proven way.
When we work agile we want to deliver functionality in a fast, flexible way;
having an automated process in place which supports this will raise our
quality bar.
Same as scenario 1 for engineers and testers: they specify test
cases and write source code, and test cases are executed on the
compute emulator.
In collaboration with operations, build and deployment scripts are
configured.
During the build, unit tests are run and deployment packages and
configurations are made.
As a final step of the build the cloud system is deployed to the
Azure staging environment.
Tests are executed against the Azure staging environment. Bugs
are filed in TFS. Using work items together with engineers is a
must, starting with bugs, followed by user stories, test cases and
tasks.
Automating deployment
Manually deploying Azure systems is error-prone: changing configuration
files and connection strings can go wrong, resulting in an unstable
deployment, with annoyed testers (the system isn’t ready for testing) and
users (we can’t show them anything).
There are several different ways to deploy a system to Azure; the
PowerShell cmdlets are the easiest to use. The cmdlets can be downloaded
from http://archive.msdn.microsoft.com/azurecmdlets
More reading:
http://msdn.microsoft.com/en-us/library/ff803365.aspx
http://scottdensmore.typepad.com/blog/2010/03/azure-deployment-for-
your-build-server.html
http://blogs.msdn.com/b/tomholl/archive/2011/02/23/using-msbuild-to-
deploy-to-multiple-windows-azure-environments.aspx
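A sketch of what a scripted deployment with those cmdlets can look like.
The snap-in, cmdlet and parameter names below are taken from the
archived Windows Azure Service Management CmdLets and should be
verified against the version you download; the subscription ID, certificate
thumbprint, service, storage and file names are all placeholders:

```powershell
# Load the Windows Azure Service Management cmdlets (snap-in name per the archive).
Add-PSSnapin AzureManagementToolsSnapIn

$sub  = "<subscription-id>"
$cert = Get-Item "cert:\CurrentUser\My\<management-certificate-thumbprint>"

# Push the package and configuration to the staging slot, then start it.
New-Deployment -serviceName "myservice" -subscriptionId $sub -certificate $cert `
    -slot staging -package .\MyService.cspkg -configuration .\ServiceConfiguration.cscfg `
    -label "Sprint review build" -storageServiceName "mystorage" |
    Get-OperationStatus -WaitToComplete

Get-Deployment staging -serviceName "myservice" -subscriptionId $sub -certificate $cert |
    Set-DeploymentStatus running |
    Get-OperationStatus -WaitToComplete
```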
For the next demo we use the cmdlets, the build targets and MSBuild to
create the packages and deployments, as described in
http://blogs.msdn.com/b/tomholl/archive/2011/02/23/using-msbuild-to-
deploy-to-multiple-windows-azure-environments.aspx.
Note: you don’t want to configure automatic deployment on a continuous
integration build; that definitely won’t work. Release builds or sprint
review builds, which aren’t run that often, can do the deployment.
Soon on: http://www.youtube.com/user/clemensreijnen
Having automated deployment in place is a boost for the quality of the
system delivery process. Although it can be challenging to get the initial
configuration right, and a lot of different technologies must be used, the
benefit the team gets from it is high.
Pro:
Easy installation and configuration
No click deployment from build
Repeatable ‘proven’ deployments*
Testers connected, same heartbeat as dev
Proven quality
Con:
Time consuming testing
Application can contain ‘annoying’ bugs
Build workflow knowledge necessary
Powershell, ccproj tweaks, target files,
certificates
4: Developer with automated
regression tests, manual tests and
deployment build
With automatic deployment in place we can start to configure automatic
testing.
You can run automatic UI test cases from within Visual Studio with the
compute emulator on the developer environment, so automated testing
could be in place earlier. But running automatic UI tests on a developer’s
environment, executed by the developer and with the results collected in
Visual Studio, is more about bug reproduction and solving than about
testing, where you say something about the quality of the system. You
want the tester to execute them on a test environment, with the results in
the test plans.
Automated testing will speed up system development; testers and
developers have the same heartbeat. But the further you get in a project,
the more regression test cases need to be executed. This execution of
regression tests will take more and more time, bringing friction to the
same-heartbeat mindset.
Engineers write source code and testers specify and execute test
cases on the compute emulator.
In collaboration with engineers, manual test cases are automated
with CodedUI and associated with test cases. Test case
automations are ‘tested’ (dry run) on the developer’s
environment.
During the build, unit tests are run and deployment packages and
configurations are made.
As a final step of the build, the cloud system is deployed to the
Azure staging environment.
Tests are executed against the Azure staging environment. Bugs
are filed in TFS. Using work items together with engineers is a
must, starting with bugs, followed by user stories, test cases and
tasks.
Associated test case automations are executed from MTM on the
compute emulator.
There are different technologies available to create tests within Visual
Studio. Two of them are suitable for the automation of manual tests: web
tests and CodedUI tests. The main difference between them is that
CodedUI really interacts with the UI; it uses the IE DOM for automation.
Web tests use HTTP GET and POST to automate the tests.
The CodedUI functionality is strongly connected with Microsoft Test
Manager’s action recordings (created while executing a test case in MTM).
CodedUI tests are better suited to functional tests, and web tests are
better suited to performance tests and load tests.
Execute tests as soon as possible in the lifecycle.
We can divide the different test technologies into development tests, load
and performance tests, automated UI tests and manual tests, and create
test-specific subcategories for the automated and manual tests: functional
testing, integration testing, acceptance testing and platform testing (Azure-
specific; it answers the questions: will it run in the cloud, and are the
deployment and configuration correct?).
Development tests are executed during development and in the CI build.
For load and performance tests we need a full-blown environment. This is
easy with Azure, but the complete feature we need to test must be
available in the cloud, so these kinds of tests, when part of the Definition of
Done, will probably be executed at the end of a sprint.
Automated UI tests are really valuable; automate them as soon as possible.
These tests can cover just one feature (functional testing) and can be
created and executed in the compute emulator as soon as the feature is
implemented. Automated integration tests are harder to execute within a
sprint, because they often cover more scenarios; when integration testing
is part of the Definition of Done, these should be moved to the undone list
(see: Agile Test Practices with Microsoft Visual Studio 2010 [TMap and
Scrum]). Platform tests focus on specific things, such as whether the
deployment and configuration are correct and whether the system runs
correctly in the cloud. Often these are very common tests and can be
executed within a sprint after the build; automate them as soon as
possible.
Acceptance testing is often done outside the team by the business users;
keeping those tests connected with the team is very useful for bug solving,
test coverage and automation.
CodedUI tests generated from Microsoft Test Manager’s action recordings
have all the steps as methods in the CodedUI test.
The test must be made suitable for execution in different environments
(emulator, staging, production) without having to change the code or the
test data parameter settings constantly.
Another way to customize the behavior of the CodedUI test is by using the
UIMap editor, which can be found in Feature Pack 2. It helps you tune the
search conditions for controls, but, more importantly for Azure
applications, it helps you extract methods like ‘open application’ out of the
XML and code generation into a partial class, where you can edit the
behavior for all test methods that use this method.
Soon on: http://www.youtube.com/user/clemensreijnen
There are multiple ways to execute your test automation effort: you can
run tests from and on Microsoft Test Manager, from and on Visual Studio,
during the build on the build server, or during the build on test agents
configured with test controllers, among other flavors. So, where should
you execute your automated tests to test an Azure application, and how
should you configure your test infrastructure?
One thing to keep in mind when making this decision: testing on Azure
costs money. You can configure VS2010, MTM or the build server to
execute the automated tests against an Azure deployment, but it is
cheaper to run most of them against an emulator deployment. For sure
you need to balance this decision; there are tests which have to run
against the Azure deployment, the so-called platform verification tests.
They verify that the app runs correctly in the cloud and that it is configured
correctly in the cloud. All the other tests (functional, system, etc.) can be
executed on an emulator deployment.
Both test executions (emulator deployment and Azure staging / production
deployment) can be configured to run from VS2010, MTM or the build
server, with or without the use of test agents.
Each execution platform has its pros and cons. For example, execution
from VS2010 is more a developer test, which verifies a bug fix or repro:
test results aren’t collected in TFS or Test Manager, and there is no
connection with linked user stories. It’s an easy way to dry-run your tests,
especially because the emulators are already in place and loaded by
VS2010; no additional actions need to be taken.
Execution on the build server is a bit strange: executing manual tests on a
server. It also gets challenging when you want to load the emulator during
the build to run tests against. And there is no collection of test results in
Test Manager (no reporting on test points).
http://www.clemensreijnen.nl/post/2011/02/21/Running-Automated-Tests-
on-Physical-Environments-the-different-flavorse280a6.aspx
The preferred way of executing automated tests is from within Microsoft
Test Manager, with the automation associated with manual test cases.
Another benefit is that you can add scripts and deployment actions which
run before and after an automated test.
Flavor E: Execution from MTM during Build…
Purpose: part of the BVT. Preferred configuration over flavor C. Flavors D
and E can be configured together.
Triggered by: Build
Information:
Configure the test controller (register it with a project collection).
Configure test agents on clients (interactive mode; can be the same
machine as MTM).
Configure Lab Center in MTM to use the test controller and create a test
‘agent’ environment.
Associate the CodedUI test with a Test Case work item from VS.
Create a build task to run a TCM or MSTEST command for the test plan.
http://blogs.microsoft.co.il/blogs/shair/archive/2010/10/30/how-to-run-
coded-ui-tests-from-command-line.aspx
How to: Run Test Cases with Automation from the Command Line Using Tcm
http://msdn.microsoft.com/en-us/library/dd465192.aspx
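The TCM step in the build can be sketched as a single command. The plan,
suite and configuration IDs, the collection URL and the project name below
are placeholders; see the MSDN link above for the full parameter list:

```
REM Create and start a run for an automated test suite from the command line:
tcm run /create /title:"BVT after deployment" /planid:5 /suiteid:8 /configid:2 /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:MyProject
```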
Pro
Test run distributed over test environments.
Tests can be configured to run on different configured environments
Test Result in MTM and TFS
Triggered by build
Test Settings from MTM
Con
Hard to configure
maintenance of TCM commands in build
Flavor D: Execution from Microsoft Test Manager
Purpose: Part of Regression Tests (other type of test than BVT).
Triggered by: MTM user, right mouse click on test case, run
Information:
Configure the test controller (register it with a project collection).
Configure test agents on clients (interactive mode).
Configure Lab Center in MTM to use the test controller and create a test
‘agent’ physical environment.
http://msdn.microsoft.com/en-us/library/ee390842.aspx
http://msdn.microsoft.com/en-us/library/dd293551.aspx
Associate CodedUI test with WI Test Case from VS.
http://www.richard-banks.org/2010/11/how-to-use-codedui-tests-watin-
and-mtm.html
Pro
Test run distributed over test environments
Test Result in MTM
Test Settings from MTM
Full control by the tester
Con
Test Controller needs to be configured with a Project Collection (one
controller per collection)
Manually triggered by Tester (or pro)
Hard to configure
Hard to see which test case is automated
Soon on: http://www.youtube.com/user/clemensreijnen
Pro:
No click deployment from build
Repeatable ‘proven’ deployments*
Testers connected, same heartbeat as dev
Proven quality
Automated BVT on different Environments
Comfortable Acc Testing
Done Done
Con:
Build workflow knowledge necessary
Powershell, ccproj tweaks, target files,
Certificates
Test Infrastructure knowledge necessary
A balanced approach to test automation needed
5: Developer, Automated Tests, Build,
Deploy, Acceptance test and
Operations
The final scenario also adds acceptance testing and operations to the
process.
Acceptance testing is often done by the business users and is often very
disconnected from the team. In the previous scenarios there was a lot of
focus on setting up environments so testers won’t get annoyed that the
test environment isn’t ready. This is even more important here: annoyed
business users who can’t test the system aren’t good for adoption. One big
benefit of Azure is that all environments are the same, with the same guest
OS, so deployment packages and configuration files that work in one
environment will also work in other environments.
Operations needs to provide the business with valuable information about
how the system is used, so the business can make decisions about the
project portfolio. For cloud applications the monetization of the usage is
also interesting.
Team development.
The team implements the requested features, specifies test cases
and determines operational SLA and usage parameters.
On local environments (compute and storage emulators):
execution of unit tests, dry runs of CodedUI tests (customize the
code to handle different environments), association of CodedUI
tests with an MTLM test case, and execution of the automated
test cases from MTLM (making use of CSRun).
Move as soon as possible from emulator storage to Azure storage,
due to environment differences and to give the team the same
test and development data storage. (green line)
Engineering and design should also focus on tracing and
diagnostics; this is important during testing and operation of the
cloud application.
Build, Unit test, Deploy, UItest flow, manual test
During build, not the CI build but for example a Sprint Review
Build deploy the application automatic to the staging
environment. First compile, unit test and the creation of the
deployment package and configuration files for different
environments.
After deployment, automated platform/staging tests are run.
These are CodedUI tests which verify the installation and its
stability on the Azure environment. The test infrastructure can be
configured to run the tests from different environments, either to
distribute the tests and save time or to test the Azure application
with different client configurations. The tests are executed during
the build, with the results collected in MTLM by using TCM.exe:
http://msdn.microsoft.com/en-us/library/dd465192.aspx
So, 'Build Verification Tests' are executed during the build and
'Environment Verification Tests' after deployment; when these
succeed you can VIP-swap the cloud application from staging to
production for additional manual testing or for the sprint review
(you can use the Management API or CSManage.exe for the VIP swap).
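As an illustration, the swap via the Management API amounts to a single request against the service-management endpoint. The sketch below only constructs the URL and body of the classic Swap Deployment operation as an assumption about that API's shape; it sends nothing and omits the required client-certificate authentication and version header, and all names are placeholders.

```python
# Sketch: construct (but do not send) the Service Management API request
# for a VIP swap between staging and production. URL and body shape are
# assumed from the classic Swap Deployment operation; names are fake.

def build_vip_swap_request(subscription_id, service_name,
                           production_deployment, staging_deployment):
    """Return (url, xml_body) for a Swap Deployment request."""
    url = ("https://management.core.windows.net/%s/services/"
           "hostedservices/%s" % (subscription_id, service_name))
    body = ('<Swap xmlns="http://schemas.microsoft.com/windowsazure">'
            '<Production>%s</Production>'
            '<SourceDeployment>%s</SourceDeployment>'
            '</Swap>') % (production_deployment, staging_deployment)
    return url, body

url, body = build_vip_swap_request("my-subscription-id", "aexpense",
                                   "prod-deployment", "staging-deployment")
print(url)
```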
Release drop.
The package created during the build is reused in another Azure
subscription; for security, the keys in this environment aren't
used by testers or developers.
“Adatum uses the same package file to deploy to the test and
production environments, but they do modify the configuration
file. For the aExpense application, the key difference between the
contents of the test and production configuration files is the
storage connection strings. This information is unique to each
Windows Azure subscription and uses randomly generated access
keys. Only the two key people in the operations department have
access to the storage access keys for the production environment,
which makes it impossible for anyone else to use production
storage during testing accidentally.”
(from: Moving Applications to the Cloud on the Microsoft Azure™
Platform http://oreilly.com/catalog/0790145308795/ )
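The quoted practice of reusing one package while swapping only the configuration file can be automated. Below is a minimal Python sketch that rewrites a storage connection string in a ServiceConfiguration.cscfg; the setting name "DataConnectionString", the sample file, and the account names are assumptions for illustration.

```python
# Sketch: replace the storage connection string in a .cscfg so the same
# .cspkg can target a different subscription. Sample file is invented.
import xml.etree.ElementTree as ET

NS = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"

SAMPLE_CSCFG = """<ServiceConfiguration serviceName="aExpense" xmlns="%s">
  <Role name="WebRole">
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
               value="UseDevelopmentStorage=true"/>
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>""" % NS

def set_connection_string(cscfg_xml, new_value):
    """Return the cscfg with every DataConnectionString value replaced."""
    root = ET.fromstring(cscfg_xml)
    for setting in root.iter("{%s}Setting" % NS):
        if setting.get("name") == "DataConnectionString":
            setting.set("value", new_value)
    return ET.tostring(root, encoding="unicode")

patched = set_connection_string(
    SAMPLE_CSCFG,
    "DefaultEndpointsProtocol=https;AccountName=test;AccountKey=...")
print("AccountName=test" in patched)
```

Only the operations people holding the production access keys would run this with the production connection string, matching the access-control point in the quote.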
The business users execute their acceptance tests against the
staging environment of the production subscription. Using MTLM
they can execute manual tests, automated tests, and exploratory
tests while still being connected to the TFS repository, which
keeps the capability to provide very rich bug reports to the
team. When acceptance testing is done, the Azure application is
manually swapped to production.
Operations
(see the PDF: Monitoring and Diagnostic Guidance for Windows®
Azure™–hosted Applications)
The goal of application monitoring is to operationally answer a
simple question: is the application running efficiently within its
defined SLA parameters and without errors? If the answer to this
question is no, then the Operations team needs to be made
aware of this condition as soon as possible. Effective placement of
monitoring on critical application and system breakpoints will
help manage the hosted solutions. This document is intended for
development and operations teams that need to monitor
applications hosted on the Windows® Azure™ Services Platform,
enabling incident, problem, and knowledge management.
The same sources are used for the MTLM Azure Diagnostic Data
Collectors.
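To make the SLA question above concrete, here is a toy check over response-time samples such as the diagnostic collectors might gather; the 95th-percentile metric and the 2000 ms threshold are invented examples, not part of the cited guidance.

```python
# Toy sketch of the monitoring question: given response-time samples
# (in ms), is the application within its SLA and free of errors?
# Threshold and percentile are invented illustration values.

def within_sla(response_times_ms, p95_threshold_ms=2000, error_count=0):
    """True if the ~95th-percentile latency is under threshold and
    no errors were recorded; False otherwise."""
    if error_count > 0 or not response_times_ms:
        return False
    ordered = sorted(response_times_ms)
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[idx] <= p95_threshold_ms

print(within_sla([120, 250, 400, 1800], error_count=0))  # -> True
```

When such a check fails, the result would be surfaced to the Operations team as quickly as possible, per the guidance above.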
Two more interesting scenarios.
These don't specifically belong to the ALM for Azure story, but they give
some food for thought.
TFS on Azure, announced at PDC 2010:
http://blogs.msdn.com/b/bharry/archive/2010/10/28/tfs-on-windows-azure-at-the-pdc.aspx
Other ALM infrastructure on Azure … the opportunities are endless.
Thanks for reading; all comments are welcome. This is work in progress…