Run the Initial Simulation and Get Results Choosing a Simulation Duration and Selecting Statistics obtains the measurement...
6 To model the effect of other users and traffic sources, be sure to create appropriate load on the various components in ...
Ping Approach Instead of physically moving the client to different locations, the ping command can determine the round-trip ...
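As a sketch of this approach, the round-trip times collected from each location can be summarized in a few lines of Python; the location names and sample values below are purely illustrative, not measurements from any real deployment:

```python
import statistics

def summarize_rtt(samples_ms):
    """Summarize round-trip-time samples (in milliseconds) collected with ping."""
    return {
        "min": min(samples_ms),
        "avg": round(statistics.mean(samples_ms), 1),
        "max": max(samples_ms),
        # Jitter approximated here as the standard deviation of the samples.
        "jitter": round(statistics.pstdev(samples_ms), 1),
    }

# Hypothetical samples gathered by pinging the application server
# from three client locations (values are illustrative).
locations = {
    "head_office": [1.2, 1.4, 1.1, 1.3],
    "branch_a":    [18.5, 19.2, 18.9, 25.1],
    "branch_b":    [42.0, 41.7, 55.3, 43.1],
}

for name, samples in locations.items():
    print(name, summarize_rtt(samples))
```

Comparing the per-location summaries highlights which client sites would see degraded response times without anyone having to relocate a test machine.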
Stage 3   :   Implementation Team Structure Collaboration Application Group Business Application Group
<ul><li>Maintain the network connectivity </li></ul><ul><li>Record the application’s network resource consumption </li></ul...
Network and application performance are measured Failure events are recorded Contingency plans are executed End-users a...
To identify the major requirements of the application to be tested, both hardware and software To identify the needs for h...
Business / Collaboration Application Team Network Interactive Service Team Stage 4   :   Testing Environment Team Structure
<ul><li>Identify the application’s network traffic requirement </li></ul><ul><li>Identify the maximum resource requirement...
All software, hardware and network performance are identified Application integrity and connectivity are measured Connecti...
To determine the impact of the application on the live network infrastructure To verify the end result of the applicatio...
Stage 5  :   Analyzing Baseline Scenario Assessing Application Impact
Stage 5  :   Analyzing Baseline Scenario Methodology 1 The process of capturing application data that accurately reflects ...
<ul><li>Perform Detailed Application Analysis </li></ul><ul><li>Detailed analysis of application performance can be obtained b...
Provide Diagnoses and Statistics The diagnosis and statistics include the delays on each tier, the packet sizes, protocol ...
Provide Diagnoses and Statistics (continued…) Propagation delay bottleneck  is the time taken by the packets to propagate ...
Provide Diagnoses and Statistics (continued…) TCP windowing bottleneck  is the bandwidth-delay product used by the TCP con...
Recommendations The implications of each diagnosis and our recommendations for correcting the problem are described below:...
Recommendations (continued…) Propagation delay  – Move the affected tiers closer together. Use intermediate devices that a...
Recommendations (continued…) TCP windowing  – Use larger TCP send and receive windows. These windows should be greater tha...
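The window size this recommendation calls for is the bandwidth-delay product, which can be computed directly; a minimal sketch, with an illustrative 10 Mbit/s, 50 ms round-trip link:

```python
def bandwidth_delay_product(bandwidth_bps, rtt_s):
    """Bandwidth-delay product in bytes: the amount of data 'in flight'
    that the TCP send/receive windows must cover to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8

# Illustrative link: 10 Mbit/s WAN with a 50 ms round-trip time.
bdp = bandwidth_delay_product(10_000_000, 0.050)
print(f"BDP = {bdp:.0f} bytes")  # windows smaller than this throttle throughput
```

Any send or receive window smaller than this value caps throughput below the link rate, which is exactly the TCP windowing bottleneck diagnosed above.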
Stage 5  :   Analyzing Baseline Scenario Team Structure Collaboration Application Group Business Application Group
<ul><li>Identify the application’s network traffic requirement </li></ul><ul><li>Identify the maximum resource requirement...
Network resource allocations are identified Application integrity, data connectivity and reporting performance are measu...
To identify the deployment process of the application to the live servers To identify the actual impact of the application...
Stage 6  :   Go Live Scenario Team Structure Collaboration Application Group Business Application Group
<ul><li>Identify the application’s network traffic requirement </li></ul><ul><li>Identify the maximum resource requirement...
Recorded results of application and network performance upon deployment Analysis of the hardware performance results Identif...
To finalize the end result and present the output to Top Management To document the project’s related issues including softwar...
Stage 7  :   Project Closing Team Structure Collaboration Application Group Business Application Group
<ul><li>Present the end result of the network’s performance after deployment </li></ul><ul><li>Document the existing net...
Documentation of the project must be presented Project review Project turnover from the vendor to CCIS Stage 7  :   Project Clos...
APPLICATION PERFORMANCE MANAGEMENT/QUALITY MANAGEMENT SYSTEM (APM/QMS) in PSHRC Eng. Mohammad Al-Nofaie Network Performance Engineer Center of Computer & Info. Systems (CCIS), PSHRC
APPLICATION PERFORMANCE MANAGEMENT/QUALITY MANAGEMENT SYSTEM (APM/QMS) in PSHRC <ul><li>It is a continuous improvement of IT processes. Aside from Capacity Planning </li></ul><ul><li>and Infrastructure Troubleshooting, it also focuses on Business Application </li></ul><ul><li>Performance Engineering. With this solution, the CCIS Department can get the </li></ul><ul><li>most relevant data to reduce the Mean Time to Recovery (MTTR) of application </li></ul><ul><li>and network problems. </li></ul><ul><li>Database Storage </li></ul><ul><li>Improve end-user response time by proactively monitoring, analyzing, and tuning database and storage applications. </li></ul><ul><li>Web and Middleware </li></ul><ul><li>Improve end-user response time by proactively monitoring, analyzing, and tuning web and middleware applications. </li></ul><ul><li>Application Reliability/(HIS) </li></ul><ul><li>Manage the health of enterprise applications and take a proactive approach to application availability by monitoring critical business transactions and service levels. </li></ul><ul><li>Network </li></ul><ul><li>Improve end-user response time by proactively monitoring and analyzing network performance, removing bottlenecks, and reducing downtime. </li></ul>
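The payoff of a lower MTTR can be quantified with the standard availability relation, Availability = MTBF / (MTBF + MTTR); a small sketch with illustrative figures (the MTBF and MTTR values are assumptions for the example, not PSHRC measurements):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to recovery (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative: halving MTTR from 4 h to 2 h at an MTBF of 720 h (30 days).
print(f"{availability(720, 4):.4%}")
print(f"{availability(720, 2):.4%}")
```

Every hour shaved off recovery translates directly into availability, which is why the tooling below concentrates on finding root causes quickly.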
VISION To improve the quality of network performance through advanced communication services, giving authorized users equal access to state-of-the-art technology MISSION To provide authorized users with the highest-quality and most technologically advanced end-user services
<ul><li>Stages of Project Development </li></ul><ul><li>Stage 1 : Initiating and Preparation </li></ul><ul><li>Stage 2 : Planning </li></ul><ul><li>Stage 3 : Implementation </li></ul><ul><li>Stage 4 : Testing Environment </li></ul><ul><li>Stage 5 : Analyzing Baseline Scenario </li></ul><ul><li>Stage 6 : Go Live Scenario </li></ul><ul><li>Stage 7 : Project Closing </li></ul>
Stage 1 : Initiating and Preparation Description <ul><li>In application performance management, three common dynamics play within the organization: </li></ul><ul><ul><li>You are deploying a new application (such as SAP or an ECRM application) and wish to understand its impact on the network and the changes required on the network to support it. </li></ul></ul><ul><ul><li>You are deploying real-time convergent applications (such as voice and video over IP) and need to manage their performance. </li></ul></ul><ul><ul><li>The applications currently deployed on your network are performing poorly. </li></ul></ul><ul><li>Each of these dynamics can create a situation in which robust application performance is the key criterion for the overall business. Today’s applications are what drive today’s businesses. The ability to manage the IT infrastructure to deliver quality application performance, introduce applications, and resolve application performance problems is a focus that is most often overlooked. </li></ul>
Stage 1 : Initiating and Preparation Stakeholders <ul><li>Increase Effectiveness of CCIS Department Operations Quickly identify the ‘root cause’ of poor performance with a unique view of the interactions between applications and their deployment environments. </li></ul><ul><ul><li>Avoid Costly Re-Configuration and Re-Programming after Deployment Validate changes in a virtual environment before spending time and money in the wrong places. </li></ul></ul><ul><ul><li>Reduce Risk of Application Deployment Failures Bridge the gap between developers and the teams that manage network and server performance, increasing uptime and reducing ‘finger pointing’. </li></ul></ul><ul><ul><li>Ensure ‘End-user’ Satisfaction with Optimized Application Performance Detailed analysis enables service levels to be met and realistic expectations to be set. </li></ul></ul>Return on Investment of this Application Performance Management Project 1 2 3 4
Stage 1 : Initiating and Preparation Application Architecture The goal of successful application architecture is to explore the entire business and define an application and infrastructure framework that has the potential of delivering workable solutions for the foreseeable future. The key is to identify the business aspects that are core and the others that might change significantly. This frames the risk when looking at the specific areas to support. With a solid business perspective, current technologies and future science can be assessed. Although new technologies might stimulate new business, technologies are the tools, not the goal - business is the key. The results should support business growth or shrinkage, and replacement of application and technology components over time. Change is a constant - the architecture's aim is not just to withstand it but also to enable it. The exact structure is not important but the focus must be correct and the framework must be appropriately flexible to evolve. The enterprise architecture will also provide the framing and guidance for the next levels of architecture and design.
Stage 1 : Initiating and Preparation Application Multi-tier Multi-tier applications enable enterprises to share information with, and permit collaboration among, employees, customers, and business partners. A typical multi-tier application has three tiers: a front end that performs authentication and serves as an interface to the user, a middle tier that handles authorization and business logic, and a back end that acts as a store for information.
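The three tiers described above can be sketched in miniature; all function names, the credential scheme, and the stored data below are hypothetical stand-ins, not part of any real deployment:

```python
# Minimal three-tier sketch; names, credentials, and data are illustrative.
BACKEND_STORE = {}  # back end: acts as the store for information

def backend_save(key, value):
    BACKEND_STORE[key] = value
    return True

def middle_tier(user, action, key, value=None):
    # Middle tier: authorization and business logic.
    if action == "write" and user != "admin":
        raise PermissionError("user may not write")
    return backend_save(key, value) if action == "write" else BACKEND_STORE.get(key)

def front_end(credentials, action, key, value=None):
    # Front end: authentication and the interface to the user.
    if credentials != ("admin", "secret"):
        raise PermissionError("authentication failed")
    return middle_tier(credentials[0], action, key, value)

front_end(("admin", "secret"), "write", "report", "Q1 figures")
print(front_end(("admin", "secret"), "read", "report"))
```

Note the separation of concerns: the front end never touches the store directly, which is exactly what makes each tier's network traffic measurable on its own in the later stages.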
Stage 1 : Initiating and Preparation Service Level Agreement An SLA sets the expectations between the consumer and provider. It helps define the relationship between the two parties. It is the cornerstone of how the service provider sets and maintains commitments to the service consumer. A good SLA addresses five key aspects: In the definition of an SLA, realistic and measurable commitments are important. Performing as promised is important, but swift and well communicated resolution of issues is even more important. The challenge for a new service and its associated SLA is that there is a direct relationship between the architecture and what the maximum levels of availability are. Thus, an SLA cannot be created in a vacuum. An SLA must be defined with the infrastructure in mind. An exponential relationship exists between the levels of availability and the related cost. Some customers need higher levels of availability and are willing to pay more. Therefore, having different SLAs with different associated costs is a common approach. <ul><ul><ul><li>What the provider is promising. </li></ul></ul></ul><ul><ul><ul><li>How the provider will deliver on those promises. </li></ul></ul></ul><ul><ul><ul><li>Who will measure delivery, and how. </li></ul></ul></ul><ul><ul><ul><li>What happens if the provider fails to deliver as promised? </li></ul></ul></ul><ul><ul><ul><li>How the SLA will change over time. </li></ul></ul></ul>
Stage 1 : Initiating and Preparation Service Level Objective Service Level Objectives (SLOs) are a key element of a Service Level Agreement between a Service Provider and a customer. SLOs are agreed as a means of measuring the performance of the Service Provider and are outlined as a way of avoiding disputes between the two parties based on misunderstanding. The SLO may be composed of one or more quality-of-service measurements that are combined to produce the SLO achievement value. As an example, an availability SLO may depend on multiple components, each of which may have a QoS availability measurement. The combination of QoS measures into an SLO achievement value will depend on the nature and architecture of the service. An SLO must be: Attainable, Measurable, Understandable, Meaningful, Controllable, Affordable, Mutually acceptable Service Level Commitment Between 0 and 0.2% application and network errors; response procedures for system failures within 5 minutes of failure notification. Have a Disaster Recovery Plan (DRP), automated monitoring of server availability, and daily back-ups of critical data. Confidentiality and Security of Data.
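For components that are all required (in series), the per-component QoS availability measurements combine by multiplication into the SLO achievement value; a minimal sketch with illustrative figures (the component availabilities are assumptions for the example):

```python
import math

def composite_availability(component_availabilities):
    """Availability of a service whose components are all required (in series):
    the product of the individual QoS availability measurements."""
    return math.prod(component_availabilities)

# Illustrative measurements: network, application server, and database.
a = composite_availability([0.999, 0.998, 0.995])
print(f"SLO achievement value: {a:.4f}")
```

As the slide notes, other architectures combine differently (redundant components raise, rather than lower, the composite figure), so the combination rule must match the service's actual structure.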
Stage 1 : Initiating and Preparation External Issues Affecting Performance
Cisco Service-Oriented Network Architecture (SONA) Framework Stage 1 : Initiating and Preparation Framework
Cisco Service-Oriented Network Architecture (SONA) Framework Application Layer This layer contains the business applications and collaborative applications that use interactive services to operate more efficiently or be deployed more quickly and with lower integration costs. Stage 1 : Initiating and Preparation Framework
This layer is a full architecture of several network technologies working together to create functionality that can be used by multiple applications across the network. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
Security Services Ensures that all aspects of the network work together to secure it pervasively from the edge to the core, looking at multiple aspects from passive attacks like viruses to active attacks, and at segmentation of data types. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
Mobility Services Allows users to access network resources regardless of their physical location, but includes more than simple wireless devices. It is also the interaction through the network that allows for seamless Layer 3 mobility and rapid re-association and forwarding of voice and video content. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
Storage Services Provides distributed and virtual storage across the infrastructure, enabling additional services such as backup and translational functionality usually requiring additional media servers that need to be separately maintained. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
Voice and Collaboration Services Delivers the foundation by which voice and video streaming can be carried across the network with a high degree of quality while interacting with different data systems all working together as a full service. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
Compute Services Connects and virtualizes compute resources based on the application, helping to provide cost-effective business continuity as well as a dislocation of specific applications to specific servers. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
Identity Services Maps resources and policies to the user and device, both for use by security services and to create preferences for users for collaborative services. Identity services are also utilized by multiple applications to provide single sign-on capabilities. Cisco Service-Oriented Network Architecture (SONA) Framework Interactive Services Layer Stage 1 : Initiating and Preparation Framework
This layer is where all the IT resources are interconnected across a converged network foundation designed as a complete architecture to interoperate with all advanced services, across all places in the network, without requiring re-architecture or forklift upgrades. Cisco Service-Oriented Network Architecture (SONA) Framework Networked Infrastructure Layer Stage 1 : Initiating and Preparation Framework
The network group is responsible for analyzing and generating reports about the current network infrastructure After the analysis and report generation, both teams will discuss every aspect of the problems and provide solutions Business / Collaboration Application Team Network Interactive Service Team Stage 1 : Initiating and Preparation Team Structure
<ul><li>Submit all the current resource status of the network (bandwidths, storage, etc.) </li></ul><ul><li>Identify the currently installed applications that generate errors </li></ul><ul><li>List all applications that exhibit high latency, packet loss and jitter </li></ul><ul><li>Submit future projects that will be deployed and implemented </li></ul><ul><li>Submit all applications currently deployed and running inside the network </li></ul><ul><li>Analyze the optimized applications vs. the applications that perform poorly </li></ul>Stage 1 : Initiating and Preparation Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
Existing and future applications must be declared Current network status is identified Remaining network resources must be elaborated and proposed network upgrade solutions are formulated These objectives should be met in order to proceed to the next stage Stage 1 : Initiating and Preparation Deliverables
To identify and understand the current network environment and the possible impact during application deployment To understand and provide recommendations with regard to cost justification, project initiation and execution Stage 2 : Planning Objectives
Business / Collaboration Application Team Network Interactive Service Team Stage 2 : Planning Team Structure
<ul><li>Measure the load that the application might use versus the resource allocation that currently exists </li></ul><ul><li>List existing resources and analyze where the application is suitable to deploy for testing </li></ul><ul><li>Prepare a list of available hardware resources </li></ul><ul><li>Identify the maximum resource usage of the application on the local computer </li></ul><ul><li>Identify the application’s possible network usage, indicating its in-process resource consumption (front end communicating with the back end) </li></ul><ul><li>Identify the possible risks during a system or application crash </li></ul><ul><li>Identify the type of hardware that is suitable for testing and implementation </li></ul>Stage 2 : Planning Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
The current load of the network and all resource consumption of the application must be declared All applications currently running on the network that might be affected by the application deployment must be identified A contingency plan must be prepared for system and application failures Cost analysis for system upgrades is identified These objectives should be met in order to proceed to the next stage Stage 2 : Planning Deliverables
To identify the major requirements of the application to be tested, both hardware and software To identify the need for hardware changes To identify end users’ ability to use the application To ensure the integrity of the application during runtime Stage 4 : Testing Environment Objectives
To identify the application’s performance To identify the network resource consumption To identify the integrity of the contingency plan during software and hardware failure events To provide information from the tools that measure the application and network performance Stage 3 : Implementation Objectives
Stage 3 : Implementation 4 Stages of Implementation IT Guru IT Guru does the following: 1) Diagnose – Visualize the network, traffic flows and application transactions. Quickly determine the root cause of performance problems (server, network or client). Audit compliance with network security policies. 2) Validate Changes Prior to Implementation – Test network configurations before implementation, right-size capacity upgrades, analyze system upgrades, consolidations and relocations. 3) Plan Ahead for Growth and High Availability – Establish budgets with quantitative justification, plan upgrades for growth or new facilities, optimize the deployment of new technologies and mission-critical applications. We will evaluate three products under IT Guru: CISCO Works, HP Manager and Sniffer Pro. 1 Provides a Virtual Network Environment that models the behavior of your entire network, including its routers, switches, protocols, systems and individual applications. By working in the Virtual Network Environment, IT managers, network and system planners and operations staff are empowered to more effectively diagnose difficult problems, validate changes before they are implemented, and plan for future scenarios including growth and failure.
Stage 3 : Implementation 1) Capture (Application Traces) – Capture a ‘fingerprint’ of the application transaction as it traverses the infrastructure. 2) Visualize (Transactions) – Visualize application transactions at both the application level and the network packet level. Understand the interactions and dependencies among clients, the network, application servers and database servers. 3) Diagnose (Performance Problems) – Identify and diagnose performance bottlenecks. Decode captured application transactions that cause unacceptable processing delays. 4) Validate (Solutions) – Quickly evaluate the impact of changes in growth, bandwidth, protocol settings, application behavior, server speed and network congestion on end-to-end response times. 2 Performance of networked applications depends on complex interactions among applications, servers and networks. IT organizations need a detailed, quantitative understanding of these interactions to efficiently and cost-effectively troubleshoot and deploy applications. ACE directly addresses these challenges. 4 Stages of Implementation ACE (Application Characterization Environment)
Stage 3 : Implementation 3 Provides real-time performance analysis of complex applications by monitoring system and application metrics within each server across all tiers. Panorama automatically spots abnormal vs. normal behavior with advanced deviation tracking and correlation technologies. It automates the otherwise tedious analysis of thousands of application and system metrics across multiple tiers to identify sources of performance problems or potential choke-points. 4 Stages of Implementation Panorama (Real-time Application Analytics)
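Panorama's deviation tracking is proprietary, but the underlying idea can be illustrated with a crude stand-in that flags samples falling outside a few standard deviations of a baseline; the metric values below are illustrative, not real monitoring data:

```python
import statistics

def is_abnormal(sample, baseline, n_sigma=3.0):
    """Flag a sample that deviates from the baseline mean by more than
    n_sigma standard deviations -- a crude stand-in for the deviation
    tracking a product like Panorama performs."""
    mean = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    return abs(sample - mean) > n_sigma * sigma

# Illustrative baseline of a response-time metric (ms) under normal load.
baseline = [12, 11, 13, 12, 11, 12, 12, 13, 11, 12]
print(is_abnormal(12.5, baseline))  # within the normal band
print(is_abnormal(95, baseline))    # an abnormal spike
```

Real products add correlation across thousands of metrics and tiers; the point here is only that "abnormal vs. normal" reduces to comparing each sample against a learned baseline.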
Stage 3 : Implementation 4 An application service level monitoring solution that provides visibility into interdependent application and infrastructure components, and quantifies SLA compliance. SLA Commander™ employs synthetic transactions to monitor the response time and availability of web applications as seen by end-users, proactively alerting IT operations teams when performance thresholds are exceeded. SLA Commander integrates with OPNET's ACE™ to enable the in-depth analysis of problems that are intermittent or cannot easily be reproduced. 4 Stages of Implementation SLA Commander Key Features • Automated, around-the-clock application monitoring with threshold-based alarms • Convenient web-based dashboard that displays application service levels, enabling at-a-glance identification of problem areas • Comprehensive service model that maps infrastructure and application components to a business service • Early warning alerts to advise support teams of performance degradation • Drill-down analysis into poorly performing services to isolate faults to specific components • Intuitive authoring environment to create test scripts without programming, by recording a user's browser activity • High-fidelity browser playback of scripted transactions • Integration with OPNET's free ACE™ Capture Agents to automatically capture and archive packet traces of problematic transactions for subsequent analysis in ACE
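The synthetic-transaction idea can be sketched as follows; the transaction here is a simple sleep standing in for a scripted web request, and the threshold value is illustrative:

```python
import time

def run_synthetic_transaction(transaction, threshold_s):
    """Time one synthetic transaction and report whether it breached the
    response-time threshold (a toy stand-in for threshold-based alarms)."""
    start = time.perf_counter()
    transaction()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed > threshold_s

# Hypothetical transaction: a 50 ms sleep standing in for a web request.
elapsed, breached = run_synthetic_transaction(lambda: time.sleep(0.05),
                                              threshold_s=0.5)
print(f"{elapsed:.3f}s, alert={breached}")
```

A production monitor would run such transactions on a schedule, record the series, and page the operations team on a breach; this sketch shows only the measure-and-compare core.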
Stage 3 : Implementation Assessing Application Networkability Workflow
Stage 3 : Implementation Methodology 1 The process of capturing application data that accurately reflects the behavior of the application. Capture and Import Application Packet Traces 2 <ul><li>Application response time is a key metric to measure end-user satisfaction. Several factors can degrade response time, including: decreased server performance due to a new server OS; increased traffic on the network; degraded reliability of the network; unfavorable protocol configuration. </li></ul><ul><li>Common questions related to an application’s performance are: </li></ul><ul><li>How does the application exchange data? How much traffic does it generate on the network? </li></ul><ul><li>What are the key components of the response time? </li></ul><ul><li>What are the causes of end-to-end performance problems (caused either by the network, server, or application)? </li></ul><ul><li>What changes can fix the performance problems? </li></ul>Analyzing the Application
3 The network impact can be studied by changing network parameters (bandwidth, latency, packet loss, link utilization, TCP window size, etc.) and observing the effect on application response time. An example is plotting the application response time against any one parameter while keeping the others fixed. In general, the application response time should decrease if you increase bandwidth and/or reduce packet loss, link utilization and latency. Study Network Impact Stage 3 : Implementation Methodology 4 Changes in the application behavior will cause changes in the underlying network data exchange. Modifying the number of application turns, application bytes, and the processing times on relevant tiers will produce a data exchange pattern that reflects the application behavior. Modify Application Characteristics
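A common back-of-envelope model captures how these characteristics combine: response time ≈ turns × RTT + application bytes / bandwidth + processing time. A sketch with illustrative numbers, showing why reducing turns often beats adding bandwidth for a chatty application (all parameter values are assumptions for the example):

```python
def response_time(turns, app_bytes, bandwidth_bps, rtt_s, processing_s):
    """Back-of-envelope application response time:
    chattiness (turns x RTT) + payload transfer + tier processing."""
    return turns * rtt_s + app_bytes * 8 / bandwidth_bps + processing_s

# Illustrative transaction: 100 turns, 1 MB exchanged, 10 Mbit/s, 50 ms RTT.
base = response_time(100, 1_000_000, 10_000_000, 0.050, 0.5)
# Doubling bandwidth helps far less than halving the turns on this chatty app.
faster_link = response_time(100, 1_000_000, 20_000_000, 0.050, 0.5)
fewer_turns = response_time(50, 1_000_000, 10_000_000, 0.050, 0.5)
print(base, faster_link, fewer_turns)
```

Plotting this function against one parameter at a time, as the methodology suggests, reveals which term dominates and therefore which change is worth making.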
  39. 39. 5 The following are the steps in simulating the application: Auto-Create the Basic Topology The model and configuration of the topology are based on the number of tiers. Specifying its LAN segments helps determine other parameters such as loss and latency, in addition to the WAN technology (IP, ATM or Frame Relay) and the bandwidth. Selecting the appropriate device models enables you to capture application packet traces in the simulation the same way protocol traces are captured in the real world. Determine Propagation Delay and Latency The discrete event simulation’s default method of determining the propagation delay using a “line-of-sight” geographic distance may often give a propagation delay that is too low because, for example, the actual network links may not follow a true line of sight. Therefore, it is often important to explicitly set latency/propagation attribute values when simulating application traffic, especially when doing application response time studies over TCP. Simulate the Application Stage 3 : Implementation Methodology
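To see why a line-of-sight distance underestimates latency, consider a worked example; the distances and the fiber velocity factor are illustrative assumptions:

```python
C = 299_792_458       # speed of light in vacuum, m/s
FIBER_FACTOR = 0.67   # signal speed in optical fiber is roughly 2/3 c

def propagation_delay_ms(path_km):
    """One-way propagation delay for a given fiber path length."""
    return path_km * 1000 / (C * FIBER_FACTOR) * 1000

line_of_sight_km = 4_000   # great-circle distance between sites
routed_path_km = 6_000     # the actual fiber route is rarely straight
print(round(propagation_delay_ms(line_of_sight_km), 1))  # ~19.9 ms
print(round(propagation_delay_ms(routed_path_km), 1))    # ~29.9 ms
```

The extra ~10 ms per direction compounds over every application turn, which is why explicitly setting measured latency values matters for TCP response time studies.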
  40. 40. Tune Protocols and Set Parameters The parameters of the clients and servers are typically the most important. In general, depending on the protocols and devices you have chosen, there may be many parameters. Advanced versions of the device models give access to the broadest range of parameters. Parameters for TCP are often the most influential when working with applications that use this protocol. The advanced versions of the client and server models provide a full complement of TCP parameters that can be controlled. Understand the Important TCP Parameters “TCP Delayed Acknowledgement Mechanism” controls how delayed “dataless” acknowledgements are sent by the TCP connection process. Note that TCP does not send an ACK the instant that it receives data. Instead, it delays the ACK, hoping that it will have data to send with it (called “ACK piggybacking”). “TCP Maximum Acknowledgement Delay” is the longest time that a TCP connection process waits to send an ACK after receiving data. “TCP Receive Buffer Usage Threshold” affects the window size of the TCP connection. The window size is the amount of space available in the receive buffer. The usage threshold determines when data should be transferred from TCP’s receive buffer to the application, thereby allowing the receive window to open further. Stage 3 : Implementation Methodology: Simulate the Application
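Outside the simulator, some of these knobs have real-world counterparts exposed through the sockets API; a minimal sketch (delayed-ACK timers, by contrast, are usually OS-level tunables rather than per-socket options):

```python
import socket

# Create a TCP socket and inspect/tune parameters analogous to those above
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)  # larger buffer -> larger advertised window
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)        # disable Nagle for chatty applications

nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(default_rcvbuf > 0)  # True
print(nodelay != 0)        # True
s.close()
```

Note that on some systems the kernel adjusts the requested buffer size (e.g. Linux doubles it), so read the value back rather than assuming the exact figure took effect.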
  41. 41. Run the Initial Simulation and Get Results Choose a simulation duration and select statistics to obtain the response time measurements, allowing you to confirm that each element is behaving as expected. These should include application response time, server load and task-processing-related statistics, link utilization, and sent and received data throughput for the application. Running the Initial Simulation and Validating the Application Response Time The overall simulated response time for the application’s transactions may not match what you observe on your actual network because of several factors, such as the network not being fully modeled or the protocol parameters not yet being tuned. The packet analyzer captures a trace of the application task that is being simulated. Importing the application packet trace allows you to compare the statistics and diagnoses to those originally imported from the live network. In most cases, the results will match closely. The results will not match if the protocol parameters are not configured appropriately or the effect of other users has not been taken into account. Represent the Server Servers are highly complex devices composed of numerous subsystems that perform tasks with varying degrees of concurrency. Furthermore, the behavior of various applications and operating systems varies greatly from vendor to vendor, and even from revision to revision, due to patches and upgrades. As a result, creating models of server performance can be difficult, but it becomes easier if the models built are kept simple. Stage 3 : Implementation Methodology: Simulate the Application
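The validation step above amounts to a relative-error check between simulated and measured response times; a minimal sketch (the 15% tolerance is an assumed figure, not a prescribed one):

```python
def within_tolerance(simulated_s, measured_s, tol=0.15):
    """True when the simulated response time matches the live capture
    to within a relative tolerance."""
    return abs(simulated_s - measured_s) / measured_s <= tol

print(within_tolerance(2.3, 2.1))  # True: about 9.5% off
print(within_tolerance(4.0, 2.1))  # False: retune protocol parameters
```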
  42. 42. 6 To model the effect of other users and traffic sources, be sure to create the appropriate load on the various components in the path. While it exceeds the scope of this methodology, obtaining load information is often done using network performance management tools that monitor statistics gathered by agents in the network. Run Simulations and Get Results Once the topology is built, the effect of other users is included, and all the relevant protocol parameters are tuned, run the simulation and obtain results. Troubleshoot Application Response Times Load and possible congestion in the network can be the source of the extra delay when the simulated and actual response times do not match. Congestion is indicated by repeated sequence numbers. Retransmissions of packets that are being dropped can be a significant contributor to lagging application response times. Client Relocation Approach Moving the client to various locations is an effective approach to locating the source of additional delay in the path between client and server. By “plugging” the client into different locations along the path and taking response time measurements, you can obtain an estimate of the contribution of each segment of the path to the overall response time. Model the Effect of Other Users and Traffic Sources Stage 3 : Implementation Methodology
  43. 43. Ping Approach Instead of physically moving the client to different locations, the “ping” command can determine the round-trip times from the client to other components in the path, provided that those components also use the IP protocol. The round-trip times give an idea of where the latency lies in the path. Stage 3 : Implementation Methodology: Model the Effect of Other Users and Traffic Sources 7 Results can be analyzed by viewing the output of the simulation in the form of graphs and statistics. These results allow you to iteratively construct what-if scenarios and study the impact of the changes on the application. Analyze Results 8 Reports are used to help collaborators understand the application’s performance test results. Visual displays and graphs are essential report elements for demonstrating the key findings effectively. Generate Reports
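Where raw ICMP ping needs elevated privileges, the round-trip time to any component that accepts TCP connections can be approximated by timing the three-way handshake; a sketch (the loopback demo stands in for real path components):

```python
import socket
import time

def tcp_rtt_ms(host, port, timeout=2.0):
    """Approximate RTT by timing the TCP three-way handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# Demo against a local listener; in practice, measure each router or
# server along the client-server path and compare the RTTs.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
rtt = tcp_rtt_ms("127.0.0.1", srv.getsockname()[1])
srv.close()
print(0 <= rtt < 1000)  # True: loopback RTT is tiny
```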
  44. 44. Stage 3 : Implementation Team Structure Collaboration Application Group Business Application Group
  45. 45. <ul><li>Maintain the network connectivity </li></ul><ul><li>Record the application’s network resource consumption </li></ul><ul><li>Verify the status of the other applications working in the same network </li></ul><ul><li>Provide graphical results of the application’s performance on a timely basis </li></ul><ul><li>Maintain the application’s accuracy and data integrity </li></ul><ul><li>Measure the amount of data being downloaded and uploaded </li></ul><ul><li>Perform on-the-spot analysis, evaluation and immediate response in the event of application failure </li></ul>Stage 3 : Implementation Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
  46. 46. Network and application performance are measured Failure events are recorded Contingency plans are performed End-users are well trained The objectives should be met in order to proceed to the next stage Stage 3 : Implementation Deliverables
  47. 47. To identify the major requirements of the application to be tested, both hardware and software To identify the need for hardware changes To assess end users’ ability to use the application To ensure the integrity of the application during runtime Stage 4 : Testing Environment Objectives
  48. 48. Business / Collaboration Application Team Network Interactive Service Team Stage 4 : Testing Environment Team Structure
  49. 49. <ul><li>Identify the application’s network traffic requirements </li></ul><ul><li>Identify the maximum resource requirements of the application based on the number of clients </li></ul><ul><li>Identify the application’s access rights </li></ul><ul><li>Check the accuracy of the connectivity of the multi-tier network </li></ul><ul><li>Identify the application’s server and client software and hardware requirements </li></ul><ul><li>Test the integrity of the application: its modules, back-end and front-end connectivity, and response time </li></ul><ul><li>Check the application’s local system resource consumption </li></ul><ul><li>Check the applications affected during runtime </li></ul>Stage 4 : Testing Environment Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
  50. 50. All software, hardware and network performance are identified Application integrity and connectivity are measured Connectivity issues of all tiers are tested and recorded Enhancements and module revisions are identified Hardware requirements are identified End-users’ application usage skills are evaluated The objectives should be met in order to proceed to the next stage Stage 4 : Testing Environment Deliverables
  51. 51. To determine the impact of the application on the live network infrastructure To verify the end result of the application simulation To evaluate the reporting performance To identify the enhancements needed in the application based on the implementation results Stage 5 : Analyzing Baseline Scenario Objectives
  52. 52. Stage 5 : Analyzing Baseline Scenario Assessing Application Impact
  53. 53. Stage 5 : Analyzing Baseline Scenario Methodology 1 The process of capturing application data that accurately reflects the behavior of the application. Capture and Import Application Packet Traces 2 <ul><li>Navigate the Application Packet Trace and Answer Some Basic Questions </li></ul><ul><li>After importing the application packet trace, a survey can be performed from the data exchange chart. The questions below cover some factors that need to be answered to analyze the application: </li></ul><ul><li>Does the application trace contain only one task? Does it contain portions of a previous or succeeding task? </li></ul><ul><li>Does the application packet trace look like what is expected? </li></ul><ul><li>Does the amount of traffic look accurate? </li></ul><ul><li>Are there huge delays or “gaps” in the diagram? </li></ul><ul><li>How do the application chart and network chart compare? </li></ul><ul><li>Are the packet sizes what is expected? </li></ul><ul><li>What is the general direction of traffic? </li></ul>Analyzing the Application
  54. 54. <ul><li>Perform Detailed Application Analysis </li></ul><ul><li>A detailed analysis of application performance can be obtained by answering a few questions: </li></ul><ul><li>What are the components of the application response time? </li></ul><ul><li>Is the application utilizing network resources adequately? </li></ul><ul><ul><li>By using a graph, the network throughput, application throughput, and the TCP in-flight data can be assessed. </li></ul></ul><ul><li>Can you relate the time spent on the server with the server performance? </li></ul><ul><ul><li>By relating the performance data and the server statistics, the performance change at a specific tier is viewable whenever a transaction is committed. </li></ul></ul><ul><li>Compare Network and Application Charts </li></ul><ul><li>To see how an application message was transported across the network, compare the network and application charts. There are a number of protocol effects, such as TCP ACKs, Nagle’s algorithm, or TCP retransmissions, that can be recognized by viewing the traffic pattern in the network data exchange chart. </li></ul><ul><li>Validate the Import </li></ul><ul><li>To validate the application data exchange chart, ensure that the application message transfers do not “cross” for a particular connection. </li></ul>Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
  55. 55. Provide Diagnoses and Statistics The diagnoses and statistics include the delays on each tier, the packet sizes, protocol delays, network transmission delays, propagation delays and so on. The diagnosis is based on different interpretations of the statistical data. If the value in a diagnosis exceeds its threshold, it is considered a “Bottleneck”. If it is close to the threshold, it is considered a “Potential Bottleneck”. If it is below the potential bottleneck range, it is considered “No Bottleneck”. Processing delay bottleneck is the processing time expressed as a percentage of the total response time. This delay represents the time taken by operations within the machine, such as file I/O, CPU time, disk time, or memory access. Protocol overhead bottleneck is the total protocol overhead expressed as a percentage of the total amount of data transferred. Each protocol adds overhead to an application message in the form of headers. Protocols also send packets that do not contain application data, such as ACKs. These packets are also counted as protocol overhead. Chattiness bottleneck is the number of application bytes per application turn. If an application is “chatty”, the data sent in each application turn is small. This may cause significant network delays and also processing delays at each tier, since each tier now has to handle many little messages. Network cost of chattiness bottleneck is the total network delay incurred due to application turns, represented as a percentage of the total application response time. Applications that send many small packets back and forth incur a network delay. This delay becomes significant if there is a high-latency link. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
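The three-way classification and the chattiness metric above can be sketched directly; the 70% threshold and 0.8 margin are illustrative assumptions, not product defaults:

```python
def classify(value, threshold, margin=0.8):
    """Map a diagnosis value onto the three categories described above."""
    if value >= threshold:
        return "Bottleneck"
    if value >= threshold * margin:
        return "Potential Bottleneck"
    return "No Bottleneck"

# Processing delay as a percentage of total response time
processing_s, total_s = 3.2, 4.0
print(classify(100 * processing_s / total_s, threshold=70))  # Bottleneck

# Chattiness: application bytes per application turn (low values are bad)
app_bytes, turns = 120_000, 600
print(app_bytes / turns)  # 200.0 bytes per turn -- a chatty application
```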
  56. 56. Provide Diagnoses and Statistics (continued…) Propagation delay bottleneck is the time taken by the packets to propagate across the network, represented as a percentage of the total application response time. Propagation delay is a function of the distance traveled and the speed of light. Device latencies can also add to this bottleneck. Transmission delay bottleneck is the transmission delay caused by line speeds, expressed as a percentage of the total application response time. The transmission delay is a function of the total bytes transmitted and the line speed. Protocol delay bottleneck is the total delay due to protocol effects, represented as a percentage of the total application response time. Examples of protocol effects are TCP flow control, congestion control, delay due to retransmissions, and collisions. Retransmissions bottleneck is the total percentage of packets that were retransmitted. Protocols such as TCP retransmit a packet if they detect a long latency or a packet loss. Retransmission causes delays and additional protocol overhead. TCP also reduces the rate at which applications can send traffic when a retransmission occurs, as a means of congestion control. This causes additional throttling of application traffic. Packet loss or unusual delays that trigger retransmissions can occur as a result of “bursty” application traffic, overflowing queues, misbehaving devices, and link or node failures. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
  57. 57. Provide Diagnoses and Statistics (continued…) TCP windowing bottleneck occurs when the TCP window size is smaller than the bandwidth-delay product of the connection. When an application sends bulk data over a TCP connection, the TCP window size should be large enough to permit TCP to send many packets in a row without having to wait for TCP ACKs. TCP frozen window bottleneck means the advertised TCP receive window has dropped to a value smaller than the Maximum Segment Size (MSS). When this occurs, the sender cannot send any data until the receive window is one MSS or larger. To determine if the receive window has become larger, the sending side periodically sends one-byte probe packets. The contents of these probe packets depend on the particular implementation, but they are usually sent with an exponential backoff. The common reason for the frozen window is that the application on the receiving side is not taking data from the TCP receive buffer quickly enough. TCP Nagle’s algorithm bottleneck indicates that Nagle’s algorithm is present and is slowing application response times. Nagle’s algorithm is a sending-side algorithm that reduces the number of small packets on the network, thereby increasing router efficiency. Interacting with TCP delayed ACKs, however, Nagle’s algorithm can introduce excessive waits and slow down the application. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
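The TCP windowing diagnosis reduces to comparing the window with the bandwidth-delay product; a worked example with assumed link figures:

```python
def bandwidth_delay_product_bytes(bandwidth_bps, rtt_s):
    """Bytes that must be in flight to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8

bdp = bandwidth_delay_product_bytes(100e6, 0.05)  # 100 Mbps link, 50 ms RTT
print(int(bdp))            # 625000 bytes
window = 64 * 1024         # a classic 64 KB receive window
print(window >= bdp)       # False: the window throttles throughput
print(int(window * 8 / 0.05 / 1e6))  # 10 Mbps is the best this window allows
```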
  58. 58. Recommendations The implications of each diagnosis and our recommendations for correcting the problem are described below: Processing delay – Improve the overall speed of the machine by adding faster processors, faster disks and more memory. Consider revamping the application so it uses machine resources more efficiently. For example, a database application can benefit from indexing, transferring large records at once, and redesigning database queries. Protocol overhead – Consider sending larger application packets. This reduces the amount of header information that the protocol has to add, as there will be fewer application messages. Protocols such as TCP will also reduce the number of ACKs that have to be transmitted. Chattiness – Send fewer small application messages. Modify the application logic so that more data is sent in parallel. If a database is fetching one record at a time, try modifying it so that it obtains all the requested records, stores them in a structure, and sends the structure all at once. Network cost of chattiness – If the application is incurring significant network delay due to chattiness, try to eliminate the “chattiness” bottleneck. Consider reducing the transmission and propagation delay between tiers. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
  59. 59. Recommendations (continued…) Propagation delay – Move the affected tiers closer together. Use intermediate devices that are faster, that is, ones that have a smaller latency. Use a utility program to examine actual network conditions. Transmission delay – Increase the line speed and reduce the number of hops that the messages have to traverse. Use a utility program to examine actual network conditions. Protocol delay – Retransmissions or unusual latencies are the causes of protocol delay. If the protocol is TCP and the application is sending small packets, check to see if the application has enabled Nagle’s algorithm. This algorithm causes small messages to wait until larger segments are formed for efficient transmission. However, this adversely affects interactive applications that send many little messages back and forth. Connection resets – A reset implies that a connection could not be completed, or the connection was disconnected because the peers could not contact each other. A small number of resets is fairly common for applications such as HTTP, but if there is a large number of resets, check for loss of connectivity among the tier pairs. Retransmissions – These are caused by loss or long delays. Eliminate the cause of the packet loss or the long delay. There are some networks that you have no control over, such as the Internet. Try to use different technologies such as VPN or IP tunneling, or attempt to obtain a higher Quality of Service (QoS) from the ISP. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario
  60. 60. Recommendations (continued…) TCP windowing – Use larger TCP send and receive windows. These windows should be greater than the bandwidth-delay product for the connection. Use newer versions of TCP that have options such as SACK. Most operating systems allow modification of a select set of TCP parameters. TCP frozen window – Try to send less data, or have the receiving application retrieve the data more quickly. If the application cannot process all the data at once, consider storing the data in another buffer. Upgrade the receiving computer. TCP Nagle’s algorithm – Disable Nagle’s algorithm for this application. Rewrite the application so that it sends fewer, larger packets, or does not encounter a TCP delayed ACK. Configure TCP on the receiving host so that TCP acknowledges every packet it receives. Methodology: Analyzing the Application Stage 5 : Analyzing Baseline Scenario Succeeding methodologies are explained in detail at Stage 5
  61. 61. Stage 5 : Analyzing Baseline Scenario Team Structure Collaboration Application Group Business Application Group
  62. 62. <ul><li>Identify the application’s network traffic requirements </li></ul><ul><li>Identify the maximum resource requirements of the application based on the number of clients </li></ul><ul><li>Identify the application and user access rights </li></ul><ul><li>Perform documentation on all data gathered </li></ul><ul><li>Identify the application’s major requirements </li></ul><ul><li>Measure the application’s integrity and perform recursive simulation </li></ul><ul><li>Identify the application’s minimum and required hardware requirements </li></ul><ul><li>Perform documentation on all data gathered </li></ul>Stage 5 : Analyzing Baseline Scenario Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
  63. 63. Network allocation resources are identified The application’s integrity, data connectivity and reporting performance are measured Future application enhancements are identified All plans are created in preparation for the Go Live Stage The objectives should be met in order to proceed to the next stage Stage 5 : Analyzing Baseline Scenario Deliverables
  72. 72. To identify the deployment process of the application to the live servers To identify the actual impact of the application deployment on other applications currently running on the network To verify the accuracy and credibility of data exchange between clients and servers Stage 6 : Go Live Scenario Objectives
  73. 73. Stage 6 : Go Live Scenario Team Structure Collaboration Application Group Business Application Group
  74. 74. <ul><li>Identify the application’s network traffic requirements </li></ul><ul><li>Identify the maximum resource requirements of the application based on the number of clients </li></ul><ul><li>Identify the application’s access rights </li></ul><ul><li>Analyze the procedure for deploying the application to the live server </li></ul><ul><li>Check the application’s accuracy of data exchange between the client and the server </li></ul><ul><li>Perform analysis and evaluation </li></ul>Stage 6 : Go Live Scenario Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
  75. 75. Recorded results of application and network performance upon deployment Result analysis of the hardware performance Identification of the weak parts of the network Stage 6 : Go Live Scenario Deliverables
  76. 76. To finalize the end results and present the output to Top Management To document the project’s related issues, including software documentation and summarization Stage 7 : Project Closing Objectives
  77. 77. Stage 7 : Project Closing Team Structure Collaboration Application Group Business Application Group
  78. 78. <ul><li>Present the end result of the network’s performance after deployment </li></ul><ul><li>Document the existing network infrastructure after deployment, including the present network allocation status </li></ul><ul><li>Present the end result of the application’s performance after deployment </li></ul><ul><li>Document the application performance project </li></ul>Stage 7 : Project Closing Roles and Responsibilities Business / Collaboration Application Team Network Interactive Service Team
  79. 79. Documentation of the project must be presented Project review Project turnover to CCIS from the vendor Establish action plans for identified additional needs Stage 7 : Project Closing Deliverables
