Dot Net Performance Monitoring
Presentation Transcript

  • ASP.NET Performance Monitoring and Analysis – Part 1 (Kranthi Paidi)
  • Foreword
    All the material in this PowerPoint presentation is a consolidation of what I read and learned on the internet, in many technical forums, on Microsoft's MSDN, and in books. All the credit goes to the following:
    MSDN TechNet website (http://technet.microsoft.com/en-us/ms376608.aspx)
    Improving .NET Application Performance and Scalability (http://www.amazon.com/Improving-Application-Performance-Scalability-Practices/dp/0735618518/ref=sr_1_1?ie=UTF8&qid=1351449492&sr=8-1&keywords=improving+.net+application+performance+and+scalability+patterns+and+practices)
  • Part 1 – Session Goals
    Understand a general set of performance counters for all Microsoft web applications (some of them are technology-agnostic)
    Understand the specific set of performance counters for the CLR and the ASP.NET framework
    Understand the impact of trends in various performance counters
    Get a basic idea of the different ways to instrument and measure the performance of .NET applications
    Get a basic understanding of View State
  • Performance Objectives
    Performance objectives for any web application include the following:
    Response time or latency
    Throughput
    Resource utilization
  • Response Time or Latency
    Latency measured at the server. This is the time taken by the server to complete the execution of a request. It does not take into account the client-to-server latency, which includes additional time for the request and response to cross the network.
    Latency measured at the client. The latency measured at the client includes the request queue, the time taken by the server to complete the execution of the request, and the network latency. We can measure this latency in various ways. Two common approaches are to measure the time taken by the first byte to reach the client (time to first byte, or TTFB) or the time taken by the last byte of the response to reach the client (time to last byte, or TTLB). Generally, we should test this using various network bandwidths between the client and the server.
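Client-side latency reduces to three timestamps per request. A minimal, technology-agnostic sketch in Python (the type and function names are illustrative, not part of the guide):

```python
from dataclasses import dataclass

@dataclass
class RequestTiming:
    sent: float        # moment the request left the client
    first_byte: float  # moment the first response byte arrived
    last_byte: float   # moment the final response byte arrived

def ttfb(t: RequestTiming) -> float:
    """Time to first byte: request queueing + server execution + network."""
    return t.first_byte - t.sent

def ttlb(t: RequestTiming) -> float:
    """Time to last byte: TTFB plus the time to stream the full response."""
    return t.last_byte - t.sent
```

Load-testing tools record exactly these marks; repeating the measurement at different simulated bandwidths separates server time from network time.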
  • Throughput
    Throughput is the number of requests that can be successfully served by our application per unit of time. It can vary depending on the load (number of users) applied to the server. Throughput is usually measured in terms of requests per second. In some systems, throughput may go down when there are many concurrent users. In other systems, throughput remains constant under pressure but latency begins to suffer, perhaps due to queuing. Other systems strike some balance between maximum throughput and overall latency under stress.
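The two observations above (requests per second, and throughput falling as load rises) can be sketched as small helpers; an illustrative Python sketch with hypothetical names:

```python
def throughput_rps(successful_requests: int, window_seconds: float) -> float:
    """Requests per second over a measurement window."""
    if window_seconds <= 0:
        raise ValueError("window must be positive")
    return successful_requests / window_seconds

def degrades_under_load(samples: list[tuple[int, float]]) -> bool:
    """samples: (concurrent_users, rps) pairs.
    True if throughput ever falls as the user count grows."""
    ordered = sorted(samples)
    rates = [rps for _, rps in ordered]
    return any(later < earlier for earlier, later in zip(rates, rates[1:]))
```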
  • Resource Utilization
    We identify resource utilization costs in terms of server and network resources. The primary resources are the following:
    CPU
    Memory
    Disk I/O
    Network I/O
    We can identify the resource cost on a per-operation basis. Operations might include browsing a product catalog, adding items to a shopping cart, or placing an order. We can measure resource costs for a given user load, or we can average resource costs when the application is tested using a given workload profile.
  • Metrics Matter
    Measuring involves collecting data during the various stages of our application's development life cycle. We might need to collect data during prototyping, application development, performance testing and tuning, and in a production environment. The following tools and techniques help us collect the data:
    System and platform metrics
    Network monitoring tools
    Profiling tools
    Tools for analyzing log files
    Application instrumentation
  • System and Platform Metrics
    System Monitor. This is a standard component of the Windows operating system. We can use it to monitor performance objects, counters, and instances of various hardware and software components.
    Microsoft Operations Manager (MOM) (System Center 2012). We can install MOM agents on individual servers that collect data and send it to a centralized MOM server. The data is stored in the MOM database, which can be a dedicated SQL Server or the Microsoft SQL Server 2000 Desktop Engine (MSDE) version of Microsoft SQL Server. MOM is suitable for collecting large amounts of data over a long period of time. (http://technet.microsoft.com/en-us/library/hh205987.aspx)
    Stress tools such as LoadRunner or VSTS. We can use tools such as LoadRunner to simulate clients and collect data for the duration of a test.
  • Network Monitoring Tools
    Internet Protocol Security (IPSec) monitor. We can use IPSec monitor to confirm whether our secured communications are successful. In this way we can monitor any possible pattern of security-related or authentication-related failures.
    Network Monitor (NetMon). We use NetMon to monitor network traffic. We can use it to capture packets sent between client and server computers. It provides valuable timing information as well as packet size, network utilization, addressing, routing, and many other statistics that we can use to analyze system performance.
  • Profiling Tools
    CLR Profiler. This allows us to create a memory allocation profile for our application so we can see how much allocation occurs, where it occurs, and how efficient the garbage collector is within our application. By using the various profile views, we can obtain many useful details about the execution, allocation, and memory consumption of our application.
    SQL Profiler. This profiling tool is installed with SQL Server. We can use it to identify slow and inefficient queries, deadlocks, timeouts, recompilations, errors, and exceptions for any database interactions.
    SQL Query Analyzer. This tool is also installed with SQL Server. We can use it to analyze the execution plans for SQL queries and stored procedures. It is mostly used in conjunction with SQL Profiler.
  • Log Files
    Analyzing log files is an important activity when tuning or troubleshooting our live application. Log files can provide usage statistics for various modules of the application, a profile of the users accessing our application, and errors and exceptions, together with other diagnostics that can help identify performance issues and their possible causes.
    We can analyze various logs, including Internet Information Services (IIS) logs, SQL Server logs, Windows event logs, and custom application logs. We can use various third-party tools to analyze and extract details from these log files.
    Examples of log analyzers: Logparser, Google Analytics, Omniture
  • Application Instrumentation
    In addition to the preceding tools, we can instrument our code to capture application-specific information. This form of information is much more fine-grained than that provided by standard system performance counters. It is also a great way to capture metrics around specific application scenarios. For example, instrumentation enables us to measure how long it takes to add an item to a shopping cart, or how long it takes to validate a credit card number.
    Instrumentation is the process of adding code to our application to generate events that allow us to monitor application health and performance. The events are generally logged to an appropriate event source and can be monitored by suitable monitoring tools such as MOM. An event is a notification of some action.
    Instrumentation allows us to perform various tasks:
    Profile applications. Profiling enables us to identify how long a particular method or operation takes to run and how efficient it is in terms of CPU and memory resource usage.
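In .NET this is typically done with the Trace class or custom performance counters; the idea itself is language-neutral. An illustrative Python sketch of the simplest form, timing a business operation and emitting an event per call (names are hypothetical):

```python
import functools
import logging
import time

def instrumented(func):
    """Log how long each call takes: the simplest timing instrumentation."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("%s took %.2f ms", func.__name__, elapsed_ms)
    return wrapper

@instrumented
def validate_card(number: str) -> bool:
    # placeholder for the "validate a credit card number" scenario above
    return number.isdigit() and len(number) == 16
```

In production the log call would be replaced by whatever sink the instrumentation framework provides, so the overhead and destination stay configurable.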
  • Application Instrumentation – Options
    Collect custom data. This might include custom performance counters that we use to monitor application-specific activity, such as how long it takes to place an order.
    Trace code. This allows us to understand the application code path and all the methods run for a particular use case.
    We have several options:
    Event Tracing for Windows (ETW)
    Windows Management Instrumentation (WMI)
    Custom performance counters
    Enterprise Instrumentation Framework (EIF)
    Trace and Debug classes
  • Application Instrumentation – Decision Making
    What do we want to accomplish with instrumentation? There are various goals for instrumentation. For example, if we need only debugging and tracing features, we might use these features mostly in a test environment where performance is not an issue. But if we plan to log the business actions performed by each user, this is very important from a performance point of view and needs to be considered at design time. If we need to trace and log activity, we need to opt for a tool that lets us specify various levels of instrumentation through a configuration file.
    How frequently do we need to log events? The frequency of logging is one of the most important factors in choosing the right instrumentation tool. For example, the event log is suitable only for very low-frequency events, whereas a custom log file or ETW is more suitable for high-frequency logging. Custom performance counters are best for long-term trending, such as week-long stress tests.
    Is our instrumentation configurable? In a real-life scenario for a typical application, we might want to instrument it so the data collected is helpful in various application stages, such as development (debugging/tracing), system testing, tuning, and capacity planning.
  • Application Instrumentation – When and What
    Based on the preceding considerations, the usage scenarios for the various options are as follows.
    Event Tracing for Windows (ETW). ETW is suitable for logging high-frequency events such as errors, warnings, or audits. The frequency of logging can be in the hundreds of thousands of events each second in a running application. This is ideal for server applications that log the business actions performed by the users. The amount of logging done may be high, depending on the number of concurrent users connected to the server.
    Windows Management Instrumentation (WMI). WMI is the core management technology built into the Windows operating system. WMI supports a very wide range of management tools that let us analyze the data collected for our application. Logging to a WMI sink is more expensive than other sinks, so we should generally do so only for critical and high-visibility events such as a system driver failure.
  • Application Instrumentation – When and What
    Enterprise Instrumentation Framework (EIF). EIF permits .NET applications to be instrumented to publish a broad spectrum of information such as errors, warnings, audits, diagnostic events, and business-specific events. We can configure which events we generate and where the events go (to the Windows Event Log service or SQL Server, for example). EIF encapsulates the functionality of ETW, the Event Log service, and WMI. EIF is suitable for large enterprise applications where we need various logging levels across the tiers of our application. EIF also provides a configuration file where we can turn the logging of a particular counter on or off. EIF also has an important request-tracing feature that enables us to trace business processes or application services by following an execution path for an application spanning multiple physical tiers. EIF is available as a free download from the Microsoft Download Center at the Microsoft Enterprise Instrumentation Framework page at http://www.microsoft.com/downloads/details.aspx?FamilyId=80DF04BC-267D-4919-8BB4-1F84B7EB1368&displaylang=en
  • Application Instrumentation – When and What
    Trace and Debug classes. The Trace and Debug classes allow us to add functionality to our application, mostly for debugging and tracing purposes. Code added using the Debug class is relevant only for debug builds of the application. We can print debugging information and check logical conditions with assertions in our code. We can write the debug information by registering a particular listener.
    Example:
    public static void Main(string[] args)
    {
        Debug.Listeners.Add(new TextWriterTraceListener(Console.Out));
        Debug.AutoFlush = true;
        Debug.WriteLine("Entering Main");
        Debug.WriteLine("Exiting Main");
    }
  • System Resources
    When we need to measure how many system resources our application consumes, we need to pay particular attention to the following:
    Processor. Processor utilization, context switches, interrupts, and so on.
    Memory. Amount of available memory, virtual memory, and cache utilization.
    Network. Percentage of the available bandwidth being utilized, network bottlenecks.
    Disk I/O. Amount of read and write disk activity. I/O bottlenecks occur if read and write operations begin to queue.
  • Processor
    To measure processor utilization and context switching, we can use the following counters:
    Processor\% Processor Time
    Threshold: The general figure for the threshold limit for processors is 85 percent.
    Significance: This counter is the primary indicator of processor activity. High values may not necessarily be bad. However, if other processor-related counters, such as % Privileged Time or Processor Queue Length, are increasing linearly, high CPU utilization may be worth investigating.
    Processor\% Privileged Time
    Threshold: A figure that is consistently over 75 percent indicates a bottleneck.
    Significance: This counter indicates the percentage of time a thread runs in privileged mode. When our application calls operating system functions (for example, to perform file or network I/O or to allocate memory), these operating system functions are executed in privileged mode.
  • Processor
    Processor\% Interrupt Time
    Threshold: Depends on the processor.
    Significance: This counter indicates the percentage of time the processor spends receiving and servicing hardware interrupts. This value is an indirect indicator of the activity of devices that generate interrupts, such as network adapters. A dramatic increase in this counter indicates potential hardware problems.
    System\Processor Queue Length
    Threshold: An average value consistently higher than 2 indicates a bottleneck.
    Significance: If there are more tasks ready to run than there are processors, threads queue up. The processor queue is the collection of threads that are ready but not able to be executed by the processor because another active thread is currently executing. A sustained or recurring queue of more than two threads is a clear indication of a processor bottleneck. We can use this counter in conjunction with the Processor\% Processor Time counter to determine whether our application can benefit from more CPUs. In a multiprocessor computer, divide the Processor Queue Length (PQL) value by the number of processors servicing the workload.
  • Processor
    If the CPU is very busy (90 percent and higher utilization) and the PQL average is consistently higher than 2 per processor, we may have a processor bottleneck that could benefit from additional CPUs. Alternatively, we could reduce the number of threads and queue more work at the application level. This causes less context switching, and less context switching is good for reducing CPU load. The common reason for a PQL of 2 or higher with low CPU utilization is that requests for processor time arrive randomly and threads demand irregular amounts of time from the processor. This means that the processor is not the bottleneck but that our threading logic needs to be improved.
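The CPU-bottleneck heuristic above combines two counters. An illustrative Python sketch of the rule (function names are my own, not part of the guide):

```python
def pql_per_processor(queue_length: float, processor_count: int) -> float:
    """Normalize System\\Processor Queue Length on a multiprocessor machine."""
    return queue_length / processor_count

def cpu_bound(cpu_percent: float, queue_length: float, processor_count: int) -> bool:
    """Heuristic from the text: busy CPU (>= 90%) plus a sustained
    per-processor queue above 2 suggests the box could use more CPUs."""
    return cpu_percent >= 90 and pql_per_processor(queue_length, processor_count) > 2
```

A high queue with low CPU, by contrast, points at the threading logic rather than the hardware.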
  • Processor
    System\Context Switches/sec
    Threshold: As a general rule, context switching rates of less than 5,000 per second per processor are not worth worrying about. If context switching rates exceed 15,000 per second per processor, there is a constraint.
    Significance: Context switching happens when a higher-priority thread preempts a lower-priority thread that is currently running, or when a high-priority thread blocks. High levels of context switching can occur when many threads share the same priority level. This often indicates that there are too many threads competing for the processors on the system. If we do not see much processor utilization and we see very low levels of context switching, it could indicate that threads are blocked.
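The per-processor thresholds above translate directly into a classification; an illustrative sketch (names hypothetical):

```python
def context_switch_status(switches_per_sec: float, processor_count: int) -> str:
    """Rule of thumb from the text: under 5,000/sec/processor is fine;
    over 15,000/sec/processor signals a constraint; in between, keep watching."""
    per_cpu = switches_per_sec / processor_count
    if per_cpu < 5_000:
        return "ok"
    if per_cpu > 15_000:
        return "constrained"
    return "watch"
```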
  • Memory
    Memory\Available MBytes
    Threshold: A consistent value of less than 20 to 25 percent of installed RAM is an indication of insufficient memory.
    Significance: This indicates the amount of physical memory available to processes running on the computer. Note that this counter displays the last observed value only. It is not an average.
    Memory\Page Reads/sec
    Threshold: Sustained values of more than five indicate a large number of page faults for read requests.
    Significance: This counter indicates that the working set of our process is too large for the physical memory and that it is paging to disk. It shows the number of read operations, without regard to the number of pages retrieved in each operation. Higher values indicate a memory bottleneck. If a low rate of page-read operations coincides with high values for PhysicalDisk\% Disk Time and PhysicalDisk\Avg. Disk Queue Length, there could be a disk bottleneck. If an increase in queue length is not accompanied by a decrease in the pages-read rate, a memory shortage exists.
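The Available MBytes threshold is a simple fraction of installed RAM; a sketch using the conservative end of the 20-25 percent range (function name is illustrative):

```python
def memory_low(available_mbytes: float, installed_ram_mbytes: float) -> bool:
    """Memory\\Available MBytes consistently below ~25% of installed RAM
    suggests insufficient physical memory."""
    return available_mbytes < 0.25 * installed_ram_mbytes
```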
  • Memory
    Memory\Pages/sec
    Threshold: Sustained values higher than five indicate a bottleneck.
    Significance: This counter indicates the rate at which pages are read from or written to disk to resolve hard page faults. Multiply the values of the PhysicalDisk\Avg. Disk sec/Transfer and Memory\Pages/sec counters. If the product of these counters exceeds 0.1, paging is taking more than 10 percent of disk access time, which indicates that we need more RAM.
    Memory\Pool Nonpaged Bytes
    Threshold: Watch the value of Memory\Pool Nonpaged Bytes for an increase of 10 percent or more from its value at system startup.
    Significance: If there is an increase of 10 percent or more from its value at startup, a serious leak is potentially developing.
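The multiply-the-counters rule is easy to get wrong by hand; an illustrative sketch of the arithmetic (names are my own):

```python
def paging_fraction(avg_disk_sec_per_transfer: float, pages_per_sec: float) -> float:
    """PhysicalDisk\\Avg. Disk sec/Transfer x Memory\\Pages/sec approximates
    the share of disk access time spent servicing hard page faults."""
    return avg_disk_sec_per_transfer * pages_per_sec

def needs_more_ram(avg_disk_sec_per_transfer: float, pages_per_sec: float) -> bool:
    """Above 0.1 (10% of disk time on paging), the guide recommends more RAM."""
    return paging_fraction(avg_disk_sec_per_transfer, pages_per_sec) > 0.1
```

For example, 12 ms per transfer at 20 pages/sec gives 0.24, well over the 0.1 cutoff.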
  • Memory
    Server\Pool Nonpaged Failures
    Threshold: Regular nonzero values indicate a bottleneck.
    Significance: This counter indicates the number of times allocations from the nonpaged pool have failed. It indicates that the computer's physical memory is too small. The nonpaged pool contains pages from a process's virtual address space that are not to be swapped out to the page file on disk, such as a process's kernel object table. The availability of the nonpaged pool determines how many processes, threads, and other such objects can be created. When allocations from the nonpaged pool fail, this can be due to a memory leak in a process, particularly if processor usage has not increased accordingly.
    Server\Pool Paged Failures
    Threshold: No specific value.
    Significance: This counter indicates the number of times allocations from the paged pool have failed. It indicates that the computer's physical memory or page file is too small.
  • Memory
    Server\Pool Nonpaged Peak
    Threshold: No specific value.
    Significance: This is the maximum number of bytes in the nonpaged pool that the server has had in use at any one point. It indicates how much physical memory the computer should have. Because the nonpaged pool must be resident, and because there has to be some memory left over for other operations, we might quadruple it to get the actual physical memory we should have for the system.
    Memory\Cache Bytes
    Threshold: No specific value.
    Significance: Monitors the size of the cache under different load conditions. This counter displays the size of the static file cache. By default, this cache uses approximately 50 percent of available memory, but it shrinks as available memory shrinks, which affects system performance.
  • Memory
    Memory\Cache Faults/sec
    Threshold: No specific value.
    Significance: This counter indicates how often the operating system looks for data in the file system cache but fails to find it. This value should be as low as possible. The cache is independent of data location but is heavily dependent on data density within the set of pages. A high rate of cache faults can indicate insufficient memory or could also denote poorly localized data.
    Cache\MDL Read Hits %
    Threshold: The higher this value, the better the performance of the file system cache. Values should preferably be as close to 100 percent as possible.
    Significance: This counter provides the percentage of Memory Descriptor List (MDL) read requests to the file system cache where the cache returns the object directly rather than requiring a read from the hard disk.
  • Disk I/O
    PhysicalDisk\Avg. Disk Queue Length
    Threshold: Should not be higher than the number of spindles plus two.
    Significance: This counter indicates the average number of both read and write requests that were queued for the selected disk during the sample interval.
    PhysicalDisk\Avg. Disk Read Queue Length
    Threshold: Should be less than two.
    Significance: This counter indicates the average number of read requests that were queued for the selected disk during the sample interval.
    PhysicalDisk\Avg. Disk Write Queue Length
    Threshold: Should be less than two.
    Significance: This counter indicates the average number of write requests that were queued for the selected disk during the sample interval.
  • Disk I/O
    PhysicalDisk\Avg. Disk sec/Transfer
    Threshold: Should not be more than 18 milliseconds.
    Significance: This counter indicates the time, in seconds, of the average disk transfer. High values may indicate a large amount of disk fragmentation, slow disks, or disk failures. Multiply the values of the PhysicalDisk\Avg. Disk sec/Transfer and Memory\Pages/sec counters. If the product of these counters exceeds 0.1, paging is taking more than 10 percent of disk access time, so we need more RAM.
  • Network I/O
    Network Interface\Bytes Total/sec
    Threshold: Sustained values of more than 80 percent of network bandwidth.
    Significance: This counter indicates the rate at which bytes are sent and received over each network adapter. It helps us know whether the traffic at our network adapter is saturated and whether we need to add another network adapter. How quickly we can identify a problem depends on the type of network we have as well as whether we share bandwidth with other applications.
    Server\Bytes Total/sec
    Threshold: The value should not be more than 50 percent of network capacity.
    Significance: This counter indicates the number of bytes sent and received over the network. Higher values indicate network bandwidth as the bottleneck. If the sum of Bytes Total/sec for all servers is roughly equal to the maximum transfer rate of our network, we may need to segment the network.
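One trap with the 80 percent rule is units: the counter reports bytes, but link bandwidth is quoted in bits. An illustrative sketch of the conversion and check (names are my own):

```python
def nic_utilization(bytes_total_per_sec: float, bandwidth_bits_per_sec: float) -> float:
    """Fraction of NIC bandwidth in use; counter is bytes, links are rated in bits."""
    return (bytes_total_per_sec * 8) / bandwidth_bits_per_sec

def nic_saturated(bytes_total_per_sec: float, bandwidth_bits_per_sec: float) -> bool:
    """Sustained utilization over 80% of bandwidth is the alert threshold above."""
    return nic_utilization(bytes_total_per_sec, bandwidth_bits_per_sec) > 0.80
```

On a 100 Mbit link, 11 MB/sec of traffic is 88 percent utilization and trips the check.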
  • .NET Framework Technologies
    The .NET Framework provides a series of performance counters, which we can monitor using System Monitor and other monitoring tools. To measure other aspects of .NET application performance, we need to add instrumentation to our applications.
  • CLR and Managed Code
    What to measure. When measuring processes running under the CLR, some of the key points to look for are as follows:
    Memory. Measure managed and unmanaged memory consumption.
    Working set. Measure the overall size of our application's working set.
    Exceptions. Measure the effect of exceptions on performance.
    Contention. Measure the effect of contention on performance.
    Threading. Measure the efficiency of threading operations.
    Code access security. Measure the effect of code access security checks on performance.
  • CLR Memory
    Process\Private Bytes
    Threshold: The threshold depends on our application and on settings in the Machine.config file. The default for ASP.NET is 60 percent of available physical RAM or 800 MB, whichever is less. Note that .NET Framework 1.1 supports 1,800 MB as the upper bound instead of 800 MB if we add a /3GB switch to our Boot.ini file. This is because .NET Framework 1.1 can support a 3 GB virtual address space instead of the 2 GB of earlier versions.
    Significance: This counter indicates the current number of bytes allocated to this process that cannot be shared with other processes. It is used for identifying memory leaks.
  • CLR Memory
    .NET CLR Memory\% Time in GC
    Threshold: This counter should average about 5 percent for most applications when the CPU is 70 percent busy, with occasional peaks. As the CPU load increases, so does the percentage of time spent performing garbage collection. Keep this in mind when we measure the CPU.
    Significance: This counter indicates the percentage of elapsed time spent performing a garbage collection since the last garbage collection cycle. The most common cause of a high value is making too many allocations, which may be the case if we are allocating on a per-request basis in ASP.NET applications. We need to study the allocation profile of our application if this counter shows a high value.
    Note also that Private Bytes minus # Bytes in all Heaps gives the number of bytes allocated for unmanaged objects, while # Bytes in all Heaps reflects the memory used by managed resources.
  • CLR Memory
    .NET CLR Memory\# Bytes in all Heaps
    Threshold: No specific value.
    Significance: This counter is the sum of four other counters: Gen 0 Heap Size, Gen 1 Heap Size, Gen 2 Heap Size, and Large Object Heap Size. The value of this counter will always be less than the value of Process\Private Bytes, which also includes the native memory allocated for the process by the operating system.
    .NET CLR Memory\# Gen 0 Collections
    Threshold: No specific value.
    Significance: This counter indicates the number of times generation 0 objects have been garbage-collected since the start of the application. Objects that survive the collection are promoted to generation 1. We can observe the memory allocation pattern of our application by plotting the values of this counter over time.
  • CLR Memory
    .NET CLR Memory\# Gen 1 Collections
    Threshold: One-tenth the value of # Gen 0 Collections.
    Significance: This counter indicates the number of times generation 1 objects have been garbage-collected since the start of the application.
    .NET CLR Memory\# Gen 2 Collections
    Threshold: One-tenth the value of # Gen 1 Collections.
    Significance: This counter indicates the number of times generation 2 objects have been garbage-collected since the start of the application. The generation 2 heap is the costliest to maintain for an application. Whenever there is a generation 2 collection, it suspends all the application threads. We should profile the allocation pattern of our application and minimize the objects in the generation 2 heap.
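The one-tenth thresholds describe a healthy generational profile: each older generation collected about ten times less often than the one before it. An illustrative sketch of the check (names and the zero-collection edge cases are my own assumptions):

```python
def gc_ratios_healthy(gen0: int, gen1: int, gen2: int) -> bool:
    """Rule of thumb above: Gen 1 collections should be at most ~1/10 of
    Gen 0 collections, and Gen 2 at most ~1/10 of Gen 1."""
    if gen0 == 0:
        return True  # nothing has been collected yet
    return gen1 <= gen0 / 10 and gen2 <= max(gen1, 1) / 10
```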
  • CLR Memory
    .NET CLR Memory\# of Pinned Objects
    Threshold: No specific value.
    Significance: When .NET-based applications use unmanaged code, objects are pinned in memory; that is, they cannot move around, because the pointers to them would become invalid. This counter measures those objects. We can also pin objects explicitly in managed code, such as reusable buffers used for I/O calls. Too many pinned objects affect the performance of the garbage collector because they restrict its ability to move objects and organize memory efficiently.
    .NET CLR Memory\Large Object Heap Size
    Threshold: No specific value.
    Significance: The large object heap size shows the amount of memory consumed by objects whose size is greater than 85 KB. If the difference between # Bytes in all Heaps and Large Object Heap Size is small, most of the memory is being used by large objects. The large object heap cannot be compacted after collection and may become heavily fragmented over a period of time. We should investigate our memory allocation profile if we see large numbers here.
  • Working Set
    Process\Working Set
    Threshold: No specific value.
    Significance: The working set is the set of memory pages currently loaded in RAM. If the system has sufficient memory, it can maintain enough space in the working set so that it does not need to perform disk operations. However, if there is insufficient memory, the system tries to reduce the working set by taking memory away from processes, which results in an increase in page faults. When the rate of page faults rises, the system tries to increase the working set of the process. If we observe wide fluctuations in the working set, it might indicate a memory shortage. Higher values in the working set may also be due to multiple assemblies in our application. We can improve the working set by using assemblies shared in the global assembly cache.
  • Exceptions
    .NET CLR Exceptions\# of Exceps Thrown / sec
    Threshold: This counter value should be less than 5 percent of Requests/sec for the ASP.NET application. If we see more than 1 request in 20 throw an exception, we should pay closer attention to it.
    Significance: This counter indicates the total number of exceptions generated per second in managed code. Exceptions are very costly and can severely degrade application performance. We should investigate our code for application logic that uses exceptions for normal processing behavior. Note that Response.Redirect, Server.Transfer, and Response.End all cause a ThreadAbortException in ASP.NET applications.
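The 5-percent rule relates two counters rather than using an absolute number. An illustrative sketch (the function name and the zero-traffic edge case are my own):

```python
def exception_rate_acceptable(exceps_per_sec: float, requests_per_sec: float) -> bool:
    """Exceptions should stay under 5% of request throughput,
    i.e. no more than 1 request in 20 throwing."""
    if requests_per_sec <= 0:
        return exceps_per_sec == 0
    return exceps_per_sec / requests_per_sec < 0.05
```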
  • Contention
    .NET CLR LocksAndThreads\Contention Rate / sec
    Threshold: No specific value.
    Significance: This counter displays the rate at which the runtime attempts to acquire a managed lock without success. Sustained nonzero values may be a cause for concern. We may want to run dedicated tests for a particular piece of code to identify the contention rate for that code path.
    .NET CLR LocksAndThreads\Current Queue Length
    Threshold: No specific value.
    Significance: This counter displays the last recorded number of threads currently waiting to acquire a managed lock in an application. We may want to run dedicated tests for a particular piece of code to identify the average queue length for that code path. This helps us identify inefficient synchronization mechanisms.
  • Threading
    .NET CLR LocksAndThreads\# of current physical Threads
    Threshold: No specific value.
    Significance: This counter indicates the number of native operating system threads currently owned by the CLR that act as underlying threads for .NET thread objects. It gives us an idea of how many threads are actually spawned by our application. This counter can be monitored along with System\Context Switches/sec. A high rate of context switches almost certainly means that we have spawned a higher-than-optimal number of threads for our process. If we want to analyze which threads are causing the context switches, we can analyze the Thread\Context Switches/sec counter for all threads in a process and then make a dump of the process stack, identifying the actual threads by comparing the thread IDs from the test data with the information available from the dump.
  • Threading
    Thread\% Processor Time
    Threshold: No specific value.
    Significance: This counter gives us an idea of which thread is actually taking the most processor time. If we see idle CPU and low throughput, threads could be waiting or deadlocked. We can take a stack dump of the process and compare the thread IDs from test data with the dump information to identify threads that are waiting or blocked.
    Thread\Context Switches/sec
    Threshold: No specific value.
    Significance: This counter needs to be investigated when the System\Context Switches/sec counter shows a high value. It helps in identifying which threads are actually causing the high context switching rates.
    Thread\Thread State
    Threshold: No specific value; the counter reports the state of a particular thread at a given instant.
    Significance: We need to monitor this counter when we suspect that a particular thread is consuming most of the processor resources.
• Code Access Security

.NET CLR Security\Total RunTime Checks
Threshold: No specific value.
Significance: This counter displays the total number of run-time code access security checks performed since the start of the application. Used together with the Stack Walk Depth counter, it is indicative of the performance penalty that our code incurs for security checks.

.NET CLR Security\Stack Walk Depth
Threshold: No specific value.
Significance: This counter displays the depth of the stack during the last run-time code access security check. This counter is not an average; it displays only the last observed value.
• ASP.net

To effectively determine ASP.NET performance, we need to measure the following:

Throughput. This includes the number of requests executed per second and throughput-related bottlenecks, such as the number of requests waiting to be executed and the number of requests being rejected.

Cost of throughput. This includes the cost of processor, memory, disk I/O, and network utilization.

Queues. This includes the queue levels for the worker process and for each virtual directory hosting a .NET Web application.

Response time and latency. The response time is measured at the client as the amount of time between the initial request and the response (first byte or last byte). Latency generally includes server execution time and the time taken for the request and response to travel across the network.
• ASP.net

Cache utilization. This includes the ratio of cache hits to cache misses. It needs to be seen in a larger context, because virtual memory utilization may affect cache performance.

Errors and exceptions. This includes the number of errors and exceptions generated.

Sessions. We need to be able to determine the optimum value for the session timeout and the cost of storing session data locally versus remotely. We also need to determine the session size for a single user.

Loading. This includes the number of assemblies and application domains loaded, and the amount of committed virtual memory consumed by the application.

View state size. This includes the amount of view state per page.

Page size. This includes the size of individual pages.

Page cost. This includes the processing effort required to serve pages.

Worker process restarts. This includes the number of times the ASP.NET worker process recycles.
• ASP.net counters mapped
• ASP.net Throughput

To measure ASP.NET throughput, use the following counters:

ASP.NET Applications\Requests/Sec
Threshold: Depends on our business requirements.
Significance: The throughput of the ASP.NET application on the server. It is one of the primary indicators that help us measure the cost of deploying our system at the necessary capacity.

Web Service\ISAPI Extension Requests/sec
Threshold: Depends on our business requirements.
Significance: The rate of ISAPI extension requests that are simultaneously being processed by the Web service. This counter is not affected by ASP.NET worker process restarts, although the ASP.NET Applications\Requests/Sec counter is.

To measure the throughput cost in terms of the system resources that our requests consume, we need to measure CPU utilization, memory consumption, and the amount of disk and network I/O. This also helps in estimating the cost of the hardware needed to achieve a given level of performance.
• ASP.net – Throughput – Requests

ASP.NET\Requests Current
Threshold: No specific value.
Significance: The number of requests currently handled by the ASP.NET ISAPI. This includes requests that are queued, executing, or waiting to be written to the client. ASP.NET begins to reject requests when this counter exceeds the requestQueueLimit defined in the processModel configuration section. If ASP.NET\Requests Current is greater than zero and no responses have been received from the ASP.NET worker process for a duration greater than the limit specified by <processModel responseDeadlockInterval=/>, the process is terminated and a new process is started.

ASP.NET Applications\Requests Executing
Threshold: No specific value.
Significance: The number of requests currently executing. This counter is incremented when the HttpRuntime begins to process a request and is decremented after the HttpRuntime finishes the request.
• ASP.net – Throughput – Requests

ASP.NET Applications\Requests Timed Out
Threshold: No specific value.
Significance: The number of requests that have timed out. We need to investigate the cause of request timeouts. One possible reason is contention between threads. A good instrumentation strategy helps capture the problem in the log. To investigate further, we can debug and analyze the process using a run-time debugger such as WinDbg.
• ASP.net – Throughput – Queues

ASP.NET\Requests Queued
Threshold: No specific value.
Significance: The number of requests currently queued. Queued requests indicate a shortage of I/O threads in IIS 5.0; in IIS 6.0, they indicate a shortage of worker threads. Requests are rejected when ASP.NET\Requests Current exceeds the requestQueueLimit (default = 5000) attribute of the <processModel> element defined in the Machine.config file. This can happen when the server is under very heavy load. The queue between IIS and ASP.NET is a named pipe through which the request is sent from one process to the other. In IIS 5.0, this queue is between the IIS process (Inetinfo.exe) and the ASP.NET worker process (Aspnet_wp.exe). In addition to the worker process queue, there are separate queues for each virtual directory (application domain). When running in IIS 6.0, there is a queue where requests are posted to the managed thread pool from native code, and also a queue for each virtual directory. We should examine both ASP.NET Applications\Requests In Application Queue and ASP.NET\Requests Queued when investigating performance issues.
• ASP.net – Throughput – Queues

ASP.NET Applications\Requests In Application Queue
Threshold: No specific value.
Significance: A separate queue is maintained for each virtual directory. The limit for this queue is defined by the appRequestQueueLimit attribute of the <httpRuntime> element in Machine.config. When the queue limit is reached, the request is rejected with a “Server too busy” error.

ASP.NET\Requests Rejected
Threshold: No specific value.
Significance: The number of requests rejected because the request queue was full. The ASP.NET worker process starts rejecting requests when ASP.NET\Requests Current exceeds the requestQueueLimit defined in the processModel configuration section. The default value for requestQueueLimit is 5000.
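The two queue limits discussed above are set in Machine.config. A sketch showing where they live; the values are illustrative (requestQueueLimit="5000" is the default named on this slide, the other two should be verified against your framework version):

```xml
<configuration>
  <system.web>
    <!-- Worker-process queue: requests beyond this limit are rejected. -->
    <processModel requestQueueLimit="5000"
                  responseDeadlockInterval="00:03:00" />
    <!-- Per-application queue: overflow returns "Server too busy". -->
    <httpRuntime appRequestQueueLimit="100" />
  </system.web>
</configuration>
```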
• ASP.net – Throughput – Queues

ASP.NET\Requests Wait Time
Threshold: 1,000 milliseconds. The average request should spend close to zero milliseconds waiting in the queue.
Significance: The number of milliseconds the most recent request spent waiting in the named pipe queue between IIS and the ASP.NET worker process. This does not include any time spent in the queue for the virtual directory hosting the Web application.
• ASP.net - Latency

ASP.NET\Request Execution Time
Threshold: The value is based on our business requirements.
Significance: The number of milliseconds taken to execute the last request. The execution time begins when the HttpContext for the request is created and stops before the response is sent to IIS. Assuming that user code does not call HttpResponse.Flush, this implies that execution time stops before any bytes are sent to IIS or to the client.
• ASP.net - Cache Utilization

ASP.NET Applications\Cache Total Entries
Threshold: No specific value.
Significance: The current number of entries in the cache, including both user and internal entries. ASP.NET uses the cache to store objects that are expensive to create, including configuration objects and preserved assembly entries.

ASP.NET Applications\Cache Total Hit Ratio
Threshold: With sufficient RAM, we should normally record a high (greater than 80 percent) cache hit ratio.
Significance: This counter shows the ratio of the total number of internal and user hits on the cache.

ASP.NET Applications\Cache Total Turnover Rate
Threshold: No specific value.
Significance: The number of additions to and removals from the cache per second (both user and internal). A high turnover rate indicates that items are being quickly added and removed, which can impact performance.
• ASP.net - Cache Utilization

ASP.NET Applications\Cache API Hit Ratio
Threshold: No specific value.
Significance: The ratio of cache hits to misses for objects called from user code. A low ratio can indicate inefficient use of caching techniques.

ASP.NET Applications\Cache API Turnover Rate
Threshold: No specific value.
Significance: The number of additions to and removals from the cache per second made through the user cache API. A high turnover rate indicates that items are being quickly added and removed, which can impact performance.
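The Cache API counters track calls made through the user cache. A minimal sketch of the kind of code they measure — each Cache.Get that finds the item counts as an API hit, while each miss followed by an Insert contributes to the turnover rate (the key and the BuildExpensiveReport helper are hypothetical):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public class ReportLookup
{
    public static object GetReport(string key)
    {
        Cache cache = HttpRuntime.Cache;
        object report = cache.Get(key);          // hit or miss recorded here
        if (report == null)
        {
            report = BuildExpensiveReport();     // hypothetical expensive work
            cache.Insert(key, report, null,
                         DateTime.Now.AddMinutes(5),   // absolute expiration
                         Cache.NoSlidingExpiration);
        }
        return report;
    }

    static object BuildExpensiveReport() { return new object(); }
}
```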
• ASP.net - Cache Utilization

ASP.NET Applications\Output Cache Hit Ratio
Threshold: No specific value.
Significance: The total hit-to-miss ratio of output cache requests.

ASP.NET Applications\Output Cache Turnover Rate
Threshold: No specific value.
Significance: The number of additions to and removals from the output cache per second. A high turnover rate indicates that items are being quickly added and removed, which can impact performance.

ASP.NET Applications\Output Cache Entries
Threshold: No specific value.
Significance: The number of entries in the output cache. We need to measure the ASP.NET Applications\Output Cache Hit Ratio counter to verify the hit rate for the cache entries. If the hit rate is low, we need to identify the cache entries and reconsider our caching mechanism.
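The output cache counters are only driven by pages that opt in with the @ OutputCache directive. A minimal example placed at the top of an .aspx page (the duration of 60 seconds is arbitrary):

```xml
<%@ OutputCache Duration="60" VaryByParam="None" %>
```

VaryByParam="None" caches a single copy of the page; varying by a query string or form parameter multiplies the number of output cache entries, which shows up in the Output Cache Entries counter.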
• ASP.net Errors and Exceptions

ASP.NET Applications\Errors Total/sec
Threshold: No specific value.
Significance: The total number of exceptions generated during preprocessing, parsing, compilation, and run-time processing of a request. A high value can severely affect application performance and may render all other results invalid.

ASP.NET Applications\Errors During Execution
Threshold: No specific value.
Significance: The total number of errors that have occurred during the processing of requests.

ASP.NET Applications\Errors Unhandled During Execution/sec
Threshold: No specific value.
Significance: The total number of unhandled exceptions per second at run time.
• ASP.net Sessions

To measure session performance, we need to be able to determine the optimum value for the session timeout and the cost of storing session state in process versus out of process.

Determining an Optimum Value for Session Timeout
Setting an optimum value for the session timeout is important because sessions continue to consume server resources even if the user is no longer browsing the site. Failing to optimize session timeouts can cause increased memory consumption.
• ASP.net Sessions

To determine the optimum session timeout:
 Identify the average session length for an average user, based on the workload model for our application.
 Set the session duration in IIS for our Web site to a value slightly greater than the average session length. The optimum setting is a balance between conserving resources and maintaining the user's session.
 Identify the metrics we need to monitor during load testing.
 Run load tests with the simultaneous number of users set to the value identified in our workload model. The duration of the test should be longer than the value configured for the session timeout in IIS.
 Repeat the load test with the same number of simultaneous users each time, increasing the value for the session timeout by a small amount for each iteration.
 For each iteration, observe the time interval during which users who have completed their work still hold active sessions on the server while a new set of users also holds active sessions. During this overlap we should observe increased memory consumption.
• ASP.net Sessions

 As soon as the sessions for the old set of users time out, memory consumption is reduced. The height of the spikes in memory utilization depends on the amount of data stored in the session store on a per-user basis.
 Continue to increase the timeout until the spikes tend to smooth out and stabilize.
 Keep the session state as small as possible while still avoiding the memory utilization spikes.
 Set a value for the session timeout that is well above this limit.
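The timeout arrived at through the procedure above is then applied in Web.config. A sketch; 20 minutes is the ASP.NET default, shown here only for illustration:

```xml
<configuration>
  <system.web>
    <!-- timeout is in minutes; mode="InProc" stores session state in the
         worker process (cheapest, but lost when the process recycles). -->
    <sessionState mode="InProc" timeout="20" />
  </system.web>
</configuration>
```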
• ASP.net Loading

.NET CLR Loading\Current appdomains
Threshold: The value should be the same as the number of Web applications plus one. The additional one is the default application domain loaded by the ASP.NET worker process.
Significance: The current number of application domains loaded in the process.

.NET CLR Loading\Current Assemblies
Threshold: No specific value.
Significance: The current number of assemblies loaded in the process. ASP.NET Web pages (.aspx files) and user controls (.ascx files) are “batch compiled” by default, which typically results in one to three assemblies, depending on the number of dependencies. Excessive memory consumption may be caused by an unusually high number of loaded assemblies. We should try to minimize the number of Web pages and user controls without compromising the efficiency of the workflow. Assemblies cannot be unloaded from an application domain. To prevent excessive memory consumption, the application domain is unloaded when the number of recompilations (.aspx, .ascx, .asax) exceeds the limit specified by <compilation numRecompilesBeforeAppRestart=/>.
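The recompilation limit referred to above is an attribute of the <compilation> element; a sketch with the commonly documented default of 15 (verify the default against your framework version):

```xml
<configuration>
  <system.web>
    <!-- After this many .aspx/.ascx/.asax recompilations, the application
         domain is unloaded to reclaim the memory held by old assemblies. -->
    <compilation numRecompilesBeforeAppRestart="15" />
  </system.web>
</configuration>
```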
• ASP.net Loading

.NET CLR Loading\Bytes in Loader Heap
Threshold: No specific value.
Significance: This counter displays the current size (in bytes) of committed memory across all application domains. Committed memory is physical memory for which space has been reserved in the paging file on disk.
• ViewState

View state can constitute a significant portion of our Web page output, particularly if we are using large controls such as the DataGrid or a tree control. It is important to measure the size of view state because it can have a big impact on the size of the overall response and on the response time. We can measure view state by enabling page-level tracing. To enable tracing for a page, add the Trace attribute and set its value to true, as shown in the following code:

<%@ Page Language="C#" Trace="True" %>

View state is used primarily by server controls to retain state on pages that post data back to themselves. The information is passed to the client and read back in a specific hidden variable called _VIEWSTATE. ASP.NET makes it easy to store any serializable type in view state. However, this capability can easily be misused and performance reduced. View state is an unnecessary overhead for pages that do not need it. As the view state grows larger, it affects performance in the following ways:
 Increased CPU cycles are required to serialize and to deserialize the view state.
 Pages take longer to download because they are larger.
 Very large view state can impact the efficiency of garbage collection.
• ViewState

Transmitting a huge amount of view state can significantly affect application performance. The change in performance becomes more marked when our Web clients use slow, dial-up connections. Consider testing under different bandwidth conditions when working with view state. Optimize the way our application uses view state by following these recommendations:
 Disable view state if we do not need it.
 Minimize the number of objects we store in view state.
 Determine the size of our view state.

View state is turned on in ASP.NET by default. Disable view state if we do not need it; for example, we might not need view state because the page is output-only or because we explicitly reload data on each request.
• ViewState

We do not need view state when the following conditions are true:

Our page does not post back. If the page does not post information back to itself, is only used for output, and does not rely on response processing, we do not need view state.

We do not handle server control events. If our server controls do not handle events, and if they have no dynamic or data-bound property values (or those values are set in code on every request), we do not need view state.

We repopulate controls with every page refresh. If we ignore old data and repopulate the server control each time the page is refreshed, we do not need view state.
• ViewState

There are several ways to disable view state at various levels:

To disable view state for all applications on a Web server, configure the <pages> element in the Machine.config file as follows:

<pages enableViewState="false" />

This approach allows us to selectively enable view state just for the pages that need it by using the EnableViewState attribute of the @ Page directive. To disable view state for a single page, use the @ Page directive as follows:

<%@ Page EnableViewState="false" %>

To disable view state for a single control on a page, set the EnableViewState property of the control to false, either programmatically or declaratively, as shown in the following code fragments:

// programmatically
yourControl.EnableViewState = false;

<%-- declaratively --%>
<asp:datagrid EnableViewState="false" runat="server" />
• ViewState

Minimize the Number of Objects We Store in View State
As we increase the number of objects we put into view state, the size of the view state dictionary grows, and the processing time required to serialize and deserialize the objects increases. Use the following guidelines when putting objects into view state:

View state is optimized for serializing basic types such as strings, integers, and Booleans, and objects such as arrays, ArrayLists, and Hashtables when they contain these basic types.

When we want to store a type that is not listed above, ASP.NET internally tries to use the associated type converter. If it cannot find one, it uses the relatively expensive binary serializer.

The size of the view state is directly proportional to the size of the objects stored in it. Avoid storing large objects.
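As an illustration of these guidelines, a sketch of code-behind that keeps only cheap basic types in view state; the keys and the largeShoppingCart object are hypothetical:

```csharp
using System;
using System.Web.UI;

public class CatalogPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        ViewState["PageIndex"] = 3;                     // cheap: integer
        ViewState["LastVisit"] = DateTime.Now.ToString(); // cheap: string

        // Avoid this: a large custom object without a type converter is
        // run through the expensive binary serializer on every postback.
        // ViewState["Cart"] = largeShoppingCart;       // hypothetical object
    }
}
```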
• ViewState

Determine the Size of Our View State
By enabling tracing for the page, we can monitor the view state size for each control. The view state size for each control appears in the leftmost column of the control tree section of the trace output. Use this information as a guide to determine whether there are controls for which we can reduce the amount of view state, or disable it entirely.
• Questions?

 Q&A
• Thank You