Measuring web performance. Velocity EU 2011

How to measure web performance: covering the different measurement techniques, the importance of context, and real-user measurement versus synthetic monitoring.

Notes

  • Good Afternoon. My name is Stephen Thair and I am a freelance webops manager and performance specialist based in London, UK. I am also the organiser for the London Web Performance Meetup community. My topic today is “measuring web performance” and before we drill down into the specifics of measuring web performance I have one piece of bad news… <click>
  • And not just wrong because of esoteric stuff like the observer effect, or even the accuracy of our measuring tools… it’s wrong because of one major reason… <click>
  • And that reason is the human brain… The human brain does not have a metronomic clock ticking away to a regular beat like the clocks we use to "measure" web performance… <click>
  • The key here is "subjective" and "variable" – there is a lot that the "numbers" won't and CAN'T tell you about how the user perceives the performance of your website… Subjective – because YOUR experience is not MY experience! And when we say that perception is variable, what do we mean? Well, we've all heard about time "slowing down" under the effects of adrenaline in emergencies… so perhaps if we are visiting a website that particularly gets the adrenaline flowing (ahem) our perception of time might "slow down", and what is in reality a "fast" website might appear slow. Conversely, there is a psychological state called "flow" where we can "lose track of time" because we are absorbed in a task, perhaps playing an online game, and suddenly we find an hour or two has gone past without us being aware of it. But our perception of performance is variable in other ways, too <click> – different for different sites, for different users (age, gender, emotional state ("Is the train about to leave? Am I running late?"), culture, level of experience), at different stages in the user journey (e.g. navigating/browsing vs search vs checkout), and on different devices – mobile vs wireless vs wired.
  • Actual = what your "numbers" say it is… Expected = what your user wanted it to be… for your website… at this moment in time… which is to say, expectations are not fixed and immutable! Perceived = how long the user "thought it took", with their subjective and variable perception of time… Remembered = what they told their friends down the pub about your crap and slow (or awesome and fast) website! Stoyan's talk at Velocity, "The Psychology of Performance", is highly recommended: http://velocityconf.com/velocity2010/public/schedule/detail/13019 – "Satisfaction = perception minus expectation" (David Maister). So… <click>
  • So we have talked about the <click> "subjective" nature of web performance, but our challenge as developers, testers and WebOps is to devise ways to make the subjective… <click> objective… and measure it! So how can we do that? Well, science has been struggling with this problem of "subjective" versus "objective" for centuries and has developed different techniques to apply to each… <click>
  • To look at subjective data we use <click> qualitative techniques, which are commonly used in the social sciences… <click> case studies, focus groups, interviews <click> etc. If some of these sound familiar, that's because many of them are the kinds of tools that people from the user experience world use in their UX labs… And it's worth making the point that you can start "measuring performance" very early in the software development lifecycle, even with paper-based or simple static HTML click models, by seeing how long people take to choose, decide and navigate… or even simply how many "clicks" they take to achieve a given task (fewer clicks = "faster performance"). And that's all I am going to say on the qualitative side of things, because as web professionals we generally prefer to look at "objective" measures… <click>
  • And objective normally means quantitative – it means we can use NUMBERS… and bring our statistical tools to bear. There are 7 techniques for "HOW" to measure website performance, but before we dive into the "HOW" we need to talk about some other things. <click>
  • And those other things are what Rudyard Kipling called his "honest serving-men" – the what, why, when and so on. So firstly, in terms of "what" you want to measure: do you care about <click> "objects"…
  • About "objects"… or pages… or the entire user journey? Even if we are talking about pages we have multiple metrics to choose from… <click>
  • At a page and object level you have multiple metrics you can choose from… but I generally find that there are 4 I really care about…
  • So what are they? <click> TTFB – how fast is my back-end responding? RenderStart – when does the user start to get visual feedback from the page (remember, it's about perception… but it has got to be meaningful, e.g. not just a CSS background change). DOMContentLoaded – how soon can my developers start hooking up their fancy Javascript stuff to the DOM? Page (onLoad) – when have all the elements on the page been loaded (and I can start all my deferred resource loading via Ajax!). A sketch of reading three of these from the browser follows below. One "new" metric you might have heard about is "above the fold time"… <click>
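    For illustration, here is a minimal sketch of how three of these four raw metrics can be read in browsers that support the W3C Navigation Timing API (covered later in this talk). Render Start is not exposed by that API, and the console.log call is just a stand-in for sending the values to a real collector.

        // Minimal sketch, assuming Navigation Timing support (IE9+, Chrome 6+, Firefox 7+).
        window.addEventListener('load', function () {
          // Wait one tick so that loadEventEnd has been populated.
          setTimeout(function () {
            var t = performance.timing;
            var ttfb = t.responseStart - t.navigationStart;                  // Time to First Byte
            var domReady = t.domContentLoadedEventStart - t.navigationStart; // DOMContentLoaded
            var plt = t.loadEventEnd - t.navigationStart;                    // Page (onLoad) Load Time
            console.log('TTFB:', ttfb, 'DOMContentLoaded:', domReady, 'PLT:', plt);
          }, 0);
        });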
  • AFT is basically designed to be a "render complete" timing, or at least a "render of the static stuff in the viewable area of the page". AFT is a nice idea… but its implementation is troublesome at present (it takes around 4 minutes to calculate). For most sites AFT = PLT… but according to Pat from WebPagetest he has seen it range from ½ PLT to 2x PLT… Personally, I really like using the screen capture videos for this and looking at them in comparison to previous versions, competitors etc…
  • This is the sort of video comparison you can create with webpagetest.org… but when you rely on human judgement you are back into the subjective again… So what other metrics might we be interested in? <click>
  • We start with the raw metrics… and then move up into counts (which we normally show as histograms), into the statistical measures, and finally into artificial summary metrics like Apdex (which I will explain in a second). All of this data can be sliced and diced in your data warehouse… but keep in mind that you can easily run into gigabytes and terabytes of data for a high-volume website in a month… so plan carefully! OK, so back to Apdex – what is Apdex? <click> The hierarchy on the slide: Apdex (a calculated "summary" metric); statistical metrics (mean, mode, median, standard deviation); counts/histograms; raw metrics (connection time, render start time, page load time, "above the fold" time etc.).
  • Apdex is simple – split the page load time for every visit to your website into 3 buckets: Satisfied, Tolerated and Frustrated. You get a score from 0 to 1 that represents an "overall" measure of your site's performance, across all URLs, across all the visits during the time period (see the sketch below). So why do we need a "single number" metric like Apdex? <click>
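    A minimal sketch of the Apdex(t) calculation as defined on slide 16 (Tolerated threshold = 4t), using hypothetical page load times in milliseconds:

        // Apdex(t) = (Satisfied + Tolerated/2) / Total Samples, where t is the "Satisfied" threshold in ms.
        function apdex(pageLoadTimesMs, tMs) {
          var satisfied = 0, tolerated = 0;
          pageLoadTimesMs.forEach(function (plt) {
            if (plt <= tMs) {
              satisfied++;                 // Satisfied: at or under t
            } else if (plt <= 4 * tMs) {
              tolerated++;                 // Tolerated: between t and 4t
            }                              // anything slower is Frustrated and adds nothing to the score
          });
          return (satisfied + tolerated / 2) / pageLoadTimesMs.length;
        }

        // Hypothetical sample with t = 4000 ms (so 0-4s Satisfied, 4-16s Tolerated, over 16s Frustrated)
        console.log(apdex([1200, 3500, 5000, 9000, 20000], 4000));   // 0.6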
  • Because web performance is multi-dimensional… multiple metrics, for multiple URLs, from different (measurement) locations, using different tools, across the (software) lifecycle, over time. And it gives you a great number to stick on the plasma screen in the Ops area, and a nice number to put in your weekly report to your boss… But beyond these metrics on how long a page took to load there is something else we need to record… <click> and that is "context"…
  • "Context" is the metadata about the "numbers" we have collected. It is the key to EXPLAINING why the performance number recorded is "good" or "bad"…
  • <click> “context” is the metadata about the measurement you made… what browser, from what geographical location, over what type of network etc etc. Without context your performance data is meaningless…
  • Context helps us answer this question!!! We can see that the mode of the page load time is about 0.9sec but what about this cluster out at around 2.7 and 2.9 secs? Maybe they are from a group of customers in one location, or using an older browser etc… But back to our 6 honest serving men and let’s look at who and when… <click>
  • We want to measure performance across the lifecycle (SDLC) and different teams will need to use different techniques to get the different metrics they need… We’ll talk more about this as we go through each technique…
  • Where you choose to measure your web performance depends on your objectives… what exactly are you trying to measure, at what stage in the lifecycle, synthetic or real-user? The further we move away from the origin server, the more network latency begins to dominate… and the more contextual factors come into play… and hence <click> the noise grows relative to the signal… But why have I drawn a distinction between "real users" and "synthetic agents" like monitoring or performance test agents? <click>
  • Well, because there is quite a debate raging out there at present on the future of web performance measurement… Synthetic = the active monitoring from Site Confidence / Keynote / Gomez / Pingdom that we all know and love… Real-User = measuring the performance of the real visitors to your website using tools like Atomic Labs Pion, Triometric, Coradiant, Tealeaf etc.
  • A lot of people have strong opinions about whether we should be measuring “real-user” performance, or whether we should be synthetically making requests/transaction to test our website, regardless of whether those “synthetic requests” come from real-browsers or browser emulating agents. My view is that people who say either/or are missing the SCIENCE behind two different techniques… and we can look to the scientific method to help us conceptualise the difference between the two.
  • And science talks about two different quantitative techniques for gaining knowledge about the world, or in this case, web performance… and that is the “Observational Study” versus “Experiment” <click>
  • Both seek to detect a relationship – "what is making this page load slowly?" In an experiment you create the difference… keeping everything else the same… controlling the experimental factors (as much as possible). For example, what happens when I measure with a different browser… but keep everything else the same?
  • Observational studies = real-user monitoring. We can only measure what OCCURS NATURALLY in the sample population. If no one visits that URL for a while, how will we know that it's broken or slow? So what do I mean by "confounding variables"? Well, what I mean is <click> context!
  • In "real-user" performance measurement the USERS define the context… all of the variables that might affect the number that you measure. So… how can I get some control back and reduce the number of confounding variables? Run an experiment! <click>
  • <click> Experiment = synthetic testing, where we request the page we measure… <click> and hence we get to design our "experiment". <click> We choose what to measure, from what location, over a fixed bandwidth, using a known agent/browser, with a known frequency (which means a stable sample size, which matters statistically when comparing means etc. across different URLs) <click> as we seek to control the confounding variables (as much as we can), so we get <click> less noise relative to the signal and hopefully get better at understanding the "root cause"… <click> I said "hopefully"… <click> So which one is "better": RUM or synthetic, observational study or experiment? <click>
  • So which one is better? <click> It depends on what you are trying to achieve… what's your role… what's your goal. Personally, if I am going to be woken up at 3am with an alert saying there is a problem with my website, I'd like to have a higher degree of confidence in the alert than "some ISP is having problems and giving their users a slow connection"… I want to be alerted about problems I can DO SOMETHING ABOUT, and as an Ops manager I will "design my experiments" accordingly… But what about a "hybrid model" <click> where we move from RUM to synthetic?... <click>
  • So how would Pat's idea work? <click> Use RUM to detect changes out there "in the real world"… <click> then pass the URL to test via an API <click> to try and narrow down the signal/noise (note we might be calling an entire SET of regression tests here). The goal is <click> to move from observation <click> by controlling the variables <click> to a well defined experiment (a sketch of that hand-off follows below). But don't forget you can also go the other way… making sure that your "experiment" even vaguely reflects "reality" by cross-checking your synthetic results with what's out there in the real world… which is exactly what any scientist does when they create an experimental model… they make sure that it correlates with reality! OK, so we've covered off the who, what, when, where etc., let's get back to the "HOW"… <click>
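    A minimal sketch of that RUM-to-synthetic hand-off. Everything specific here is hypothetical: the 1.5x-baseline trigger and the "synthetic.example.com/runtest" endpoint are invented for illustration (real synthetic services such as WebPagetest expose their own APIs with different parameters).

        // Hypothetical hybrid flow: when a RUM sample is well above its baseline,
        // trigger a controlled synthetic test of that URL via an HTTP API call.
        var http = require('http');

        function onRumSample(url, pageLoadMs, baselineMs) {
          if (pageLoadMs > baselineMs * 1.5) {     // "change detected in the real world"
            var testUrl = 'http://synthetic.example.com/runtest?url=' + encodeURIComponent(url);
            http.get(testUrl, function (res) {
              console.log('Synthetic test requested for', url, '- status', res.statusCode);
            });
          }
        }

        // e.g. a RUM beacon reports a 6s load for a page whose baseline is 2s
        onRumSample('http://www.example.com/checkout', 6000, 2000);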
  • Which is not to say that we can’t measure subjective things… qualitatively…
  • There are basically 7 techniques used to measure web performance. Each one has its pros and cons… ease of use, what it can measure, cost etc. <click> x 7 So which technique is best? It depends on what you want to measure, where, etc… comparing them all together we get <click>
  • So let’s look at each one in turn and how it works (in a very simplified way!)…
  • For example, the following JavaScript shows a naive attempt to measure the time it takes to fully load a page:

        <html>
        <head>
        <script type="text/javascript">
        var start = new Date().getTime();
        function onLoad() {
          var now = new Date().getTime();
          var latency = now - start;
          alert("page loading time: " + latency);
        }
        </script>
        </head>
        <body onload="onLoad()">
        <!-- Main page body goes here. -->
        </body>
        </html>
  • You can do custom page instrumentation by wrapping critical sections of the page in start/stop timers (see the sketch below). But it relies on Javascript and cookies… which might be disabled or not available (especially on mobile). And it is only accurate from the 2nd page of the journey onwards.
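    A minimal sketch of that kind of custom start/stop instrumentation. The marker name, the "/beacon.gif" collector URL and the renderProductGrid() function are all hypothetical, and this is not the API of any particular library such as Episodes or Boomerang:

        // Wrap a critical section of the page in start/stop timers and beacon the result.
        var timers = {};
        function startTimer(name) { timers[name] = new Date().getTime(); }
        function stopTimer(name) {
          var elapsed = new Date().getTime() - timers[name];
          new Image().src = '/beacon.gif?marker=' + encodeURIComponent(name) + '&t=' + elapsed;
          return elapsed;
        }

        startTimer('product-grid');
        renderProductGrid();          // hypothetical critical section of the page
        stopTimer('product-grid');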
  • THE BROWSER is doing most of the timing for us… Brilliant!!! No more onbeforeunload event! It all occurs "after" the page has loaded… no more cookies… lots more metrics <click>
  • Many more metrics in the Navigation-Timing spec… at a PAGE level, at least…
  • Biggest pro is that the TIMING is mostly done by the browser… so it's less intrusive and more accurate, with a much better set of metrics… Con – browser support… A bit more about "SiteSpeed" <click>
  • Free Navigation-Timing-based real-user performance monitoring… It also uses timings from the Google Toolbar… which leads us nicely into the next technique <click>
  • Filtered to remove all measurements > 60 and samples > 2. Scale on the left is 3.5 second intervals
  • Here is your histogram, turned on its side. 0-1 (23%), 1-3 (45%), 3-7 (22.
  • Excellent metrics including object level metrics… so you can get that nice waterfall diagram we know and love!
  • Basically you are sticking a recording mechanism, a proxy debugger like Charles or Fiddler, between you and the origin web server… and that proxy will record all your requests and the timings associated with them…
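    To make the idea concrete, here is a minimal sketch of a recording proxy, written in Node.js purely for illustration (real proxy debuggers such as Charles or Fiddler do far more, and as the next note explains, a proxy's connection handling differs from a real browser's):

        // Sit between the browser and the origin, forward each request, and log how long it took.
        var http = require('http');

        http.createServer(function (clientReq, clientRes) {
          var started = Date.now();
          var upstream = http.request({
            host: clientReq.headers.host,       // forward to whichever origin the browser asked for
            path: clientReq.url,
            method: clientReq.method,
            headers: clientReq.headers
          }, function (originRes) {
            console.log(clientReq.method, clientReq.url, originRes.statusCode,
                        (Date.now() - started) + 'ms');
            clientRes.writeHead(originRes.statusCode, originRes.headers);
            originRes.pipe(clientRes);          // relay the response back to the browser
          });
          clientReq.pipe(upstream);             // relay any request body to the origin
        }).listen(8888);                        // point the browser's HTTP proxy setting at port 8888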
  • YOU ARE NO LONGER IN THE CLIENT… so no RenderStart, no onLoad event, and hence the concept of a "page" gets "fuzzy"… particularly with AJAXy pages… How does the proxy affect your traffic? Probably the biggest potential issue is how the proxy server connects to the origin server. There is no guarantee that it's going to use the same number of connections, or re-use them in the same way, that your browser will… From 2005, EricLaw (http://insidehttp.blogspot.com/2005/06/using-fiddler-for-performance.html): "In Fiddler 0.9 and below, Fiddler never reuses sockets for anything, which may dramatically affect the performance of your site. Fiddler 0.9.9 (the latest beta) offers server-socket reuse, so the connection from Fiddler to the server is reused. Note that the socket between your browser and Fiddler is not reused, but since this is a socket->socket connection on the same machine, there's not a significant performance hit for abandoning this socket. So, Fiddler isn't suitable for timing. But this doesn't impact your ability to check compression, conditional requests, Expires headers, bytes-transferred, etc. Other than the actual timings, the browser does not behave much differently with Fiddler than without (and chances are good that your visitors are using some type of proxy). The browser will often send Proxy-Connection: Keep-Alive; this isn't sent without a proxy. IE will send Pragma: no-cache if the user hits F5 or clicks the refresh button; without a proxy, you have to hit CTRL+F5 to send the No-Cache value. The fact that a client-socket is abandoned can lead to extra authentication roundtrips when using the NTLM connection-based authentication protocol."
  • Write a mod or filter… that can see every request… start/stop timers… send them to a collector…
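    As a rough analogue of an Apache mod or ISAPI filter, here is a minimal sketch in Node.js (used only for illustration; a real mod would hook the web server's own request pipeline and ship timings to a collector rather than logging them):

        // Start a timer when the request arrives, stop it when the response has been sent.
        var http = require('http');

        http.createServer(function (req, res) {
          var started = Date.now();
          res.on('finish', function () {
            // In a real deployment this would be beaconed to a collector, not logged.
            console.log(req.method, req.url, res.statusCode, (Date.now() - started) + 'ms');
          });
          res.end('Hello');                     // the application's normal response handling goes here
        }).listen(8080);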
  • Web Server mods/ISAPI filters are how most of the APM solutions work. AppDynamics, Dynatrace, New Relic are all in this space, and some of them have implemented the javascript timing as well… Great for measuring the performance of your web tier and backend… not that useful in measuring page level performance unless you go the hybrid approach.
  • SPAN port then sniff the traffic…. Re-assemble the packets then the requests then the “page” then record the data…
  • The network sniffing approach is really the only true "passive" technology out there, i.e. one that doesn't have any "observer effect" on the measurement. Examples: Pion, Coradiant, Tealeaf, Triometric. It is not cloud friendly, though, since EC2 doesn't allow promiscuous-mode appliances…
  • So in summary… <click x 6> And before we go, a quick plug for my user group… <click>
  • A great WPO case study next week from “The Times” newspaper… and then in December we have a special Xmas event hosted by Betfair!

Transcript

  • 1. MEASURING WEB PERFORMANCE Steve Thair Seriti Consulting @TheOpsMgr
  • 2. Every measurement of web performance you will ever make will be wrong
  • 3. [image-only slide]
  • 4. "The human perception of duration is both subjective and variable" http://en.wikipedia.org/wiki/Time_perception
  • 5. "PERCEPTION IS VARIABLE…" Go read Stoyan's talk! http://velocityconf.com/velocity2010/public/schedule/detail/13019
  • 6. Web Performance: Subjective / Objective
  • 7. Subjective – "Qualitative techniques": Case Studies, Focus Groups, Interviews, Video Analysis, Surveys
  • 8. Objective – "Quantitative techniques": Javascript, Navigation timing, Browser Extensions, Custom Browsers, Proxy timings, Web Server mods, Network sniffing
  • 9. "I keep six honest serving-men (They taught me all I knew); Their names are What and Why and When And How and Where and Who." – Rudyard Kipling, "The Elephant's Child"
  • 10. WHAT LEVEL DO YOU MEASURE? Journey Page Object
  • 11. CHOOSE YOUR METRIC! https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html
  • 12. 4 Key "Raw" Metrics: Time to First Byte (TTFB) • Render Start Time • DOMContentLoaded • Page (onLoad) Load Time (PLT)
  • 13. What about "Above the Fold" time? How long to "render of the static stuff in the viewable area of the page"? Limitations of AFT: only applicable to a lab setting; does not reflect user-perceived latency based on functionality. http://assets.en.oreilly.com/1/event/62/Above%20the%20Fold%20Time_%20Measuring%20Web%20Page%20Performance%20Visually%20Presentation.pdf
  • 14. [screenshot: webpagetest.org filmstrip/video comparison]
  • 15. WHAT OTHER METRICS? Apdex Statistical Metrics Counts/Histograms Raw Metrics
  • 16. Apdex(t) = (Satisfied Count + Tolerated Count / 2) / Total Samples. A number between 0 and 1 that represents "user satisfaction". For technical reasons the "Tolerated" threshold is set to four times the "Satisfied" threshold, so if your "Satisfied" threshold (t) was 4 seconds then: 0 to 4 seconds = Satisfied, 4 to 16 seconds = Tolerated, over 16 seconds = Frustrated. http://apdex.org/
  • 17. PERFORMANCE IS MULTI-DIMENSIONAL: Multiple Metrics, For Multiple URLs, From Different Locations, Using Different Tools, Across the Lifecycle, Over Time
  • 18. The importance of CONTEXT
  • 19. Context: Location, Bandwidth, Latency, Wired/WiFi/3G, Operating System, Cached objects, Addons & Extensions, Antivirus, Browser, Device, Resolution, Time of Day
  • 20. [screenshot: page load time histogram]
  • 21. Who? When? Across the SDLC (Develop, Build, QA (CI), Prod) – User Experience Design (UX), Developers, Testers, WebOps, Ops, "The Boss"
  • 22. WHERE – DEPENDS ON THE HOW & WHY… [diagram: measurement points from the user's browser to the origin – web browser, proxy server, Internet, firewall/load-balancer, (reverse) proxy server, web server, with a SPAN port or network tap feeding a network "sniffer"; synthetic agent versus "real user" on WiFi/3G smartphone; noise increases relative to signal the further you move from the server, and user/browser metrics give way to server-based metrics]
  • 23. The Synthetic Versus Real-User Debate
  • 24. "…it's a question of when, not if, active monitoring of websites for availability and performance will be obsolete." – Pat Meenan. "Because you're skipping the "last mile" between the server and the user's browser, you're not seeing how your site actually performs in the real world" – Josh Bixby. "You can have my active monitoring when you pry it from my cold, dead hands…" – Steve Thair. http://blog.patrickmeenan.com/2011/05/demise-of-active-website-monitoring.html http://www.webperformancetoday.com/2011/07/05/web-performance-measurement-island-is-sinking/ http://www.seriticonsulting.com/blog/2011/5/21/you-can-have-my-active-monitoring-when-you-pry-it-from-my-co.html
  • 25. Observational Study Versus Experiment
  • 26. Experiment versus Observational Study. Both typically have the goal of detecting a relationship between the explanatory and response variables. Experiment: create differences in the explanatory variable and examine any resulting changes in the response variable (cause-and-effect conclusion). Observational Study: observe differences in the explanatory variable and notice any related differences in the response variable (association between variables). http://www.math.utah.edu/~joseph/Chapter_09.pdf
  • 27. Observational Study = Real-User: "watching" what happens in a given population sample; we can only observe… and try to infer what is actually happening; many "confounding variables"; lots of noise (low signal-to-noise ratio); correlation.
  • 28. Context (same diagram as slide 19): Location, Bandwidth, Latency, Wired/WiFi/3G, Operating System, Cached objects, Addons & Extensions, Antivirus, Browser, Device, Resolution, Time of Day
  • 29. Observational Study = Real-User: "watching" what happens in a given population sample; we can only observe… and try to infer what is actually happening; many "confounding variables"; lots of noise; correlation. Experiment = Synthetic: we "design" our experiment; we choose when, where, what, how etc.; we control the variables (as much as possible); less noise; causation* (*OK, real "root cause" analysis will probably take a lot more investigation, I admit… but you get closer!)
  • 30. So which one is better? Neither. Complementary not Competing. "…Ultimately I'd love to see a hybrid model where synthetic tests are triggered based on something detected in the data (slowdown, drop in volume, etc) to validate the issue or collect more data." - Pat Meenan
  • 31. Real-User Monitoring detects a change in a page's performance → API call to a synthetic controlled test, compare to baseline → use RUM as a "reality check". From Observation… by controlling the variables… to Experiment…
  • 32. Back to the "How"… Objective "Quantitative techniques": Javascript, Navigation timing, Browser Extensions, Custom Browsers, Proxy timings, Web Server mods, Network sniffing
  • 33. 7 WAYS OF MEASURING WEBPERF:
    1. JavaScript timing e.g. Souder's Episodes or Yahoo! Boomerang*
    2. Navigation-Timing e.g. GA SiteSpeed
    3. Browser Extension e.g. HTTPWatch
    4. Custom browser e.g. 3pmobile.com or (headless) PhantomJS.org
    5. Proxy timing e.g. Charles proxy
    6. Web Server Mod e.g. APM solutions
    7. Network sniffing e.g. Atomic Labs Pion
  • 34. COMPARING METHODS…

    Metric               | JavaScript | Nav-Timing API | Browser Ext | Custom Browser | Proxy Debugger | Web Server Mod | Network Sniffing
    Example Product      | WebTuna    | SiteSpeed      | HTTPWatch   | 3PMobile       | Charles Proxy  | APM Modules    | Pion
    "Blocked/Wait"       | No         | No             | Yes         | Yes            | Yes            | No             | No
    DNS                  | No         | Yes            | Yes         | Yes            | Yes            | No             | No
    Connect              | No         | Yes            | Yes         | Yes            | Yes            | No             | Yes
    Time to First Byte   | Partially  | Yes            | Yes         | Yes            | Yes            | Yes            | Yes
    "Render Start"       | No         | No             | Yes         | Yes            | No             | No             | No
    DOMReady             | Partially  | Yes            | Yes         | Yes            | No             | No             | No
    "Page/HTTP Complete" | Partially  | Yes            | Yes         | Yes            | Yes            | No             | Partially
    OnLoad Event         | Yes        | Yes            | Yes         | Yes            | No             | No             | No
    JS Execution Time    | Partially  | No             | Yes         | Yes            | No             | No             | No
    Page-Level           | Yes        | Yes            | Yes         | Yes            | Partially      | Partially      | Partially
    Object Level         | No         | No             | Yes         | Yes            | Yes            | Yes            | Yes
    Good for RUM?        | Yes        | Yes            | Partially   | No             | No             | Partially      | Yes
    Good for Mobile?     | Partially  | Partially      | Partially   | Partially      | Partially      | Partially      | Partially
    Affects Measurement  | Yes        | No             | Yes         | Yes            | Yes            | Yes            | No
  • 35. JAVASCRIPT TIMING – HOW IT WORKS: on the unLoad event, var start = new Date().getTime() → stick it in a cookie → load the next page → on the onLoad event, var end = new Date().getTime() → PLT = end - start → send a beacon: beacon.gif?time=plt (a sketch follows below) https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html
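    A minimal sketch of that cookie-and-beacon flow. The cookie name ("perf_start") and the "/beacon.gif" collector URL are illustrative, not a specific product's API:

        // On the page being left: remember when navigation to the next page starts.
        window.addEventListener('beforeunload', function () {
          document.cookie = 'perf_start=' + new Date().getTime() + '; path=/';
        });

        // On the next page: read the cookie back once the page has loaded and beacon the PLT.
        window.addEventListener('load', function () {
          var match = document.cookie.match(/(?:^|; )perf_start=(\d+)/);
          if (!match) { return; }                       // first page in the journey: no start time yet
          var plt = new Date().getTime() - parseInt(match[1], 10);
          new Image().src = '/beacon.gif?time=' + plt;  // fire-and-forget beacon to the collector
        });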
  • 36. PROS & CONS OF JAVASCRIPT TIMING (example product: WebTuna; metric support: see the table on slide 34). Pros: simple; Episodes/Boomerang provide custom timing for developer instrumentation. Cons: relies on Javascript and cookies; only accurate for the 2nd page in a journey; can only really get a "page load metric" and a partial TTFB metric; "observer effect" (and Javascript can break!).
  • 37. NAVIGATION-TIMING – HOW IT WORKS: on the onLoad event, var end = new Date().getTime(); var plt = end - performance.timing.navigationStart; → send a beacon: beacon.gif?time=plt
  • 38. NAVIGATION TIMING METRICS https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html
  • 39. PROS & CONS OF NAVIGATION-TIMING (example product: SiteSpeed; metric support: see the table on slide 34). Pros: even simpler; lots more metrics; more accurate. Cons: needs browser support for the API (IE9+ / Chrome 6+ / Firefox 7+); relies on Javascript (for querying the API & sending the beacon); "observer effect"; page-level only.
  • 40. A BIT MORE ABOUT GA SITESPEED… Just add one line for basic, free, real-user monitoring:

        _gaq.push(['_setAccount', 'UA-12345-1']);
        _gaq.push(['_trackPageview']);
        _gaq.push(['_trackPageLoadTime']);

    Sampling appears to vary (a lot!) – 10% of page visits by design, but reported anywhere from 2% to 100%. Falls back to the Google Toolbar if available (but NOT javascript timing). Will probably make you think perf is better than it really is…
  • 41. [screenshot slide]
  • 42. [screenshot slide]
  • 43. [screenshot slide]
  • 44. BROWSER EXTENSION – HOW IT WORKS: write a browser extension… that subscribes to a whole lot of API event listeners… get your users to install it… send the timing back to a collector, e.g. showslow.com. https://developer.mozilla.org/en/XPCOM_Interface_Reference
  • 45. PROS & CONS OF BROWSER EXTENSIONS (example product: HTTPWatch; metric support: see the table on slide 34). Pros: very complete metrics; object and page level; no javascript (in the page at least)!!!; great for continuous-integration perf testing. Cons: getting users to install it…; not natively cross-browser; some browsers don't support extensions (especially mobile browsers!); "observer effect".
  • 46. CUSTOM BROWSER – HOW IT WORKS: take some open source browser code (like WebKit or the Android Browser)… add custom instrumentation for performance measurement… get users to install it… send the timing back to a collector. E.g. 3pmobile.com
  • 47. PROS & CONS OF CUSTOM BROWSER (example product: 3PMobile; metric support: see the table on slide 34). Pros: great when you can't use extensions / javascript / cookies, i.e. for mobile performance, e.g. 3Pmobile.com; great for automation, e.g. http://www.PhantomJS.org/; good metrics (depending on OS API availability). Cons: requires installation; maintaining fidelity to "real browser" measurements; "observer effect" (due to instrumentation code).
  • 48. PROXY DEBUGGER – HOW IT WORKS: change the browser to use a debugging proxy (e.g. Charles or Fiddler)… the proxy records each request… export the data to a log.
  • 49. PROS & CONS OF PROXY DEBUGGER (example products: Fiddler, Charles Proxy; metric support: see the table on slide 34). Pros: one simple change to browser config; no Javascript / cookies; can offer bandwidth throttling. Cons: proxies significantly impact HTTP traffic (http://insidehttp.blogspot.com/2005/06/using-fiddler-for-performance.html); no access to browser events; the concept of a "page" can be problematic…
  • 50. 6 Keep-Alive connections per SERVER versus 8 Keep-Alive connections TOTAL per PROXY (Firefox 7.0.1)
  • 51. WEB SERVER MOD – HOW IT WORKS: write a web server mod or ISAPI filter… start a timer on request… stop the timer on response… send the timing back to a collector. E.g. AppDynamics. http://www.apachetutor.org/dev/request
  • 52. PROS & CONS OF WEB SERVER MOD (example products: APM modules; metric support: see the table on slide 34). Pros: great for Application Performance Management (APM); can be used in a "hybrid mode" with Javascript timing; measures your "back-end" performance; can be easy to deploy*. Cons: limited metrics – ignores network RTT and only sees origin requests; "observer effect" (~5% server perf hit with APM?); the concept of a "page" can be problematic…; can be a pain to deploy*.
  • 53. NETWORK SNIFFING – HOW IT WORKS: create a SPAN port or network tap… promiscuous-mode packet sniffing… assemble TCP/IP packets into HTTP requests… assemble HTTP requests into "pages"… record the timing data in a database.
  • 54. PROS & CONS OF NETWORK SNIFFING (example product: Pion; metric support: see the table on slide 34). Pros: no "observer effect" (totally "passive"); very common "appliance-based" RUM solution; can be used in a "hybrid mode" with Javascript timing; can be easy to deploy*. Cons: limited metrics and only sees origin requests; not "cloud friendly" at present; the concept of a "page" can be problematic…; can be a pain to deploy*.
  • 55. SUMMARY: Performance is subjective (but we try to make it objective); performance is multi-dimensional; context is critical; "observational studies AND experiments"; real-user monitoring AND synthetic monitoring; 7 different measurement techniques, each with pros & cons.
  • 56. @LDNWEBPERF USER GROUP! Join our London Web Performance Meetup: http://www.meetup.com/London-Web-Performance-Group/ – Next Wednesday 16th Nov, 7pm, London (Bank): WPO case study from www.thetimes.co.uk! Follow us on Twitter @LDNWebPerf – #LDNWebPerf & #WebPerf
  • 57. QUESTIONS? http://mobro.co/TheOpsMgr