
Practical Tips for Ops: End User Monitoring

Watch the replay here: https://info.dynatrace.com/apm_wc_devops_journey_series_end_user_monitoring_na_registration.html

Companies that have adopted DevOps best practices have 2,555x faster lead times* when delivering new features to their end users. However, speed of delivery is not the only success metric! Success must also be measured by how end users react to the pace of innovation.

Getting insights into how your end users react to the changes you deploy allows you to share valuable feedback with the Dev and Biz teams. Those teams can then see clearly how their changes impacted end users and where fine-tuning can improve infrastructure performance.

In this webcast, Andreas Grabner, Chief DevOps Activist, and Brian Chandler, Sales Engineer, share practical tips that IT groups can start implementing quickly. You'll learn:
• The best approach for monitoring end users across mobile, desktop, tablet, and service endpoints
• How to evaluate network bandwidth requirements by app, service, and feature to better understand and optimize resource consumption (see the sketch after this list)
• How to optimize your delivery chain in depth by understanding who is using your app, where, and on what device
• How to get a clear view of which features are used the most and the least, and what observable user behavior is useful for tuning performance
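The bandwidth question from the second bullet boils down to grouping request volume by app and feature. Here is a minimal sketch, assuming access-log-style records with app, feature, and byte counts; the record shape is an illustrative assumption, not Dynatrace's data model:

```typescript
// Sketch: estimate bandwidth per app/feature from access-log-style records.
// The RequestRecord shape is an illustrative assumption.
interface RequestRecord {
  app: string;      // e.g. "ClientCenter"
  feature: string;  // e.g. "API/Holdings"
  bytes: number;    // response payload size in bytes
}

// Returns average bytes/second per "app/feature" key over the window.
function bandwidthByFeature(records: RequestRecord[], windowSeconds: number): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    const key = `${r.app}/${r.feature}`;
    totals.set(key, (totals.get(key) ?? 0) + r.bytes);
  }
  for (const [key, total] of totals) totals.set(key, total / windowSeconds);
  return totals;
}

// Example: one minute of traffic.
console.log(bandwidthByFeature(
  [
    { app: "ClientCenter", feature: "API/Holdings", bytes: 18_000 },
    { app: "ClientCenter", feature: "API/Holdings", bytes: 22_000 },
    { app: "ClientCenter", feature: "API/ClientDetails", bytes: 9_500 },
  ],
  60,
));
```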

If you are stuck in analysis paralysis, get insights that you can apply today!

*In addition, companies using DevOps are two times more likely to exceed profitability, market share, and productivity goals (from the 2016 State of DevOps Report by Puppet Labs)


Practical Tips for Ops: End User Monitoring

  1. Practical Tips for Ops: End User Monitoring (The DevOps Journey Series, Part 3). Andreas Grabner, Chief DevOps Activist @ Dynatrace (Twitter: @grabnerandi); Brian Chandler, Sales Engineer @ Dynatrace (Twitter: @Channer531)
  2. State of DevOps Report adoption metrics: 200x more frequent deployments and 2,555x faster lead times than their peers. Dynatrace DevOps adoption metrics: 12x more feature releases, 170 deployments/day, 93% of production bugs found before impacting end users.
  3. Interesting Ops learnings from adopters: new tech stacks and architectures, 3rd party / CDN, more apps / multi-version, "Twitter-driven" load models.
  4. DevOps Requirements and Engagement Options for Ops. Requirements: feedback through high-quality app & user data; bridge the gap between server side and end user. Engagement options: Ops as a Service ("self-service for application teams" plus promoting YOUR monitoring through shift-left); Shift-Left ((No)Ops as "part of application delivery").
  5. Closing the Ops to Dev Feedback Loop: One Step at a Time! Ops needs answers to these questions:
  • Step 1, Basic App Monitoring: Are our applications up and running? What load patterns do we have per application? What is the resource consumption per application?
  • Step 2, App Dependencies: What are the dependencies between apps, services, DB, and infrastructure? How to monitor "non-custom app" tiers? Where are the dependency bottlenecks? Where is the weakest link? When and where do applications break? Do we have bad dependencies through code or config? How does the system really behave in production? What can we learn for future architectures?
  • Step 3, End User Monitoring (today's topic; closes the gap to App/Biz/Dev): How to monitor mobile vs. desktop vs. tablet vs. service endpoints? How much network bandwidth is required per app, service, and feature? Where to start optimizing bandwidth: CDNs, caching, compression? Who is using our apps? Geo? Device? Which features are used? What's the behavior? Where to start optimizing: app flow, page size, conversion rates, bounce rates? Where are the performance / resource hotspots?
  • Step 4, "Soft-Launch" Support: How to deploy and monitor multiple versions of the same app / service? What and how to baseline? Do we have a better or worse version of an app/service/feature? What are the usage patterns for A/B or green/blue? What differs between versions and features?
  • Step 5, Virtualization Monitoring (ready for "cloud native"): How to automatically monitor virtual and container instances? What to monitor when deploying into public or private clouds? Does the architecture work in these dynamic environments? Does scale up/down work as expected?
  • Step 6, Provide "Monitoring as a Service" for Cloud-Native Application Teams: How to alert on real problems and not architectural patterns? How to consolidate monitoring between cloud-native and enterprise?
  6. How End User Monitoring Works!
  7. Outside-In Perspective: see your app from your users' perspective. User Experience = Availability (Synthetic) + Performance, Errors & User Behavior (Real Users)
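The slide's equation can be read as: synthetic checks contribute the availability term, while real-user data contributes performance, errors, and behavior. A minimal sketch of combining the two into one report; the record shapes and field names are illustrative assumptions:

```typescript
// Sketch: merge synthetic availability with real-user performance/errors.
// Record shapes are illustrative assumptions; assumes non-empty inputs.
interface SyntheticCheck { ok: boolean }                          // scripted probe result
interface RealUserAction { durationMs: number; hadError: boolean } // one real-user action

function userExperienceReport(checks: SyntheticCheck[], actions: RealUserAction[]) {
  const availability = checks.filter(c => c.ok).length / checks.length;
  const errorRate = actions.filter(a => a.hadError).length / actions.length;
  const avgDurationMs = actions.reduce((sum, a) => sum + a.durationMs, 0) / actions.length;
  return { availability, errorRate, avgDurationMs };
}
```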
  8. Every User, Every Click, Every App/Version
  9. Visibility into Visitors and Sessions! #1: Unique Visitors. #2: All Sessions. #3: Across all Apps. #4: Full Details for each Session.
  10. Seeing Every Single Step Along the Way! #2: Details for each User Action. #4: User Experience.
  11. Optimize Performance to Impact Behavior. #1: Performance Data. #2: Behavior Data.
  12. Key User Experience Metrics Feedback. #1: Who are they? #2: Bandwidth! #3: Response Time Breakdown. #4: Conversions: Total & Rate. #5: Client-Side Errors! #6: CPU / Memory. Also: Key User Action(s).
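Callout #4 (conversions: total & rate) is simple arithmetic over sessions: the total is the number of sessions containing the key user action, and the rate divides that by all sessions. A sketch, assuming a session is just a list of action names; the key-action name is hypothetical:

```typescript
// Sketch: "Conversions: Total & Rate" from session data. A session converts
// when it contains the key user action; the action name is hypothetical.
interface Session { actions: string[] }

function conversionStats(sessions: Session[], keyAction = "checkout/complete") {
  const total = sessions.filter(s => s.actions.includes(keyAction)).length;
  return { total, rate: total / sessions.length };
}

// Example: 2 of 4 sessions convert -> { total: 2, rate: 0.5 }
console.log(conversionStats([
  { actions: ["home", "checkout/complete"] },
  { actions: ["home"] },
  { actions: ["search", "checkout/complete"] },
  { actions: ["home", "search"] },
]));
```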
  13. Questions to answer! Efficiency: How to optimize end user experience, infrastructure & costs? Optimize top vs. remove flop features! Analyze and optimize page load, network traffic, and costs! Impact: Do we impact our end users' experience? Is the issue in content delivery, network, or server side? Can users use our services? Crashes? Bad or slow responses? Mobile as first-class citizen: usage feedback based on mobile versions & user experience; analyze crashes and optimize server-side resource usage.
  14. Impact: Do we impact our end users' experience? Is the issue in content delivery, network, or server side? Can users use our services? Crashes? Bad or slow responses?
  15. 50,000-Foot View on User Experience: a bird's-eye view of holistic user experience. Green = Satisfied, Yellow = Tolerating, Red = Frustrated. The line chart represents volume: market open, roughly 60 user actions per second.
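The green/yellow/red buckets follow the familiar Apdex-style scheme: an action is Satisfied below one threshold, Frustrated above another, and Tolerating in between. A minimal sketch; the 3 s / 12 s thresholds are illustrative assumptions, not Dynatrace's actual defaults:

```typescript
// Sketch: Apdex-style bucketing behind the green/yellow/red view.
// The 3 s / 12 s thresholds are illustrative assumptions.
type Experience = "Satisfied" | "Tolerating" | "Frustrated";

function classify(durationMs: number, satisfiedMs = 3_000, frustratedMs = 12_000): Experience {
  if (durationMs <= satisfiedMs) return "Satisfied";
  if (durationMs < frustratedMs) return "Tolerating";
  return "Frustrated";
}

// Bucket a window of user actions (the slide mentions ~60 actions/second).
function bucketize(durationsMs: number[]): Record<Experience, number> {
  const counts: Record<Experience, number> = { Satisfied: 0, Tolerating: 0, Frustrated: 0 };
  for (const d of durationsMs) counts[classify(d)]++;
  return counts;
}

console.log(bucketize([800, 2_500, 4_000, 15_000])); // { Satisfied: 2, Tolerating: 1, Frustrated: 1 }
```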
  16. 50,000-Foot View on User Experience (continued)
  17. 10,000-Foot View on User Experience: focus on high-value users and branches; visual recognition of a problem; a popular dashboard template for execs.
  18. Hyperlyzer: Close-Up View
  19. Ground-Level View: understand the user click path, analyze browser performance problems, and recognize performance patterns within branches.
  20. Automated Key User Experience Findings. #1: Key WPO Findings. #2: Actionable for Devs.
  21. Automated Comparison. #1: Compare with previous Timeframe / Release. #2: Actionable Diff-View for Devs.
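A comparison like slide 21's can be reduced to diffing a metric snapshot per timeframe or release and flagging anything that moved past a tolerance. A sketch, where the metric names and the 10% tolerance are assumptions:

```typescript
// Sketch: diff a metric snapshot between releases and flag regressions
// beyond a tolerance. Metric names and the 10% tolerance are assumptions.
type Snapshot = Record<string, number>; // metric name -> value (e.g. median ms)

function diffReleases(previous: Snapshot, current: Snapshot, tolerance = 0.10): string[] {
  const regressions: string[] = [];
  for (const [metric, before] of Object.entries(previous)) {
    const after = current[metric];
    if (after !== undefined && after > before * (1 + tolerance)) {
      const pct = (((after - before) / before) * 100).toFixed(1);
      regressions.push(`${metric}: ${before} -> ${after} (+${pct}%)`);
    }
  }
  return regressions;
}

// Example: page load regressed 25%, past the 10% tolerance.
console.log(diffReleases({ pageLoadMs: 1200 }, { pageLoadMs: 1500 }));
// -> [ "pageLoadMs: 1200 -> 1500 (+25.0%)" ]
```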
  22. User experience: Green = Satisfied, Yellow = Tolerating, Red = Frustrated. API performance: Green = Fast, Yellow = Warning, Red = Slow, Purple = Error ("purple creeping death"). The incident shown: problem with mainframe (HPNS), major outage on a proprietary web server, notification of the problem at 5:30am.
  23. Automated JavaScript Error Analysis
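On the browser side, JavaScript error analysis starts with capturing unhandled errors and beaconing them to a collector. A minimal sketch of that capture step; the /rum/errors endpoint and payload shape are assumptions, and a real RUM agent injects this automatically with far more context:

```typescript
// Sketch: capture unhandled browser errors and beacon them to a collector.
// "/rum/errors" and the payload shape are assumptions, not a real agent API.
window.addEventListener("error", (event: ErrorEvent) => {
  const payload = JSON.stringify({
    message: event.message,
    source: event.filename,
    line: event.lineno,
    column: event.colno,
    stack: event.error?.stack, // may be undefined for cross-origin scripts
    page: location.href,
  });
  // sendBeacon survives page unloads better than fetch/XHR.
  navigator.sendBeacon("/rum/errors", payload);
});
```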
  24. Efficiency: How to optimize end user experience, infrastructure & costs? Optimize top vs. remove flop features! Analyze and optimize page load, network traffic, and costs!
  25. Daily Traffic Pattern: bucketizing usage
  26. Daily Traffic Pattern: bucketizing usage. Client Center sees a peak of about 3,800 requests/min against its API.
  27. ...and 60 unique calls/functions make up the Client Center API.
  28. ...and ~20% of that traffic is ClientCenter/API/Holdings.
  29. ...and ~20% of that traffic is ClientCenter/API/ClientDetails.
  30. ...and ~20% of that traffic is ClientCenter/API/RecentSearch.
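The bucketizing on slides 26-30 is a share-of-traffic calculation: count requests per endpoint, divide by the total, and sort descending so the ~20% endpoints surface first. A sketch, assuming plain request records with a path field:

```typescript
// Sketch: each endpoint's share of total traffic, sorted descending.
// The request record shape is an illustrative assumption.
function trafficShares(requests: { path: string }[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of requests) counts.set(r.path, (counts.get(r.path) ?? 0) + 1);
  const shares = [...counts.entries()]
    .map(([path, n]) => [path, n / requests.length] as [string, number])
    .sort((a, b) => b[1] - a[1]);
  return new Map(shares);
}

// Example: Holdings dominates, mirroring the ~20% buckets on the slide.
console.log(trafficShares([
  { path: "ClientCenter/API/Holdings" },
  { path: "ClientCenter/API/Holdings" },
  { path: "ClientCenter/API/ClientDetails" },
  { path: "ClientCenter/API/RecentSearch" },
  { path: "ClientCenter/API/Quotes" },
]));
```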
  31. Auto-Detect Top/Flop User Actions
  32. Auto-Detect Top/Flop User Actions (continued)
  33. Auto-Detect Top/Flop User Actions. #3: Backend Analysis.
  34. Automated Resource (DB) Usage Analysis
  35. Feature Resource Analytics
  36. Automated Resource Impact Analysis
  37. Automated CPU Consumption for User Actions
  38. Mobile as First-Class Citizen! Usage feedback based on mobile versions & user experience; analyze crashes and optimize server-side resource usage.
  39. Automated Mobile Version Usage Monitoring
  40. Automated Mobile Crash Analytics
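Crash analytics like slide 40's typically dedupes reports by a stack signature, so one recurring crash shows up as a single bucket with a count rather than thousands of rows. A sketch; the report shape and the exception-plus-top-frame signature are assumptions:

```typescript
// Sketch: group crash reports by exception type + top stack frame so one
// recurring crash becomes a single bucket. The report shape is an assumption.
interface CrashReport { exception: string; stack: string[]; appVersion: string }

function groupCrashes(reports: CrashReport[]): Map<string, CrashReport[]> {
  const groups = new Map<string, CrashReport[]>();
  for (const r of reports) {
    const signature = `${r.exception}@${r.stack[0] ?? "unknown"}`;
    const bucket = groups.get(signature) ?? [];
    bucket.push(r);
    groups.set(signature, bucket);
  }
  return groups;
}
```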
  41. Questions to answer! (recap of slide 13: Efficiency, Impact, and Mobile as first-class citizen)
  42. How Can You Scale in the New DevOps World? New tech stacks and architectures, 3rd party / CDN, more apps / multi-version, "Twitter-driven" load models.
  43. Monitoring redefined: every user, every app, everywhere. AI-powered, full stack, automated. Full lifecycle: development, test, and production.
  44. Complete monitoring coverage for all applications: digital experience analytics, application performance, and cloud / container / infrastructure, fed by agents, wire data, synthetics, log data, and real user monitoring.
  45. Auto-Discover Apps, Monitor, Baseline, and Alert
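Baselining as on slide 45 means learning what "normal" looks like for each metric and alerting on deviations. The simplest form is a rolling mean plus a standard-deviation band; real products use far more robust statistics. A sketch with an assumed 3-sigma threshold:

```typescript
// Sketch: baseline a metric with a rolling mean + standard deviation and
// alert on large deviations. The 3-sigma threshold is an assumption.
function isAnomaly(history: number[], latest: number, sigmas = 3): boolean {
  const mean = history.reduce((s, v) => s + v, 0) / history.length;
  const variance = history.reduce((s, v) => s + (v - mean) ** 2, 0) / history.length;
  return Math.abs(latest - mean) > sigmas * Math.sqrt(variance);
}

// Example: steady ~200 ms response times, then a 900 ms spike -> alert.
console.log(isAnomaly([195, 210, 204, 199, 202], 900)); // true
```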
  46. Automated Problem and Impact Detection
  47. Automated Problem and Impact Detection (continued)
  48. Automatic Integration with ChatOps
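ChatOps integration usually means pushing a detected problem into a chat channel via an incoming webhook. A sketch assuming a Slack-style webhook that accepts a JSON body with a text field; the URL is a placeholder:

```typescript
// Sketch: push a detected problem into chat via an incoming webhook.
// The URL is a placeholder; Slack-style webhooks accept { "text": ... }.
async function notifyChat(problem: { title: string; impact: string }): Promise<void> {
  await fetch("https://hooks.example.com/services/T000/B000/XXXX", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: `:rotating_light: ${problem.title} (impact: ${problem.impact})` }),
  });
}

// Example: notifyChat({ title: "Response time degradation on /checkout", impact: "3,200 users" });
```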
  49. A better way: self-service for all; automated monitoring; user experience is everything; more time innovating, not monitoring.
  50. Closing the Ops to Dev Feedback Loop: One Step at a Time! (recap of slide 5: the six steps from Basic App Monitoring through providing "Monitoring as a Service" for cloud-native application teams, and the Ops questions each one answers)
  51. DXS (DevOps Xcelerator) will: differentiate your sale, create value-based outcomes, and accelerate growth opportunities. Watch the DXS enablement course on Dynatrace University! Stop by the DXS networking table to learn more!
  52. Q & A. Brian Chandler, Sales Engineer @ Dynatrace (@Channer531); Andreas Grabner, Chief DevOps Activist @ Dynatrace (@grabnerandi). Try Dynatrace: http://bit.ly/dtsaastrial | Listen to our podcast: http://bit.ly/pureperf | Read more on our blog: http://blog.dynatrace.com
