
Machine Learning for Automated Diagnosis of Distributed Systems Performance


  1. Machine Learning for Automated Diagnosis of Distributed Systems Performance. Ira Cohen, HP Labs, June 2006. http://www.hpl.hp.com/personal/Ira_Cohen
  2. Intersection of systems and ML/data mining: a growing (research) area
     - Berkeley's RAD lab (Reliable Adaptable Distributed systems lab) got $7.5M from Google, Microsoft, and Sun for "... adoption of automated analysis techniques from Statistical Machine Learning (SML), control theory, and machine learning, to radically improve detection speed and quality in distributed systems"
     - Workshops devoted to the area (e.g., SysML), papers in leading systems and data mining conferences
     - Part of IBM's "Autonomic Computing" and HP's "Adaptive Enterprise" visions
     - Startups (e.g., Splunk, LogLogic)
     - And more...
  3. SLIC project at HP Labs*: Statistical Learning, Inference and Control
     - Research objective: provide technology enabling automated decision making, management, and control of complex IT systems.
       - Explore statistical learning, decision theory, and machine learning as the basis for automation.
     - *Participants/collaborators: Moises Goldszmidt, Julie Symons, Terence Kelly, Armando Fox, Steve Zhang, Jeff Chase, Rob Powers, Chengdu Huang, Blaine Nelson
     - I'll focus today on performance diagnosis
  4. Intuition: Why is performance diagnosis hard?
     - What do you do when your PC is slow?
  5. Why care about performance?
     - Answer: it costs companies BIG money
     - Analysts estimate that poor application performance costs U.S.-based companies approximately $27 billion each year
     - Performance management software revenue is growing at double-digit percentages every year!
  6. Challenges today in diagnosing/forecasting IT performance problems
     - Distributed systems/services are complex
       - Thousands of systems/services/applications are typical
       - Multiple levels of abstraction and interactions between components
       - Systems/applications change rapidly
     - Multiple levels of responsibility (infrastructure operators, application operators, DBAs, ...) -> a lot of finger pointing
       - Problems can take days/weeks to resolve
     - Loads of data, no actionable information
       - Operators manually search for the needle in the haystack
       - Multiple types of data sources, and a lack of unifying tools to even view the data
     - Operators hold past diagnosis efforts in their heads; the history of diagnosis efforts is mostly lost
  7. Translation to machine learning challenges
     - Transforming data to information: classification and feature selection methods, with a need for explanation
     - Adaptation: learning with concept drift
     - Leveraging history: transforming diagnosis into an information retrieval problem, clustering methods, etc.
     - Using multiple data sources: combining structured and semi-structured data
     - Scalable machine learning solutions: distributed analysis, transfer learning
     - Using human feedback (human in the loop): semi-supervised learning (active learning, semi-supervised clustering)
  8. Outline
     - Motivation (already behind us...)
     - Concrete example: the state of distributed performance management today
     - ML challenges
       - Examples of research results
     - Bringing it all together as a tool: providing diagnostic capabilities as a centrally managed service
     - Discussion/summary
  9. Example: A real distributed HP application architecture
     - Geographically distributed 3-tier application
     - Results shown today are from the last 19+ months of data collected from this service
  10. Application performance "management": Service Level Objectives (SLOs)
     - Unhealthy = SLO violation
  11. Detection is not enough...
     - Leverage history:
       - Did we see similar problems in the past?
       - What were the repair actions?
       - Do/did they occur in other data centers?
     - Triage:
       - What are the symptoms of the problem?
       - Who do I call?
     - Can we forecast these problems?
     - Problem prioritization:
       - How many different problems are there, and what is their severity?
       - Which are recurrent?
  12. Challenge 1: Transforming data to information...
     - Many measurements (metrics) are available on IT systems (OpenView, Tivoli, etc.)
       - System/application metrics: CPU, memory, disk, network utilizations, queues, etc.
       - Measured on a regular basis (1-5 minutes with commercial tools)
     - Other semi-structured data (log files)
     - Where is the relevant information?
  13. ML approach: model using classifiers
     - Leverage all the data collected in the infrastructure
     - Use classifiers: F(M) -> SLO state (healthy/unhealthy)
     - Classification accuracy is a measure of success
     - Use feature selection to find the metrics most predictive of the SLO state
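A minimal sketch of this step (not the SLIC implementation): fit a classifier that maps a vector of system metrics to the SLO state and use a feature selection step to surface the few metrics most predictive of violations. The synthetic data, metric count, and the choice of mutual information plus a Gaussian naive Bayes classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
M = rng.normal(size=(2000, 100))                       # 2000 samples x 100 system metrics
slo_violation = (M[:, 3] + M[:, 30] > 1).astype(int)   # synthetic SLO state

# Keep only the metrics that carry the most information about the SLO state.
selector = SelectKBest(mutual_info_classif, k=5).fit(M, slo_violation)
selected = selector.get_support(indices=True)
print("most predictive metrics:", selected)

# Classification accuracy on the selected metrics is the measure of success.
clf = GaussianNB()
scores = cross_val_score(clf, M[:, selected], slo_violation, cv=5)
print("accuracy: %.2f" % scores.mean())
```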
  14. But we need an explanation, not just classification accuracy...
     - Our approach: learn the joint probability distribution P(M, SLO) with Bayesian network classifiers, which also gives the class-conditional P(M | SLO)
     - Inference ("metric attribution"): for each metric during a violation, decide whether it is
       - Normal: the metric has a value associated with healthy behavior, or
       - Abnormal: the metric has a value associated with unhealthy behavior
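A minimal sketch of metric attribution under a strong simplifying assumption: per-metric Gaussian class conditionals with no dependencies (plain naive Bayes rather than the full Bayesian network classifiers described above). All data is synthetic and the metric indices are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def fit_class_conditionals(M, y):
    """Per-metric Gaussian P(m_i | SLO state) for states 0 (healthy) and 1 (violation)."""
    params = {}
    for state in (0, 1):
        sub = M[y == state]
        params[state] = (sub.mean(axis=0), sub.std(axis=0) + 1e-6)
    return params

def attribute(sample, params):
    """Return +1 (abnormal) / -1 (normal) per metric for one violation sample."""
    ll_healthy = norm.logpdf(sample, *params[0])
    ll_violation = norm.logpdf(sample, *params[1])
    return np.where(ll_violation > ll_healthy, 1, -1)

rng = np.random.default_rng(1)
M = rng.normal(size=(500, 8))
y = (M[:, 2] > 0.5).astype(int)
params = fit_class_conditionals(M, y)
print(attribute(M[y == 1][0], params))   # metric 2 should tend to be flagged abnormal
```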
  15. Bayesian network classifiers: results
     - "Fast" (in the context of 1-5 min data collection):
       - Models take 2-10 seconds to train on days' worth of data
       - Metric attribution takes 1-10 ms to compute
     - On the order of 3-10 metrics (out of hundreds) are needed to accurately capture a performance problem
     - Accuracy is high (~90%)*
     - Experiments showed the metrics are useful for diagnosing certain problems on real systems
     - Hard to capture multiple types of performance problems with a single model!
     - (Figure: network linking the SLO state to metrics M3, M5, M8, M30, M32)
  16. Additional issues
     - How much data is needed to get accurate models?
     - How to detect model validity?
     - How to present models/results to operators?
  17. Challenge 2: Adaptation
     - Systems and applications change
     - Reasons for performance problems change over time (and sometimes recur)
     - Learning with "concept drift"
     - (Figure: recurring violation periods; different, or the same problem?)
  18. Adaptation: possible approaches
     - Single omniscient model: "train once, use forever"
       - Assumes the training data provides all the information
     - Online updating of the model
       - E.g., parameter/structure updating of Bayesian networks, online learning of neural networks, support vector machines, etc.
       - Potentially wasteful retraining when similar problems recur
     - Maintain an ensemble of models
       - Requires criteria for choosing a subset of models at inference time
       - Criteria for adding new models to the ensemble
       - Criteria for removing models from the ensemble
  19. Our approach: managing an ensemble of models for our classification approach
     - Construction:
       - Periodically induce a new model
       - Check whether the model adds new information (classification accuracy)
       - Update the ensemble of models
     - Inference: use the Brier score to select among models
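A minimal sketch of the ensemble bookkeeping, assuming each stored model exposes scikit-learn-style predict()/predict_proba() methods; the add/select criteria below are illustrative stand-ins for the ones used in the project.

```python
import numpy as np

def brier_score(model, X_recent, y_recent):
    """Mean squared error of the predicted violation probability (lower is better)."""
    p = model.predict_proba(X_recent)[:, 1]
    return np.mean((p - y_recent) ** 2)

def classify_with_ensemble(ensemble, X_recent, y_recent, x_new):
    """Winner takes all: the model with the best recent Brier score classifies x_new."""
    best = min(ensemble, key=lambda m: brier_score(m, X_recent, y_recent))
    return best.predict(x_new.reshape(1, -1))[0]

def update_ensemble(ensemble, new_model, X_recent, y_recent):
    """Keep the freshly trained model only if it adds information (here: beats the
    best existing model's accuracy on recent data)."""
    new_acc = np.mean(new_model.predict(X_recent) == y_recent)
    best_acc = max((np.mean(m.predict(X_recent) == y_recent) for m in ensemble), default=0.0)
    if new_acc > best_acc:
        ensemble.append(new_model)
    return ensemble
```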
  20. Adaptation: results
     - ~7,500 samples at 5 mins/sample (one month), ~70 metrics
     - Classifying a sample with the ensemble of BNCs:
       - Used the model with the best Brier score for predicting the class (winner takes all)
         - The Brier score was better than other measures (e.g., accuracy, likelihood)
         - Winner-takes-all was more accurate than other combination approaches (e.g., majority voting)

     Approach                                                 Accuracy (%)   Total processing time (mins)
     Single model: no adaptation                                  61.4              0.2
     Single model trained with all history (no forgetting)       82.4             71.5
     Ensemble of models                                           90.7              7.1
     Single model with sliding window                             84.2              0.9
  21. Adaptation: results
     - The "single adaptive" model is slower to adapt to recurrent issues
       - It must re-learn the behavior instead of just selecting a previous model
  22. Additional issues
     - Need criteria for "aging" models
     - Periods of "good" behavior also change: need robustness to those changes as well
  23. Challenge 3: Leveraging history
     - It would be great to have a system that annotates each problem, e.g.:
       Diagnosis: stuck thread due to insufficient database connections
       Repair: increase connections to +6
       Periods: ...
       Severity: SLO time increases up to 10 secs
       Location: Americas; not seen in Asia/Pacific
  24. Leveraging history
     - Main challenge: find a representation (signature) that captures the main characteristics of the system behavior and is:
       - Amenable to distance metrics
       - Generated automatically
       - In machine-readable form
  25. Our approach to defining signatures
     1) Learn probabilistic classifiers P(SLO, M) on unhealthy periods
     2) Run inference: metric attribution (e.g., abnormal metrics: DB CPU util, high app active proc, high app alive proc, high app CPU util)
     3) Define the attributed metrics as the signatures of the problems
  26. Example: defining a signature
     - For a given SLO violation, the models provide a list of metrics that are attributed with the violation.
     - A metric gets value 1 if it is attributed with the violation, -1 if it is not attributed, and 0 if it is not relevant.
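A minimal sketch of how such a signature vector might be assembled from an attribution result, so each violation becomes a point in a space with a natural distance metric. The metric names and sets below are hypothetical.

```python
import numpy as np

def make_signature(all_metrics, model_metrics, abnormal_metrics):
    """1 = attributed, -1 = in the model but not attributed, 0 = not relevant."""
    sig = np.zeros(len(all_metrics))
    for i, name in enumerate(all_metrics):
        if name in model_metrics:
            sig[i] = 1 if name in abnormal_metrics else -1
    return sig

metrics = ["DB cpu util", "app active proc", "app alive proc", "app cpu util", "net out"]
sig = make_signature(metrics,
                     model_metrics={"DB cpu util", "app active proc", "app cpu util"},
                     abnormal_metrics={"DB cpu util", "app cpu util"})
print(sig)   # -> [ 1. -1.  0.  1.  0.]
```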
  27. Results: with signatures...
     - We were able to accurately retrieve past occurrences of similar performance problems, together with their diagnosis efforts
     - ML technique: information retrieval
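A minimal sketch of the retrieval step: rank annotated past signatures by distance to the signature of the current violation, so an operator can see whether a similar problem (and its repair) was already diagnosed. The Euclidean distance and the tiny annotation store are illustrative assumptions.

```python
import numpy as np

def retrieve(current_sig, history, k=3):
    """history: list of (signature, annotation) pairs; returns the k nearest."""
    ranked = sorted(history, key=lambda item: np.linalg.norm(current_sig - item[0]))
    return ranked[:k]

history = [
    (np.array([1, -1, 0, 1, 0]),  "Stuck thread / insufficient DB connections"),
    (np.array([-1, 1, 1, -1, 0]), "Backup job saturating app CPU"),
]
for sig, note in retrieve(np.array([1, -1, 0, 1, 0]), history, k=1):
    print(note)   # closest past problem and its recorded diagnosis
```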
  28. Results: retrieval accuracy
     - Retrieval of the "Stuck Thread" problem: top 100, 92 vs. 51
     - (Figure: precision-recall curve compared against the ideal P-R curve)
  29. Results: with signatures we can also...
     - Automatically identify groups of different problems and their severity
     - Identify which problems are recurrent
     - ML technique: clustering
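A minimal sketch of that clustering step on signature vectors: group them and read off cluster sizes to see how many distinct problems occurred and which recur. The use of k-means and the tiny synthetic signature set are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

signatures = np.array([[ 1, -1, 0,  1,  0],
                       [ 1, -1, 0,  1, -1],
                       [-1,  1, 1, -1,  0],
                       [-1,  1, 1, -1,  0]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(signatures)
print(labels, np.bincount(labels))   # recurrent problems show up as large clusters
```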
  30. Additional issues
     - Can we generalize and abstract signatures for different systems/applications?
     - How to incorporate human feedback for retrieval and clustering?
       - Semi-supervised learning: results not shown today
  31. Challenge 4: Combining multiple data sources
     - We have a lot of semi-structured text logs, e.g.:
       - Problem tickets
       - Event/error logs (application/system/security/network...)
       - Other logs (e.g., operator actions)
     - Logs can help obtain more accurate diagnoses and models; sometimes system/application metrics are not enough
     - Challenges:
       - Transforming logs into "features": information extraction
       - Doing it efficiently!
  32. Properties of logs
     - Log events have relatively short text messages
     - Much of the diversity in messages comes from different "parameters" (dates, machine/component names); the core text is less varied than free text
     - The number of events can be huge (e.g., >100 million events per day for large IT systems)
     - Event processing needs to compress the logs significantly, and do so efficiently!
  33. Our approach: processing application error logs
     - Significant reduction of messages: 200,000 -> 190
     - Accurate: clustering results validated against a hierarchical tree clustering algorithm
     - Pipeline: over 4,000,000 error-log entries -> 200,000+ distinct error messages -> similarity-based sequential clustering -> 190 "feature messages"
     - Use the count of appearances of each feature message over 5-minute intervals as metrics for learning
     - Example raw entry: 2006-02-26T00:00:06.461 ES_Domain:ES_hpat615_01:2257913:Thread43.ES82|commandchain.BaseErrorHandler.logException()|FUNCTIONAL|0||FatalException occurred type=com.hp.es.service.productEntitlement.knight.logic.access.KnightIOException, message=Connection timed out, class=com.hp.es.service.productEntitlement.knight.logic.RequestKnightResultMENUCommand (further raw entries omitted)
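A minimal sketch in the spirit of that pipeline: normalize away obviously variable fields, merge near-identical messages into templates ("feature messages"), and then count appearances per 5-minute window. The SequenceMatcher-based similarity and the 0.9 threshold are illustrative stand-ins for the similarity-based sequential clustering actually used.

```python
import re
from difflib import SequenceMatcher

def normalize(line):
    """Replace fields that vary per event (timestamps, numeric ids) with tokens."""
    line = re.sub(r"\d{4}-\d{2}-\d{2}T[\d:.]+", "<TS>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

def cluster(messages, threshold=0.9):
    """Assign each normalized message to the first template it is similar enough to."""
    templates, assignment = [], []
    for msg in messages:
        for i, t in enumerate(templates):
            if SequenceMatcher(None, msg, t).ratio() >= threshold:
                assignment.append(i)
                break
        else:
            templates.append(msg)
            assignment.append(len(templates) - 1)
    return templates, assignment

raw = ["2006-02-26T00:00:06.461 serverA Connection timed out after 30 attempts",
       "2006-02-26T00:05:11.002 serverB Connection timed out after 12 attempts",
       "2006-02-26T00:06:00.120 serverA backend system unavailable"]
templates, ids = cluster([normalize(r) for r in raw])
print(len(templates), ids)   # per-template counts over 5-minute windows become metrics
```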
  34. Learning probabilistic models
     - Construct probabilistic models of the metrics using a "hybrid gamma distribution" (a Gamma distribution with a point mass at zero)
     - (Figure: PDF over the number of appearances)
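A minimal sketch of one way such a zero-inflated Gamma could be fit and evaluated; the fitting procedure (scipy's maximum-likelihood fit with the location fixed at zero) is an assumption, not the project's exact method, and the counts are synthetic.

```python
import numpy as np
from scipy import stats

def fit_hybrid_gamma(x):
    """Point mass at zero plus a Gamma over the positive counts."""
    x = np.asarray(x, dtype=float)
    p_zero = np.mean(x == 0)
    shape, _, scale = stats.gamma.fit(x[x > 0], floc=0)
    return p_zero, shape, scale

def hybrid_gamma_logpdf(v, p_zero, shape, scale):
    if v == 0:
        return np.log(p_zero)
    return np.log(1 - p_zero) + stats.gamma.logpdf(v, shape, scale=scale)

counts = np.concatenate([np.zeros(300), stats.gamma.rvs(2.0, scale=3.0, size=700)])
params = fit_hybrid_gamma(counts)
print(params, hybrid_gamma_logpdf(5.0, *params))
```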
  35. Results: adding log-based metrics
     - Signatures using error-log metrics pointed to the right causes in 4 out of 5 "High" severity incidents in the past 2 months
       - System metrics were not related to the problems in these cases
     - From the operator incident report: "Diagnosis and Solution: Unable to start SWAT wrapper. Disk usage reached 100%. Cleaned up disk and restarted the wrapper..."
     - From the application error log: "CORBA access failure: IDL:hpsewrapper/SystemNotAvailableException:... com.hp.es.wrapper.corba.hpsewrapper.SystemNotAvailableException"
  36. Additional issues
     - With multiple instances of an application, how do we do joint, efficient processing of the logs?
     - Treating events as sequences in time could lead to better accuracy and compression
  37. Challenge 5: Scaling up machine learning techniques
     - Large-scale distributed applications have various levels of dependencies:
       - Multiple instances of components
       - Shared resources (DB, network, software components)
       - Thousands to millions of metrics (features)
     - (Figure: dependency graph over components A-E)
  38. Challenge 5: Possible approaches
     - Scalable approach: ignore dependencies between components
       - Putting our heads in the sand?
       - See Werner Vogels' (Amazon's CTO) thoughts on it...
     - Centralized approach: use all available data together for building models
       - Not scalable
     - A different approach: transfer models, not metrics
       - Good for components that are similar and/or have similar measurements
  39. Example: Diagnosis with multiple instances
     - Method 1: diagnose multiple instances by sharing measurement data (metrics)
     - (Figure: two instances, A and B, exchanging metrics)
  40. Diagnosis with multiple instances
     - Method 1, with many instances: sharing measurement data (metrics)
     - (Figure: instance A exchanging metrics with instances B-H)
  41. Diagnosis with multiple instances
     - Method 2: diagnose multiple instances by sharing learning experience (models)
       - A form of transfer learning
     - (Figure: two instances, A and B, exchanging models)
  42. Diagnosis with multiple instances
     - Method 2, with many instances: sharing learning experience (models)
     - (Figure: instance A exchanging models with instances B-H)
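A minimal sketch of the "share models, not metrics" idea: an instance scores both its own models and models imported from sibling instances on its recent data, so a problem already learned elsewhere can be recognized without retraining. The predict_proba() interface and the Brier-style scoring are assumptions for illustration.

```python
import numpy as np

def brier(model, X_recent, y_recent):
    """Brier score of a model on this instance's recent samples (lower is better)."""
    p = model.predict_proba(X_recent)[:, 1]
    return np.mean((p - y_recent) ** 2)

def pick_model(local_models, imported_models, X_recent, y_recent):
    """Choose the best-scoring model; it may well be one imported from another instance."""
    candidates = list(local_models) + list(imported_models)
    return min(candidates, key=lambda m: brier(m, X_recent, y_recent))
```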
  43. Metric exchange: does it help?
     - Building models based on the metrics of other instances
     - Observation: metric exchange does not improve model performance for load-balanced instances
     - (Figure: online predictions over time epochs for Instances 1 and 2, showing violation detections and a false alarm with and without exchange)
  44. Model exchange: does it help?
     - Apply models trained on other instances
     - Observation 1: model exchange enables quicker recognition of previously unseen problem types
     - Observation 2: model exchange reduces model training cost
     - Models imported from other instances improve accuracy
     - (Figure: online predictions over time epochs, showing violation detections and false alarms with and without model exchange)
  45. Additional issues
     - How do we (and can we) do transfer learning on similar but not identical instances?
     - More efficient methods for detecting which data is needed from related components during diagnosis
  46. Providing diagnosis as a web service: SLIC's IT-Rover
     - A centralized diagnosis web service allows:
       - Retrieval across different data centers, different services, and possibly different companies
       - Fast deployment of new algorithms
       - Better understanding of real problems for further development of algorithms
       - The value of the portal is in the information ("Google" for systems)
     - (Figure: architecture with monitored services, metrics/SLO monitoring, signature construction engine, signature DB, clustering engine, retrieval engine, and admin access)
  47. Discussion: additional issues, opportunities, and challenges
     - Beyond the "black box": using domain knowledge
       - Expert knowledge
       - Topology information
       - Use known dependencies and causal relationships between components
     - Provide solutions in cases where SLOs are not known
       - Learn the relationship between business objectives and IT performance
       - Anomaly detection methods with feedback mechanisms
     - Beyond diagnosis: automated control and decision making
       - HP Labs work on applying adaptive controllers for controlling systems/applications
       - IBM Labs work on using reinforcement learning for resource allocation
  48. Summary
     - Presented several challenges at the intersection of machine learning and automated IT diagnosis
     - A relatively new area for machine learning and data mining researchers and practitioners
     - Many more opportunities and challenges ahead, both research- and product/business-wise...
     - Read more: www.hpl.hp.com/research/slic
       - SOSP-05, DSN-05, HotOS-05, KDD-05, OSDI-04
  49. Publications
     - Ira Cohen, Steve Zhang, Moises Goldszmidt, Julie Symons, Terence Kelly, Armando Fox, "Capturing, Indexing, Clustering, and Retrieving System History", SOSP 2005.
     - Rob Powers, Ira Cohen, and Moises Goldszmidt, "Short term performance forecasting in enterprise systems", KDD 2005.
     - Moises Goldszmidt, Ira Cohen, Armando Fox and Steve Zhang, "Three research challenges at the intersection of machine learning, statistical induction, and systems", HotOS 2005.
     - Steve Zhang, Ira Cohen, Moises Goldszmidt, Julie Symons, Armando Fox, "Ensembles of models for automated diagnosis of system performance problems", DSN 2005.
     - Ira Cohen, Moises Goldszmidt, Terence Kelly, Julie Symons, Jeff Chase, "Correlating instrumentation data to system states: A building block for automated diagnosis and control", OSDI 2004.
     - George Forman and Ira Cohen, "Beware the null hypothesis", ECML/PKDD 2005.
     - Ira Cohen and Moises Goldszmidt, "Properties and Benefits of Calibrated Classifiers", ECML/PKDD 2004.
     - George Forman and Ira Cohen, "Learning from Little: Comparison of Classifiers given Little Training", ECML/PKDD 2004.
