Argus Production Monitoring at Salesforce


We’ll present details about Argus, a time-series monitoring and alerting platform developed at Salesforce to provide insight into the health of infrastructure, as an alternative to systems such as Graphite and Seyren.



1. Argus Production Monitoring at Salesforce: Service Health & Observability at Scale
   Tom Valine, Director, Infrastructure Engineering (tvaline@salesforce.com, in/tvaline)
   Bhinav Sura, Software Engineer, Infrastructure Engineering (bhinav.sura@salesforce.com, in/bhinavsura)
2. What is Argus?
   ● Time series data & events
   ● Inbuilt service protection
   ● Alerting
   ● Flexible dashboarding
   ● Full REST API
   ● High throughput
   ● Low latency
   ● Horizontally scalable
   ● In use by: Capacity Planning, Search, feature teams, Site Reliability, Customer Success
3. But Why Another Monitoring System?
   ● Technology changes frequently!
   ● Insulate our customers
   ● Performance
   ● Trust
   ● Programmatic access for everything
   ● Multi-tenancy
   ● Correlation with non-timeseries data
   ● Highly dimensional
4. I’ve seen this somewhere before...
   Metrics: transforms, namespace, scope, name, tags, aggregator, downsampler
   Events: namespace, scope, name, tags, type, user

   Metric expression anatomy:
   SCALE(-2d:-1d:dva:argus:freemem{host=*}:min:1d-min, $1e-6)
   TRANSFORM(START:END:NAMESPACE:SCOPE:METRIC{TAGS}:AGG:DS, PARAMS)

   Event expression anatomy:
   -2d:-1d:dva:argus:release{host=*}:major:admin
   START:END:NAMESPACE:SCOPE:NAME{TAGS}:TYPE:USER
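Reading the metric example field by field: SCALE is the transform and $1e-6 its parameter (scaling each datapoint by 1e-6); -2d:-1d is the query window, from two days ago to one day ago; dva is the namespace and argus the scope; freemem{host=*} is the metric name with the host tag wildcarded across all hosts; min is the aggregator across matching series; and 1d-min downsamples each series to the minimum per one-day bucket.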
5. Events
   ● First-class data
   ● Decoupled from time series
   ● Multiple events per timestamp
   ● Event categories
   ● Identifiable per user
   ● Overlay on any time series
6. Alerting
   ● CRON format
   ● Alert on missing data
   ● Single-ended & range comparisons
   ● Inertia
   ● Cooldown (inertia and cooldown are sketched after this list)
   ● Multiple triggers
   ● Multiple notifications: Audit, Email, GOC++, Salesforce Chatter, PagerDuty
   ● Event back-annotation
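To make inertia and cooldown concrete: inertia requires the threshold to be breached for the entire inertia window before the trigger fires, and cooldown suppresses repeat notifications for a period after one is sent. A minimal sketch of that interplay, with invented class and method names rather than Argus's actual implementation:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.TreeMap;

    // Illustrative only: evaluates a single greater-than trigger the way the
    // slide describes, i.e. the threshold must be breached for the full
    // inertia window, and notifications are suppressed during cooldown.
    public class TriggerSketch {
        static boolean breachedForInertia(TreeMap<Instant, Double> series,
                                          double threshold, Duration inertia) {
            Instant breachStart = null;
            for (Map.Entry<Instant, Double> point : series.entrySet()) {
                if (point.getValue() > threshold) {
                    if (breachStart == null) breachStart = point.getKey();
                    // Fire only once the breach has persisted for the inertia window.
                    if (Duration.between(breachStart, point.getKey()).compareTo(inertia) >= 0) {
                        return true;
                    }
                } else {
                    breachStart = null; // breach interrupted, restart the clock
                }
            }
            return false;
        }

        static boolean shouldNotify(boolean triggered, Instant now, Instant cooldownExpiry) {
            // Cooldown: even if the trigger fires, stay silent until the
            // previous notification's cooldown has elapsed.
            return triggered && now.isAfter(cooldownExpiry);
        }
    }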
7. Warden
   ● Policy-driven suspension mechanism
   ● Per user
   ● Application & subsystem
   ● Progressively punitive (sketched after this list)
   ● Indefinite suspension supported
   ● Customizable
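One way to read "progressively punitive": each successive policy violation draws a longer suspension, with indefinite suspension past a configured limit. A sketch under those assumed semantics; the doubling schedule and the cutoff are inventions for illustration:

    import java.time.Duration;

    // Illustrative sketch: suspension length doubles with each infraction,
    // and the user is suspended indefinitely after too many of them.
    public class WardenSketch {
        private static final Duration BASE = Duration.ofMinutes(10);
        private static final int INDEFINITE_AFTER = 5; // assumed policy knob

        static Duration suspensionFor(int infractionCount) {
            if (infractionCount >= INDEFINITE_AFTER) {
                return Duration.ofDays(36500); // effectively indefinite
            }
            // 10 min, 20 min, 40 min, ... one doubling per prior infraction
            return BASE.multipliedBy(1L << Math.max(0, infractionCount - 1));
        }
    }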
8. Dashboarding
   ● Maintaining dashboards is a horrible business to be in
   ● Empower the users, get out of their way
   ● Markup based
   ● Custom tags for visualization elements (sketched after this list)
   ● HTML for everything else
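A sketch of what such markup might look like, mixing plain HTML with a custom visualization tag; the ag-* tag names here are hypothetical stand-ins, not necessarily Argus's real tag vocabulary:

    <h1>Free memory, last day</h1>
    <!-- hypothetical custom tag wrapping a chart; everything else is plain HTML -->
    <ag-chart>
      <ag-metric>-1d:dva:argus:freemem{host=*}:min:1h-min</ag-metric>
    </ag-chart>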
9. REST
   ● API first
   ● All functionality exposed via services
   ● Decoupled UI
   ● Authenticated: login, do stuff, logout (see the sketch after this list)
   ● Get out of the user's way: Orchestra Client, ArgusPoke, Dashboard Creation Tool
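A minimal client sketch of the login / do stuff / logout flow. The /argusws paths, payload shape, and expression encoding are assumptions for illustration, not a documented contract:

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Illustrative login -> query -> logout flow against the Argus web services.
    public class ArgusRestSketch {
        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newBuilder()
                    .cookieHandler(new CookieManager()) // keep the session cookie
                    .build();
            String base = "https://argus.example.com/argusws"; // hypothetical host

            HttpRequest login = HttpRequest.newBuilder(URI.create(base + "/auth/login"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "{\"username\":\"user\",\"password\":\"secret\"}"))
                    .build();
            http.send(login, HttpResponse.BodyHandlers.ofString());

            // URL-encoded form of: -1h:dva:argus:freemem{host=*}:min:1m-min
            HttpRequest query = HttpRequest.newBuilder(URI.create(base
                    + "/metrics?expression=-1h%3Adva%3Aargus%3Afreemem%7Bhost%3D*%7D%3Amin%3A1m-min"))
                    .GET().build();
            System.out.println(http.send(query, HttpResponse.BodyHandlers.ofString()).body());

            http.send(HttpRequest.newBuilder(URI.create(base + "/auth/logout")).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
        }
    }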
10. How does it work?
   [Architecture diagram: web services and a web UI sit in front of core services (metrics, annotation, user, entity, alerts, mail, scheduling, monitoring, auth, ORM, MQ, TSDB) alongside dashboard management, Warden, namespace, schema, wildcarding, caching, and interlock; consumers include the web UI, custom apps, and other clients.]
11. Okay, but how does it REALLY work?
   [Deployment diagram: many identical nodes, each running UI, web service, core, and client components, all sharing one message bus and one HBase/TSDB/RDBMS/caching tier.]
12. Cool, how will it evolve going forward?
   [Future architecture diagram: multiple storage clusters (HBase/TSDB/RDBMS/cache), each fronted by its own message bus, federated through route/fork/join and map-reduce layers between the UI/WS/core nodes and the clusters.]
13. Alert Evaluation Data Flows
   Message queue:
   1. The scheduling service updates the alert schedule every 10 minutes.
   2. The scheduler submits scheduled jobs to the queue.
   3. The minimum evaluation interval is 1 minute.
   Alert client:
   1. Dequeues from the alert queue.
   2. Query ranges are adjusted for scheduling latency.
   3. Triggers are evaluated.
   4. Notifications are sent.
   5. Cooldowns are updated.
   [Diagram: the scheduling service reads alerts from the alert data store into an alert cache, and scheduled alerts flow through the queue alongside the Argus web services.]
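A condensed, self-contained sketch of the alert client loop described above; the ScheduledAlert record and the evaluator hooks are invented for the sketch:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.concurrent.BlockingQueue;

    // Illustrative alert-client loop for the flow above.
    public class AlertClientSketch {
        record ScheduledAlert(long alertId, Instant scheduledAt, Duration window) {}

        interface Evaluator {
            boolean evaluate(long alertId, Instant start, Instant end); // trigger check
            void notifyAndUpdateCooldown(long alertId);                 // steps 4 and 5
        }

        static void run(BlockingQueue<ScheduledAlert> queue, Evaluator eval)
                throws InterruptedException {
            while (true) {
                ScheduledAlert job = queue.take(); // step 1: dequeue
                Instant now = Instant.now();
                // Step 2: widen the query window by the scheduling latency, so data
                // that arrived while the job sat in the queue is still considered.
                Duration latency = Duration.between(job.scheduledAt(), now);
                Instant start = now.minus(job.window()).minus(latency);
                if (eval.evaluate(job.alertId(), start, now)) {   // step 3
                    eval.notifyAndUpdateCooldown(job.alertId());  // steps 4-5
                }
            }
        }
    }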
14. Metric & Event Data Flows
   Message queue:
   1. Writes are asynchronous, with a high degree of parallelism.
   2. The queue is used as a shock absorber, tolerant of lower-level failures and downtime.
   3. Kafka for scalability: one topic each for metrics and annotations, with partition counts in the hundreds.
   ArgusMetricsQueue:
   1. Consumed by two types of clients: MetricCommit and SchemaCommit.
   2. The MetricCommit client commits the actual time series data to persistent storage (using OpenTSDB or Phoenix).
   3. The SchemaCommit client uses only the metric metadata to create metric schema records, committing them to HBase (using AsyncHBase).
   [Diagram: metrics flow from the Argus web services through the queue to the metric service, then to the time series store and the schema store.]
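The two-consumer fan-out is standard Kafka: MetricCommit and SchemaCommit clients consume the same topic in separate consumer groups, so each group sees every record. A minimal consumer sketch using the kafka-clients API (topic name from the slide; broker address and group naming assumed):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Minimal sketch of one of the two ArgusMetricsQueue consumers. Running the
    // same code with group.id "schema-commit" would give the SchemaCommit view,
    // since separate consumer groups each receive the full stream.
    public class MetricCommitSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka.example.com:9092"); // hypothetical
            props.put("group.id", "metric-commit");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("ArgusMetricsQueue"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // MetricCommit: persist datapoints to OpenTSDB/Phoenix here.
                        System.out.println("committing " + record.value());
                    }
                }
            }
        }
    }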
15. TSDB Service Implementation: OpenTSDB
   ● Uses HBase underneath
   ● RowKey: <metric_uid><timestamp><tagk1><tagv1>[...<tagkn><tagvn>] (unpacked in the sketch after this list)
   ● Stores actual time series values on hourly boundaries (all values within an hour go in the same cell)
   ● Pros:
     ○ Extremely fast when querying with a complete metric name
     ○ 5M datapoints/min write throughput per write daemon
   ● Cons:
     ○ Tag cardinality: the total number of tags per metric is limited to 8
     ○ Tag cardinality: as the product of tag values across all tag keys grows, performance degrades drastically
     ○ UID exhaustion: by default there are 16M UIDs each for metric, tagk, and tagv names; once exhausted, no new metrics, tag keys, or tag values can be created
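To make the rowkey concrete: the 16M default UID space implies 3-byte UIDs, so a key is just packed IDs plus an hour-aligned base timestamp. A simplified sketch; real OpenTSDB adds encoding details (such as optional salting) omitted here:

    import java.nio.ByteBuffer;

    // Simplified OpenTSDB-style rowkey: 3-byte metric UID (16M = 2^24 values),
    // 4-byte hour-aligned base timestamp, then 3-byte tagk/tagv UID pairs.
    public class RowKeySketch {
        static byte[] rowKey(int metricUid, long epochSeconds, int[][] tagUidPairs) {
            ByteBuffer buf = ByteBuffer.allocate(3 + 4 + 6 * tagUidPairs.length);
            putUid(buf, metricUid);
            buf.putInt((int) (epochSeconds - epochSeconds % 3600)); // hourly boundary
            for (int[] pair : tagUidPairs) {
                putUid(buf, pair[0]); // tagk UID
                putUid(buf, pair[1]); // tagv UID
            }
            return buf.array();
        }

        private static void putUid(ByteBuffer buf, int uid) {
            // lower 3 bytes only: the UID space is 2^24
            buf.put((byte) (uid >>> 16)).put((byte) (uid >>> 8)).put((byte) uid);
        }
    }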
16. TSDB Service Implementation: Phoenix
   ● Uses HBase underneath
   ● RowKey: <metric_uid><timestamp><tagv1>[...<tagvn>]
   ● Metric modeled as a Phoenix VIEW
     ○ Schema is introspectable and managed outside of the data
     ○ Supports secondary indexes on value and/or tag(s)
   ● Parallelizes queries and pushes computation to the server
     ○ Server-side aggregation conserves network bandwidth
     ○ Allows the SKIP_SCAN filter optimization to minimize data scanned
     ○ Leverages the ROW_TIMESTAMP optimization for filtering HFiles
   ● Performance on par with or better than OpenTSDB
   ● Ad hoc SQL query capability (illustrated after this list)
     ○ Join against other Phoenix tables
   ● Longer term, leverage Drillix (Phoenix + Drill)
     ○ Cross-cluster queries
     ○ Joins to other non-HBase data sources
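As an illustration of the ad hoc capability, a hypothetical Phoenix query (view and column names invented) that aggregates a metric server-side:

    -- Hypothetical Phoenix view and columns, for illustration only.
    SELECT host, AVG(val)
    FROM argus.freemem            -- metric modeled as a Phoenix VIEW
    WHERE ts > CURRENT_DATE() - 1 -- ROW_TIMESTAMP column enables HFile filtering
    GROUP BY host;                -- aggregation pushed to the server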
17. Schema Service Motivation
   ● Discover metrics
     ○ Which metrics exist within a scope?
     ○ For a given <scope, metric> combination, which tags exist?
     ○ Given a metric, which scopes contain it?
     ○ Which tag values exist for a given tag key?
   ● Support wildcard queries
     ○ Non-wildcard query:
       -1h:system.myDatacenter.myPod:Cpu.perc:avg:1m-avg
     ○ Wildcard queries:
       -1h:system.myDatacenter.*:Cpu.perc:avg:1m-avg
       -1h:system.myDatacenter.myPod:Cpu*:avg:1m-avg
       -1h:system.myDatacenter.myPod:Cpu.perc{device=*app*}:avg:1m-avg
18. Schema Service Implementation
   ● AsyncHBase schema service:
     ○ Uses HBase underneath
     ○ SchemaRecord: namespace, scope, metric name, tagk, tagv; no data points
     ○ Each record is indexed two ways, in two different tables:
       ■ Metric-indexed schema table, RowKey: <metricname><scope><namespace><tagk><tagv>
       ■ Scope-indexed schema table, RowKey: <scope><metricname><namespace><tagk><tagv>
     ○ The table to use is chosen based on the type of query (see the sketch after this list)
     ○ Pro: efficient retrieval of schema records for most types of queries
     ○ Con: storage duplication
   ● DiscoveryService:
     ○ Uses the schema service internally
     ○ Can filter records by type (e.g., all unique scopes matching *myScope*)
     ○ Expands a wildcard query into a collection of non-wildcard queries
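The table choice comes down to which rowkey prefix is concrete, since HBase scans are cheap only with a fixed leading prefix. A sketch of that routing decision, with illustrative names:

    // Illustrative routing between the two index tables: pick the table whose
    // leading rowkey component is not wildcarded, so the scan has a fixed prefix.
    public class SchemaTableRouter {
        enum Table { METRIC_INDEXED, SCOPE_INDEXED }

        static Table choose(String scope, String metricName) {
            if (!scope.contains("*")) return Table.SCOPE_INDEXED;
            // The scope is wildcarded; a concrete metric name still gives a
            // fixed prefix in the metric-indexed table.
            return Table.METRIC_INDEXED;
        }
    }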
19. Caching
   ● CachedTSDB service:
     ○ Uses the Redis cache service and the configured TSDBService implementation (OpenTSDB or Phoenix)
     ○ Query-level caching (caches synthetic data)
     ○ Caches data for queries spanning more than the last 24 hours
     ○ Data is cached fractured on day boundaries, e.g. a query spanning 5 days is stored under 5 keys in the cache (sketched after this list)
     ○ Supports partial hits
     ○ Cache expiry time of an hour (can be increased by running a separate cache update process)
   ● CachedDiscovery service:
     ○ Uses the Redis cache service and the configured DiscoveryService implementation
     ○ Caches already-expanded queries
     ○ Cache expiry time of a day
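A sketch of the day-boundary fracturing: each day in the requested range gets its own cache key, so a partial hit means fetching only the missing days from the TSDB. The key format is invented for illustration:

    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative day-fractured cache keys: a 5-day query yields 5 keys, so a
    // cache miss on one day only forces a TSDB read for that day.
    public class CacheKeySketch {
        static List<String> keysFor(String expression, LocalDate start, LocalDate end) {
            List<String> keys = new ArrayList<>();
            for (LocalDate day = start; !day.isAfter(end); day = day.plusDays(1)) {
                keys.add(expression + "|" + day); // invented key format
            }
            return keys;
        }
    }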
20. Developed By
   ● Anand Subramanian
   ● Bhinav Sura
   ● Tom Valine
   ● Jigna Bhatt
   ● Ruofan Zhang
   ● Dilip Devaraj
   ● Raj Sarkapally
   ● Kiran Gowdru
   More information: https://github.com/SalesforceEng/Argus
21. Thank you!
