Tuning ElasticSearch for multi-terabyte analytics
A talk by Andrew Clegg at the ElasticSearch London meetup in November 2013 on how Pearson does large-scale analytical queries on ElasticSearch.

Tuning ElasticSearch for multi-terabyte analytics Presentation Transcript

  • 1. ElasticSearch London Tuning ElasticSearch for multi-terabyte analytics or… “Counting stuff is hard” Andrew Clegg Data Analytics & Visualization Team Pearson @andrew_clegg
  • 2. Introduction
  • 3. Our data Over 11 billion “docs” in production cluster. Each doc is around 1-2KB of JSON. ~60 million docs/day == ~700 docs/sec. Higher than this during peak times. Much higher when backfilling historical data. Conversely, not many end users yet: 5-20 on a typical day.
  • 4. Our architecture (diagram: Palomino)
  • 5. Getting data in Hardware (Yes, actual hardware!) Cisco UCS servers, 24 cores, 96GB memory. 8 x 1TB disks. 7 for data, 1 for log files, temp files, etc. Reads/writes parallelized across segments. Currently 5 of these in production cluster. 10Gb switch.
  • 6. Getting data in Index configuration We don’t store any data in ElasticSearch. All we need is facet counts. Disable _source, _all, and individual field storage. Disable term vectors and norms. No analysis on text fields (just unbroken strings). No date autodetection. (See the mapping sketch after the transcript.)
  • 7. Getting data in Weekly rolling indices mean shard count can increase as traffic does (chart: shard count over time, one new index each week). NB currently we have steady state, so it’s set to 5 shards each week. 3 replicas per shard (including primary). Real-time implies: can’t disable replication during indexing! (See the index template sketch after the transcript.)
  • 8. Getting data in Client configuration Multiple writer threads on multiple machines: currently 6 x 3. Bulk API: currently up to 1000 docs per batch. Incoming docs are queued until the batch, time or size limit is reached (e.g. 1000 docs, or 100000 bytes, or 2 secs since last batch). (See the bulk request sketch after the transcript.)
  • 9. Getting data in Other things we could do -- but currently don't: Tune indexer thread pool size? Tune segment merge policy? Reduce flush interval? Even without these, our current record is over 20,000 docs indexed/sec. (And we think the bottleneck was the client machines…) (See the settings sketch after the transcript.)
  • 10. Getting data out Typical queries Date histogram and terms facets are the most common by far. So we wrote our own versions with some optimizations :-) https://github.com/pearson-enabling-technologies/elasticsearch-approx-plugin Field data cache size is important for speed: currently 30% of 80GB heap. (In fact it actually uses much more than this with ES 0.90.2. Upgrade planned!) We always use search_type=count. (See the facet query sketch after the transcript.)
  • 11. Getting data out Facet workflow The client request goes to an arbitrary master node, which: ● parses the query ● distributes subqueries to data nodes (including itself) ● combines results (reduce function) ● returns to client. Data nodes: ● find matching records ● perform groupings and counts (and any other calculations) ● return to master.
  • 12. Getting data out Facet plugin optimizations Approximate data structures and sampling mode: Trade between speed/memory and accuracy. Uses Lucene’s BytesRef & BytesRefHash instead of String & HashSet. Micro-caching of local calculations, e.g. timestamp rounding. Explicit “render” phase after “reduce” phase: Defer as much as possible until then.
  • 13. Getting data out General advice for plugin writers Minimize object creation/destruction and type conversions. Use arrays of primitives, or Trove collections, where possible. Reuse buffers. Release objects as soon as possible when no longer needed. Lucene has some neat tricks: bit fields, fast hashing algorithms. So does ElasticSearch: CacheRecycler lets you reuse collections.
  • 14. Getting data out Hints for query performance tuning Tools like jmap, jstat, VisualVM and MAT are very helpful. Use the ES “hot threads” API to see where it’s spending its time (see the example call after the transcript). Set up unit/integration tests with time and RAM instrumentation.
  • 15. Getting data out Other things we could do -- but currently don’t Non-data nodes to parse queries, and handle reduce & render phases. Garbage collector tuning. (Note to self: see if Trove still crashes Java 7 JVM under G1 GC…) Use SSDs :-)
  • 16. Thanks! Any questions? https://github.com/pearson-enabling-technologies/ https://twitter.com/andrew_clegg
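
Notes on slide 6: a minimal mapping sketch, assuming the 0.90-era syntax, of how those storage and analysis features can be switched off. The index name, type and field names (events-2013-45, event, event_type, timestamp) are hypothetical placeholders, not taken from the talk.

    # Hypothetical weekly index: disables _source, _all, per-field storage,
    # term vectors, norms, string analysis and date autodetection.
    curl -XPUT 'localhost:9200/events-2013-45/' -d '{
      "mappings": {
        "event": {
          "_source":        { "enabled": false },
          "_all":           { "enabled": false },
          "date_detection": false,
          "properties": {
            "event_type": {
              "type":        "string",
              "index":       "not_analyzed",
              "store":       "no",
              "term_vector": "no",
              "omit_norms":  true
            },
            "timestamp": { "type": "date", "store": "no" }
          }
        }
      }
    }'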
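
Notes on slide 7: one way (not necessarily theirs) to give every weekly index the same shard and replica counts is an index template. ElasticSearch counts replicas excluding the primary, so "3 replicas per shard (including primary)" corresponds to number_of_replicas: 2. Template and index names are hypothetical.

    # Hypothetical template so each weekly index (e.g. events-2013-46) gets
    # 5 shards and 2 replicas (= 3 copies of each shard including the primary).
    curl -XPUT 'localhost:9200/_template/weekly_events' -d '{
      "template": "events-*",
      "settings": {
        "number_of_shards":   5,
        "number_of_replicas": 2
      }
    }'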
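
Notes on slide 8: the bulk API takes newline-delimited action/source pairs; the batching by document count, byte size or elapsed time described on the slide is client-side logic around requests of this shape. Index, type and field values are made up for illustration.

    # Each doc is one action line plus one source line; the body must be
    # newline-delimited and end with a trailing newline.
    curl -XPOST 'localhost:9200/_bulk' --data-binary '{ "index": { "_index": "events-2013-46", "_type": "event" } }
    { "timestamp": "2013-11-12T09:15:02Z", "event_type": "page_view", "user_id": "u123" }
    { "index": { "_index": "events-2013-46", "_type": "event" } }
    { "timestamp": "2013-11-12T09:15:03Z", "event_type": "click", "user_id": "u456" }
    '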
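
Notes on slide 9: the flush and merge knobs mentioned there live under the per-index settings API (setting names as of the 0.90.x line); indexer thread pool size is a node-level setting in elasticsearch.yml rather than an index setting. The values below are illustrative guesses, not figures from the talk.

    # Hedged sketch: looser refresh and translog flush thresholds during heavy
    # indexing; the merge policy has similar knobs under index.merge.policy.*
    curl -XPUT 'localhost:9200/events-2013-46/_settings' -d '{
      "index.refresh_interval": "30s",
      "index.translog.flush_threshold_ops": 50000
    }'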
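
Notes on slide 10: a count-only query of the shape the slide describes, using the stock date_histogram and terms facets (the linked plugin provides approximate replacements for these). Index pattern and field names are hypothetical.

    # search_type=count skips fetching hits, so only the facet results come back.
    curl -XGET 'localhost:9200/events-*/_search?search_type=count' -d '{
      "query": {
        "range": { "timestamp": { "gte": "2013-11-04", "lt": "2013-11-11" } }
      },
      "facets": {
        "events_over_time": {
          "date_histogram": { "field": "timestamp", "interval": "hour" }
        },
        "top_event_types": {
          "terms": { "field": "event_type", "size": 20 }
        }
      }
    }'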
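
Notes on slide 14: the "hot threads" API mentioned there is a plain GET that samples the busiest threads on each node.

    # Optional parameters such as ?threads=3&interval=500ms narrow the sample.
    curl -XGET 'localhost:9200/_nodes/hot_threads'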