Scaling SolrCloud to a Large Number of Collections - Fifth Elephant 2014
The traditional search use case is a single large collection distributed across many nodes and shared by all users. However, there is a class of applications that needs a large number of small or medium collections which can be used, managed and scaled separately. This talk covers our effort in helping a client build a large-scale SolrCloud installation with thousands of collections running on hundreds of nodes. I will describe the bottlenecks that we found in SolrCloud when running a large number of collections, and take you through the multiple features and optimizations that we contributed to Apache Solr to reduce or remove the choke points in the system. Finally, I will talk about the benchmarking process and the lessons learned from supporting such an installation in production.

1. Scaling SolrCloud to a large number of Collections
   Shalin Shekhar Mangar, Lucidworks Inc.
   shalin@apache.org
   twitter.com/shalinmangar

2. Apache Solr has a huge install base and tremendous momentum.
   • Solr: the most widely used search solution on the planet
   • 8M+ total downloads and 250,000+ monthly downloads: Solr is both established and growing
   • Solr has tens of thousands of applications in production. You use Solr every day.
   • Largest community of developers; 2500+ open Solr jobs

3. Solr scalability is unmatched.
   • box.com (Dropbox for business)
   • 10TB+ index size
   • 10 billion+ documents
   • 100 million+ daily requests

4. Solr scalability is unmatched.

5. The traditional search use-case
   • One large index distributed across multiple nodes
   • A large number of users sharing the data
   • Searches across the entire cluster

6. Example: Product Catalog
   Must search across all products

7. What is SolrCloud?
   A subset of optional features in Solr that enables and simplifies horizontal scaling of a search index using sharding and replication.
   Goals: scalability, performance, high availability, simplicity, and elasticity

8. Terminology
   • ZooKeeper: distributed coordination service that provides centralised configuration, cluster state management, and leader election
   • Node: JVM process bound to a specific port on a machine
   • Collection: search index distributed across multiple nodes with the same configuration
   • Shard: logical slice of a collection; each shard has a name, hash range, leader, and replication factor. Documents are assigned to one and only one shard per collection using a hash-based document routing strategy
   • Replica: a copy of a shard in a collection
   • Overseer: a special node that executes cluster administration commands and writes updated state to ZooKeeper. Automatic failover and leader election.
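The hash-based document routing mentioned under "Shard" can be sketched in a few lines. This is a toy model for illustration only: Solr's compositeId router actually uses MurmurHash3 over signed 32-bit ranges, while this sketch uses MD5 over an unsigned ring.

```python
import hashlib

def shard_for(doc_id: str, num_shards: int) -> int:
    """Map a document id to exactly one shard by hashing it onto a
    32-bit ring divided into equal contiguous ranges, one per shard.
    Toy model: Solr's compositeId router uses MurmurHash3, not MD5."""
    h = int(hashlib.md5(doc_id.encode("utf-8")).hexdigest(), 16) % (1 << 32)
    range_size = (1 << 32) // num_shards
    return min(h // range_size, num_shards - 1)

# The same id always hashes to the same shard, so an update overwrites
# the earlier version of the document instead of duplicating it.
assert shard_for("product-42", 4) == shard_for("product-42", 4)
```

Because the mapping is deterministic, any node (or client) can route a document to the right shard without asking the shard leaders first.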
9. Collection with 2 shards across 4 nodes with replication factor 2
   [Diagram: four Jetty nodes (ports 8983-8986), each running the Solr webapp with a "logstash4solr" core. Shard1 has its leader on node 1 and a replica on node 2; shard2 has its leader on node 3 and a replica on node 4. A 3-node ZooKeeper ensemble provides leader election and centralised configuration management. Clients use HTTP APIs (XML/JSON/CSV/PDF) from Java/Ruby/Python/PHP. Millions of documents, millions of users.]

10. “The limits of the possible can only be defined by going beyond them into the impossible” — Arthur C. Clarke

11. The curious case of multi-tenant platforms
    • Multi-tenant platform for storage and search
    • Thousands of tenant applications
    • Each tenant application has millions of users

12. One SolrCloud collection per tenant
    • Searches are specialised to a user’s data or the tenant application’s dataset
    • Some tenants create a lot of data, others very little
    • Some use CPU-intensive geo-spatial queries, some just perform simple full-text searches and sorting
    • Some are write-heavy, others read-heavy
    • Some have text in a different natural language

13. Measure and optimise
    • Analyze and find missing features
    • Set up a performance testing environment on AWS
    • Devise tests for stability and performance
    • Find bugs and bottlenecks and fix ’em

14. Problem #1: Cluster state and updates
    • The SolrCloud cluster state has information about the collections, their shards and replicas
    • All nodes and (Java) clients watch the cluster state
    • Every state change is notified to all nodes
    • Limited to (slightly less than) 1MB by default
    • On a 100-node cluster, a single node bounce triggers a few hundred watcher fires and pulls from ZK (three state transitions: down, recovering, active)

15. Solution - Split cluster state and scale
    • Each collection gets its own state node in ZK
    • Nodes selectively watch only the states of collections they are a member of
    • Clients cache state and use smart cache updates instead of watching nodes
    • http://issues.apache.org/jira/browse/SOLR-5473
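The effect of splitting the cluster state (SOLR-5473) can be illustrated with a toy registry. All class and method names here are invented for illustration: the point is that when each collection has its own state node and nodes watch only the collections they host, an update notifies a handful of nodes instead of the whole cluster.

```python
class MockStateRegistry:
    """Toy model of per-collection cluster state (SOLR-5473): each
    collection has its own state entry, and a node watches only the
    collections it hosts. Names are invented for illustration."""
    def __init__(self):
        self.watchers = {}  # collection name -> set of watching nodes

    def watch(self, node, collection):
        self.watchers.setdefault(collection, set()).add(node)

    def update(self, collection):
        # Return the nodes that would be notified of this state change.
        return sorted(self.watchers.get(collection, set()))

reg = MockStateRegistry()
reg.watch("node1", "tenant_a")
reg.watch("node2", "tenant_a")
reg.watch("node3", "tenant_b")
# With a single shared cluster state, all three nodes would be woken up
# by any change; with split state, only tenant_a's two hosts are.
assert reg.update("tenant_a") == ["node1", "node2"]
```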
16. Problem #2: Overseer performance
    • Thousands of collections create a lot of state updates
    • The Overseer falls behind, and replicas can’t recover or can’t elect a leader
    • Under high indexing/search load, GC pauses can cause the Overseer queue to back up

17. Solution - Improve the Overseer
    • Harden the Overseer code against ZooKeeper connection loss (SOLR-5325)
    • Optimise polling for new items in the Overseer queue (SOLR-5436)
    • Dedicated Overseer nodes (SOLR-5476)
    • New Overseer Status API (SOLR-5749)
    • Asynchronous execution of collection commands (SOLR-5477, SOLR-5681)
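A minimal sketch of the asynchronous-execution idea (SOLR-5477, SOLR-5681), with invented class and method names: the caller gets a request id back immediately and polls for completion, so a slow command neither blocks the client connection nor holds up the queue behind it. (In real Solr the client supplies an async request id and polls a request-status call on the Collections API.)

```python
import itertools

class MockOverseer:
    """Toy model of asynchronous collection commands. All names here
    are invented; this only illustrates the submit-then-poll pattern."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._status = {}

    def submit(self, command):
        # Enqueue the command and return a request id immediately,
        # instead of holding the caller's connection open until done.
        rid = str(next(self._ids))
        self._status[rid] = "submitted"
        return rid

    def run_pending(self):
        # Worker loop: process every queued command.
        for rid in list(self._status):
            if self._status[rid] == "submitted":
                self._status[rid] = "completed"

    def request_status(self, rid):
        return self._status[rid]

overseer = MockOverseer()
rid = overseer.submit("SPLITSHARD")
assert overseer.request_status(rid) == "submitted"  # returned instantly
overseer.run_pending()
assert overseer.request_status(rid) == "completed"
```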
18. Problem #3: Moving data around
    • Not all users are born equal: a tenant may have a few very large users
    • We wanted to be able to scale an individual user’s data, maybe even as its own collection
    • SolrCloud can split shards with no downtime, but it only splits in half
    • No way to ‘extract’ a user’s data to another collection or shard

19. Solution: Improved data management
    • Shards can be split on arbitrary hash ranges (SOLR-5300)
    • Shards can be split by a given key (SOLR-5338, SOLR-5353)
    • A new ‘migrate’ API to move a user’s data to another (new) collection without downtime (SOLR-5308)
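The flexibility these changes add can be shown with a toy range split. This is only a sketch of the idea behind arbitrary-range splitting; Solr's actual shard ranges are signed 32-bit hash values and the split is performed by the SPLITSHARD command, not by client code like this.

```python
def split_range(start, end, at):
    """Split the hash range [start, end] of a shard at an arbitrary
    point `at`, not just the midpoint (the pre-SOLR-5300 behaviour)."""
    assert start <= at < end, "split point must fall inside the range"
    return (start, at), (at + 1, end)

# Carve out the sub-range holding one large tenant's documents so it
# can be moved to its own shard, leaving the rest of the range intact.
rest, hot = split_range(0x00000000, 0x7FFFFFFF, 0x60000000)
assert rest == (0x00000000, 0x60000000)
assert hot == (0x60000001, 0x7FFFFFFF)
```

Splitting by key is the same operation with `at` chosen as the boundary of the hash range that the route key occupies.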
20. Problem #4: Exporting data
    • Lucene/Solr are designed for finding top-N search results
    • Trying to export the full result set brings down the system due to high memory requirements as you page deeper

21. Solution - Distributed deep paging
    New ‘cursorMark’ feature for deep paging (SOLR-5463)
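The idea behind cursorMark can be shown with a toy in-memory model: instead of an offset that forces every node to collect and discard all preceding results, the client passes back the sort position of the last document it saw. (In real Solr the cursor encodes the last document's sort values and requires a unique tiebreaker field in the sort; here each document string is its own unique sort key.)

```python
def cursor_page(docs, cursor, rows):
    """Return one page of `rows` results sorting after `cursor`, plus
    the cursor for the next page. Toy model of cursorMark (SOLR-5463)
    over an already-sorted list of unique keys."""
    page = [d for d in docs if cursor is None or d > cursor][:rows]
    next_cursor = page[-1] if page else cursor
    return page, next_cursor

docs = sorted(f"doc{i:04d}" for i in range(10))
page1, c1 = cursor_page(docs, None, 3)   # first page: no cursor yet
page2, c2 = cursor_page(docs, c1, 3)     # resume after page1's last doc
assert page1 == ["doc0000", "doc0001", "doc0002"]
assert page2 == ["doc0003", "doc0004", "doc0005"]
```

With start=N paging, each shard must collect and merge N+rows candidates per request, so memory cost grows with depth; with a cursor, every page costs roughly the same, which is what makes exporting a full result set feasible.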
22. JVM Bugs!
    “The JVM is completely irresponsible and can only be killed with ‘kill -9’”
    – twitter.com/UweSays

23. “Testing scale” at scale
    • Performance goals: 6 billion documents, 4000 queries/sec, 400 updates/sec, 2 seconds NRT sustained performance
    • 5% large collections (50 shards), 15% medium (10 shards), 85% small (1 shard), with a replication factor of 3
    • Target hardware: 24 CPUs, 126G RAM, 7 SSDs (460G) + 1 HDD (200G)
    • 80% of traffic served by 20% of the tenants

24. How to manage large SolrCloud clusters
    • Developed the Solr Scale Toolkit
    • Fabric-based tool to set up and manage SolrCloud clusters in AWS, complete with collectd and SiLK
    • Backup/restore from S3. Parallel clone commands.
    • Open source!
    • https://github.com/LucidWorks/solr-scale-tk

25. Gathering metrics and analysing logs
    • LucidWorks SiLK (Solr + Logstash + Kibana)
    • collectd daemons on each host
    • RabbitMQ to queue messages before delivering them to Logstash
    • Initially started with Kafka but discarded it as overkill
    • Not happy with RabbitMQ (crashes/unstable); might try Kafka again soon
    • http://www.lucidworks.com/lucidworks-silk

26. Generating data and load
    • Custom randomized data generator (reproducible using a seed)
    • JMeter for generating load
    • Embedded CloudSolrServer (Solr Java client) using the JMeter Java Action Sampler
    • JMeter distributed mode was itself a bottleneck!
    • Not open source (yet) but we’re working on it!
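The toolkit's generator is not public, but the "reproducible using a seed" idea is simple to sketch. Field names and vocabulary below are made up for illustration; the point is that a private seeded RNG makes every run replayable.

```python
import random

def generate_docs(seed, n):
    """Reproducible randomized document generator: the same seed always
    produces exactly the same documents, so a load test that uncovers a
    bug can be replayed byte-for-byte. Fields here are invented."""
    rng = random.Random(seed)  # private RNG; doesn't touch global state
    words = ["solr", "cloud", "shard", "replica", "tenant", "overseer"]
    return [
        {"id": f"doc-{i}",
         "text": " ".join(rng.choice(words) for _ in range(6))}
        for i in range(n)
    ]

# Two runs with the same seed yield identical documents.
assert generate_docs(42, 100) == generate_docs(42, 100)
```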
27. Numb3rs
    • 30 hosts, 120 nodes, 1000 collections, 8B+ docs, 15000 queries/second, 2000 writes/second, 2 second NRT sustained over 24 hours
    • More than 3x the numbers our client needed
    • Unfortunately, we had to stop testing at that point :(
    • It turned out they had a 95-5 traffic ratio rather than an 80-20 ratio, so actual performance is even better :)
    • Our biggest cluster cost us just $120/hour :)

28. Not over yet
    • We continue to test performance at scale
    • Published an indexing performance benchmark; working on others
    • 15 nodes, 30 shards, 1 replica: 157,195 docs/sec
    • 15 nodes, 30 shards, 2 replicas: 61,062 docs/sec
    • http://searchhub.org/introducing-the-solr-scale-toolkit/

29. Our users are also pushing the limits
    https://twitter.com/bretthoerner/status/476830302430437376

30. Up, up and away!
    https://twitter.com/bretthoerner/status/476838275106091008

31. Not over yet
    • SolrCloud continues to be improved
    • SOLR-6220: Replica placement strategy
    • SOLR-6273: Cross data center replication
    • SOLR-5656: Auto-add replicas
    • SOLR-5986: Don’t allow runaway queries to harm the cluster
    • Many, many more

32. Questions?
    • Shalin Shekhar Mangar
    • shalin@apache.org
    • twitter.com/shalinmangar
    • meetup.com/Bangalore-Apache-Solr-Lucene-Group/
    • www.meetup.com/Bangalore-Baby-Apache-Solr-Group/
