A talk on two Elasticsearch topics, multitenancy and scalability, and the technical details of achieving both efficiently
2. ABOUT ME
• Founder at LogSentinel, an information security startup
• LogSentinel SIEM – product that indexes billions of logs with Elasticsearch
• https://techblog.bozho.net
• https://twitter.com/bozhobg
3. SCALABILITY AND MULTITENANCY
• Scalability – how to process millions (billions) of documents on multiple machines
• Multitenancy – how to have our system support multiple users/organizations while
segregating their data
• One can exist without the other
• Both are architectural and implementation tasks, not (just) work for Ops.
• "We'll push the data in whatever form and Ops will take care of the scaling"
4. ELASTICSEARCH BASICS
• “You know, for search”
• Indexing documents (document = anything)
• Full-text search and keyword search
• Allows for large clusters
• Licensing issues
5. USE-CASE: TIME-SERIES DATA
• Indexing events (logs, metrics, etc.)
• Wide-spread and widely applicable scenario
• Documents almost always have a timestamp
8. LIMITING FACTORS
• One shard shouldn’t be too large
• Ideally between 10 and 50 GB; otherwise recovery after failure may not work
• The number of shards on a node is limited by RAM
• Lucene segments are write-once (immutable); new data only creates new segments
• A large number of segments reduces performance
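The 10-50 GB guidance above translates into a simple sizing calculation when creating an index. A minimal sketch; the 30 GB midpoint target is an illustrative assumption, not an official Elasticsearch number:

```python
import math

def primary_shard_count(expected_index_gb, target_shard_gb=30.0):
    """Pick enough primary shards to keep each one near the target size."""
    return max(1, math.ceil(expected_index_gb / target_shard_gb))

print(primary_shard_count(10))   # small index -> 1 shard
print(primary_shard_count(300))  # 300 GB -> 10 shards of ~30 GB each
```

The count matters because the number of primary shards is fixed at index creation and cannot be changed on a running index.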
9. MULTITENANCY
• Cluster-per-tenant
• Heavy on administration
• No real multitenancy
• Expensive
• Index-per-tenant
• Also heavy on administration
• Doesn’t scale well
• Tenant-based routing
• Recommended in most cases
10. TENANT-BASED ROUTING
• _routing=<tenantId> or _routing=<tenantOwnedResourceId>
• E.g., userId or dataSourceId
• The routing parameter designates which shard the document is stored on
• _routing for search requests tells Elasticsearch where to look for the data =>
faster search
• shard_num = hash(_routing) % num_primary_shards
• mappings._routing.required: true
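The routing formula from the slide can be sketched in a few lines. Elasticsearch actually hashes the routing value with murmur3; the md5-based hash below is only a stable stand-in to illustrate `shard_num = hash(_routing) % num_primary_shards`:

```python
import hashlib

NUM_PRIMARY_SHARDS = 5  # fixed at index creation

def shard_for(routing, num_primary_shards=NUM_PRIMARY_SHARDS):
    # Stand-in for Elasticsearch's murmur3: derive a stable integer
    # from the routing value, then take it modulo the shard count.
    h = int.from_bytes(hashlib.md5(routing.encode()).digest()[:4], "big")
    return h % num_primary_shards

# All documents of one tenant land on the same shard, so a search
# routed by tenant only has to touch that single shard:
print(shard_for("tenant-42") == shard_for("tenant-42"))  # True
```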
11. STRUCTURE OF INDEXED DATA
• One field can have only one type
• The type is determined on index creation or on first indexed document with that
field
• User1 creates custom param “duration” of type string
• User2 wants to create “duration” of a numeric type -> error
• Solution: custom parameter hierarchies by type: params, numericParams,
dateParams, …
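The type-partitioned hierarchy above can be sketched as a small dispatch function: each custom parameter is routed into a type-specific sub-object, so two tenants can no longer force conflicting mappings onto the same field name. The bucket names follow the slide; the dispatch logic itself is an illustrative assumption:

```python
from datetime import datetime

def bucket_for(value):
    """Pick the type-specific sub-object a custom parameter belongs in."""
    if isinstance(value, datetime):
        return "dateParams"
    if isinstance(value, bool):
        return "params"      # assumption: treat booleans as generic params
    if isinstance(value, (int, float)):
        return "numericParams"
    return "params"

# User1's string "duration" and User2's numeric "duration" no longer collide:
doc = {"params": {}, "numericParams": {}, "dateParams": {}}
doc[bucket_for("120ms")]["duration"] = "120ms"   # -> params.duration
doc[bucket_for(120)]["duration"] = 120           # -> numericParams.duration
```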
12. SCALABILITY
• "We add more machines and it's good"?
• Recommended shard size (10-50 GB)
• We can’t change the number of primary shards of a running index
• Lucene segments are read-only:
• Deleting a document = expensive (only marked as deleted, purged on merge)
• Updating a document = expensive (a delete plus a reindex)
13. OPTIONS FOR STRUCTURING INDEXES
• We need a structure to allow indexing and searching in an arbitrarily large amount
of data
• One big, ever-growing index
• Convenient for small amounts of data, but faces all scalability problems
• Index-per-day / index-per-week / index-per-size
• Index-per-day-per-retention
• Rollover
• Deletion should be done by deleting whole indexes, not individual documents
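The time-based index options above boil down to a naming scheme. A minimal sketch of index-per-day, optionally encoding a retention class in the name (index-per-day-per-retention) so expired data is removed by dropping whole indexes; the exact naming convention is an illustrative assumption:

```python
from datetime import date

def index_name(base, day, retention_days=None):
    """Build a daily index name, optionally tagged with its retention class."""
    name = f"{base}-{day:%Y.%m.%d}"
    if retention_days is not None:
        name += f"-r{retention_days}"
    return name

print(index_name("logs", date(2020, 5, 1)))      # logs-2020.05.01
print(index_name("logs", date(2020, 5, 1), 30))  # logs-2020.05.01-r30
```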
14. MANY INDEXES FOR SEARCH, ONE FOR INDEXING
• One search query can be directed to many indexes based on an index alias
• Supporting one (or several) active indexes for ingesting documents
• All other indexes are read-only
• This solves the problem with:
• Growing data and growing size of shards
• Deleting old data
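The alias pattern above can be sketched as in-memory bookkeeping: one write alias points at the single active index, while a search alias fans out over every generation. Index and alias names are illustrative assumptions; in production this is done with Elasticsearch's alias API rather than a dict:

```python
aliases = {
    "logs-write":  ["logs-000002"],
    "logs-search": ["logs-000001", "logs-000002"],
}

def rollover(aliases, new_index):
    """Switch ingestion to a fresh index; the old active one becomes read-only."""
    aliases["logs-write"] = [new_index]
    aliases["logs-search"].append(new_index)

def expire(aliases, old_index):
    """Delete old data by dropping a whole index, never single documents."""
    aliases["logs-search"].remove(old_index)

rollover(aliases, "logs-000003")  # new active index
expire(aliases, "logs-000001")    # oldest generation aged out
```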
15. EFFECTIVE INDEXING
• In real time (problem: too many requests to Elasticsearch)
• Storing in a database and indexing with a batch job
• Message queue (complex to implement; we use Kafka)
• In-memory queue (might lose data)
• Batch-indexing when a given size or time threshold is reached
• Hybrid: bulk processing + database
• Quick indexing with in-memory queue + subsequent check based on the data in the database
• Avoid updates (an update = delete + insert)
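The batch-indexing bullet above can be sketched as a buffer that flushes as one bulk request when either a size or a time threshold is reached. This is an illustrative sketch, not LogSentinel's actual implementation:

```python
import time

class BulkBuffer:
    """Buffer documents in memory; flush on a size or age threshold."""

    def __init__(self, flush_fn, max_docs=1000, max_age_s=5.0):
        self.flush_fn = flush_fn      # e.g. sends one _bulk request
        self.max_docs = max_docs
        self.max_age_s = max_age_s
        self.docs, self.first_at = [], None

    def add(self, doc):
        if not self.docs:
            self.first_at = time.monotonic()
        self.docs.append(doc)
        too_big = len(self.docs) >= self.max_docs
        too_old = time.monotonic() - self.first_at >= self.max_age_s
        if too_big or too_old:
            self.flush()

    def flush(self):
        if self.docs:
            self.flush_fn(self.docs)
            self.docs, self.first_at = [], None

batches = []
buf = BulkBuffer(batches.append, max_docs=3)
for i in range(7):
    buf.add({"id": i})
buf.flush()  # flush the tail on shutdown
print([len(b) for b in batches])  # [3, 3, 1]
```

An in-memory buffer like this can lose data on a crash, which is why the hybrid approach pairs it with a subsequent check against the database.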
16. CONCLUSION
• Elasticsearch is easy to get running
• …and complex for scaling
• Changes to a production setup are hard
• We must not throw scalability and multitenancy tasks over the wall to Ops teams; they are
application problems
• Elasticsearch internals impose unintuitive limitations (“The law of leaky
abstractions”)