Solbase

Solbase is an exciting new open-source real-time search system being developed at Photobucket.com to service the over 30 million daily search requests Photobucket handles. Solbase is based upon Lucene, Solr and HBase.

  • Really excited to learn about this new open source framework. Congrats team.
  • @bcoverston We started off of the Lucandra project to get an idea of how a Lucene index would work in a NoSQL database. However, since we started coding we have moved far away from Lucandra. Solbase leans heavily on the HBase scan API for efficient query performance. It also has a write-through cache for term vectors with smart byte manipulation for garbage collection reasons. We've also had to modify Lucene and Solr to overcome some of their inherent memory issues.

    I guess we can give some credit to the Lucandra project. However, at this point we feel Solbase has become its very own open source project and is far from a straight port.

    I have updated the slides to include links to the GitHub repos for Solbase, so check it out and see what you think.
  • Looks like a straight port from Solandra to me. Would be nice if you gave Jake Luciani and the Solandra project some credit.
  • open source search engine with solr and hbase. Amazing stuff!
  • MATT
  • Simply put, Solbase is an open-source, real-time search platform based on Lucene, Solr and HBase
  • Hello, I’m Matt Green, Principal Engineer at Photobucket, and I’m here with my colleague, Koh Oh
  • With the explosion in photo taking, new product offerings launch daily, some with similar or extended qualities and features. Photobucket is still the largest, but has suffered in brand recognition and purpose. We are now seeing last year's wave of new entrants struggling to get users; they frequently reach out to us with acquisition requests
  • Why are we so interested in search? Search is X% of total page views at Photobucket, comprising around Y million requests per day. We're only indexing 500 million images because we only index the public ones with metadata, and because of the limitations of the previous architecture. That comes out to about 400 requests per second, sometimes more, sometimes less. So let's do a little background on search
  • The simplest analogy is that an inverted index is much like the index of a book: a lexicographically ordered set of 'terms' (words, in a book index) with an ordered list of documents (pages, to follow the book analogy) that pertain to each term.
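The book-index analogy can be sketched in a few lines of Python. This is an illustrative toy, not Solbase code: each term maps to the sorted list of document ids that contain it.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: {doc_id: text}. Returns {term: sorted list of doc ids}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "red sweater", 2: "blue pants", 3: "red pants"}
index = build_inverted_index(docs)
# index["pants"] -> [2, 3]; index["red"] -> [1, 3]
```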
  • Unlike a book index, we want more information than just where the terms occur; we want to know how relevant each term is for each document, so that we can order results by relevance. There are many scoring techniques, but the one we use, the default methodology for Lucene, is tf-idf. tf: the more frequently a term occurs in a document, the greater its score. idf: the more documents a term occurs in, the lower its score. Normalization: a term matched in a document with fewer total terms scores higher
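The three factors can be written out directly. This is a simplified sketch of the formulas quoted on the scoring slide (tf = sqrt of count, idf = log of a document ratio, norm = inverse sqrt of document length), not Lucene's exact Similarity implementation:

```python
import math

def tf(term_freq):
    # term frequency component: sqrt of the raw count in one document
    return math.sqrt(term_freq)

def idf(total_docs, doc_freq):
    # inverse document frequency: rarer terms score higher
    return math.log(total_docs / doc_freq)

def length_norm(total_terms_in_doc):
    # shorter documents weigh each matched term more heavily
    return 1 / math.sqrt(total_terms_in_doc)

def score(term_freq, total_docs, doc_freq, total_terms_in_doc):
    return tf(term_freq) * idf(total_docs, doc_freq) * length_norm(total_terms_in_doc)
```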
  • KOH: Lucene uses the same basic search concepts Matt has been talking about: an inverted index and a scoring system. On top of that, it includes parsers, analyzers and indexers, so out of the box you have a full-featured text search engine.
  • Index files are stored on the local file system, and Lucene relies heavily on OS disk caching. When adding new docs, Lucene suggests optimizing indices after all of the new docs have been added to the existing ones. In a nutshell, Solbase swaps out the locally stored index files for HBase
  • Solr sits on top of Lucene, leveraging Lucene's core search features. It also has rich features like faceting (categorizing docs), distributed search, an update API, geo search, etc. At Photobucket, we built our very first search infrastructure on Lucene alone. As users uploaded more and more photos to our site, our index size grew too big; a standalone Lucene search server wasn't able to handle all of the media we indexed. That's when we introduced Solr for its distributed search feature.
  • We use nginx and a Squid cache to increase our cache hit ratio and for load balancing
  • Since the number of media items we have to index grows daily, we need a scalable data store we can put all of these indexable items into. We know that HBase is capable of storing terabytes and petabytes of data easily. It is also used by many large-scale companies: Facebook, Yahoo, and Google (through their Bigtable implementation), which is always a good sign.
  • Scan: a lexicographic range query between two specified keys. We leverage lexicographic order as we build the index, so we can scan a given term quickly because its rows live in the same region
  • MATT: Why Solbase? The existing Photobucket architecture was using Lucene and Solr and was handling the volume, so why create a new platform? Four main reasons: memory issues with the existing platform, indexing time, speed and capacity, and flexibility
  • Lucene's field cache for sorting and filtering became very problematic for us. Each field is 'cached' in an array allocated to the size of the maximum document ID. As an example, our current document space is 500 million documents. Each sortable or filterable field (e.g. document creation date) is an integer, so an array is created with 500 million integers, or 2 gigabytes. Since it is a weak reference, as other usage in the system grows and gets close to the max VM memory size, the array is garbage collected (which is expensive). But the next sorted search (all of Photobucket's searches are sorted) immediately reallocates and repopulates the array from the index. This backs up all requests and creates a vicious cycle of large allocations and garbage collection, until the VM eventually melts down
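The 2 GB figure follows directly from the array sizing described above: one 4-byte Java int per document, for a single sortable/filterable field.

```python
# Back-of-the-envelope for the field cache example
docs = 500_000_000           # current document space
bytes_per_int = 4            # one Java int per document per field
field_cache_bytes = docs * bytes_per_int
# field_cache_bytes == 2_000_000_000, i.e. ~2 GB per sortable/filterable field
```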
  • Every 100 ms improvement in response time equates to approximately 1 extra page view per visit, which ends up being hundreds of millions of extra page views per month. Even a small speed improvement can equal a large number of page views, and hence money. Lucene is designed and architected around indices stored in large files. Its caching mechanism (other than the query cache) relies on the operating system to keep the files cached in memory. When memory is low, the OS evicts the files from memory, creating potentially serious performance degradation. There are other solutions, such as caching filters, but none we found compelling for real time
  • Example: search by user album, file size, type, image dimensions, geo, etc
  • Modify Lucene and Solr to use HBase as the source of index and document data. We wanted to continue to use the pieces of Lucene and Solr that were working great for us, such as the analyzer, the parser, and the sharding architecture. Originally, we thought we could subclass Lucene and Solr to plug in the behavior we wanted. This quickly grew out of control, and we ended up forking our own custom versions of Lucene and Solr.
  • The first thing we had to do was come up with a performant way of storing the indices in HBase. As we mentioned before, a Bigtable-like database shares some conceptual similarity with an inverted index: it has a set of ordered keys that can be quickly scanned to access ordered data. Our first table is the Term/Document table. It consists of a set of rows where each row specifies a term/document combination and its accompanying metadata. The variable-length string portions of the key (the field and the term) are delimited by the four-byte 0xffffffff delimiter. All data is encoded in the key; there are no column families or columns. Through experimentation we concluded that this was the fastest and most efficient implementation. The other possibility was a field/term key with a column family of 'document', a column qualifier of the document id, and the metadata as the column data. That would be a more 'canonical' use of a Bigtable-like database, but due to the way the HBase client works, it was not as efficient at our scale. We also tried other hybrids of the two layouts, including chunking; nothing was as good as key-only
  • The metadata starts with a byte wherein each bit indicates the existence of other metadata: a potential normalization byte, a potential 'positions' vector (numeric positions of the term within the document), an 'offsets' vector, and 5 sort/filter fields (8 bits in total). All integer metadata is compacted with variable-length integer encoding. It is possible for a Term/Document row to have none of this metadata; then the metadata is a single byte, all zeros. Long term we might change this to allow an arbitrary number of filter fields, but for now 5 were adequate. The reason we use the flag byte is to compact the data
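The existence-flag idea can be sketched as follows. This is a hypothetical illustration, not Solbase's actual byte layout: the bit positions and the simple length-prefixed vectors (standing in for the real variable-length integer encoding) are assumptions made for the example.

```python
# Illustrative flag-bit assignments (not Solbase's real layout)
HAS_NORM, HAS_POSITIONS, HAS_OFFSETS = 0x01, 0x02, 0x04

def encode_metadata(norm=None, positions=None, offsets=None):
    """Encode optional metadata behind a leading existence-flag byte."""
    flags = 0
    body = bytearray()
    if norm is not None:
        flags |= HAS_NORM
        body.append(norm)
    if positions is not None:
        flags |= HAS_POSITIONS
        body.append(len(positions))   # a real varint encoding would go here
        body.extend(positions)
    if offsets is not None:
        flags |= HAS_OFFSETS
        body.append(len(offsets))
        body.extend(offsets)
    return bytes([flags]) + bytes(body)

# A row with no metadata costs exactly one zero byte:
# encode_metadata() == b"\x00"
```

The payoff is the degenerate case: a term/document pair with no optional data pays for one zero byte instead of five empty sections.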
  • The Document table is simply a document id with a set of terms as columns
  • KOH: Now that we have set up our basic data layout, we'd like to query against our inverted index and fetch data out of it. Since all keys are already lexicographically ordered, if we can build begin and end keys for a given term, we can efficiently fetch data out of the HBase table. So when a term is requested by a query, an HBase scan is created for that term, using a start key of <field><delimiter><term><delimiter>0x00000000 and an end key of <field><delimiter><term><delimiter>0xffffffff. This retrieves all of the documents and associated metadata, and the HBase client allows this to be done in a cursored way. The information is then stored in a single byte array, which is used as the cached value of the query.
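The key construction can be sketched like this. The delimiter and key shape follow the slides; the scan itself is mocked with a sorted list rather than a real HBase client, so this is an illustration of the range-scan idea, not HBase API code.

```python
DELIM = b"\xff\xff\xff\xff"   # four-byte delimiter from the slides

def term_scan_range(field: bytes, term: bytes):
    """Build start/end keys bracketing every doc id for one field/term."""
    prefix = field + DELIM + term + DELIM
    return prefix + b"\x00\x00\x00\x00", prefix + b"\xff\xff\xff\xff"

def scan(sorted_keys, start, end):
    """Stand-in for an HBase scan: keys between start and end, inclusive."""
    return [k for k in sorted_keys if start <= k <= end]

def row_key(field, term, doc_id):
    return field + DELIM + term + DELIM + doc_id.to_bytes(4, "big")

rows = sorted([
    row_key(b"contents", b"beach", 2),
    row_key(b"contents", b"beach", 3),
    row_key(b"contents", b"sunset", 1),
])
start, end = term_scan_range(b"contents", b"beach")
# scan(rows, start, end) returns only the two 'beach' rows
```

Because the rows sort lexicographically, every doc id for the term falls between the two bracketing keys and nothing else does.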
  • In Solr, each instance has to have a physical index file on the local file system. Adding a new shard in Solr means you have to split the existing indices in half and distribute them across all of the sharding machines. In Solbase, indices are accessed via a central database, HBase, so we can easily divide the data set into smaller chunks using start and end keys. This allows us to create dynamic sharding, so to speak. For example, if there were 10 shards and the total number of docs in our indices were 100, the first shard would be responsible for documents 1-10, the second for 11-20, the third for 21-30, and so on. We are going to run 16 shards spread among 4 servers, with 4 clusters.
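The 10-shards-over-100-docs example above reduces to simple range arithmetic; a minimal sketch of that partitioning (illustrative only):

```python
def shard_ranges(total_docs, num_shards):
    """Inclusive (start, end) document-id ranges, 1-based, one per shard."""
    size = total_docs // num_shards
    ranges = []
    for i in range(num_shards):
        start = i * size + 1
        # last shard absorbs any remainder
        end = total_docs if i == num_shards - 1 else (i + 1) * size
        ranges.append((start, end))
    return ranges

# shard_ranges(100, 10) -> [(1, 10), (11, 20), ..., (91, 100)]
```

Adding a shard just means recomputing the ranges; no index files need to be physically split.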
  • The encoded metadata already uses a single byte to embed normalization, positions, and offsets. We had 5 extra bits left and turned them into flags indicating whether a term/document has sort or filter values. Each Term/Document data object got slightly bigger, but we have now solved Lucene's memory constraints for sort/filter queries. Plus, it is very fast: there is no lookup of a sort value for each document, since the sorts and filters are embedded in the index
  • (Skip if everyone knows.) Brief explanation of the map/reduce framework: a distributed processing architecture pioneered by Google. A large data set can easily be indexed initially using map/reduce: HDFS distributes the data, and the map/reduce framework processes it in a distributed manner. For real time, new data can be sent to Solbase via Solr's update API. In our use case, a cron job fetches delta data every so often, preprocesses it, and sends it over to Solbase. Solbase then has to update the cache object appropriately and finally store into the HBase index tables.
  • MATT: The cache is a single HashMap-based LRU cache. We have currently determined a per-shard max size of 2 million terms and 5 million documents. The cache is dynamically updated in place for speed. It has a per-term timer; when a term's version is found to be invalid, it is reloaded in background threads. It should be easily pluggable, and we have considered the possibility of using something like Ehcache
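The cache described above is, at its core, a size-capped map with least-recently-used eviction. A minimal sketch of that structure (Python's OrderedDict standing in for the Java HashMap plus LRU bookkeeping; the background reload and versioning logic are omitted):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, max_size):
        self.max_size = max_size
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value              # in-place update
        self.data.move_to_end(key)
        while len(self.data) > self.max_size:
            self.data.popitem(last=False)   # evict least recently used
```

Usage: with `max_size=2`, inserting a third entry evicts whichever of the first two was touched least recently.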
  • As mentioned earlier, the field cache was problematic; we had to extensively modify Lucene to add the embedded sorts and filters. Solr had a large inefficiency where it would fetch all docs up to the requested start point; for example, if the query requested results 100-120, Solr would fetch the first 99 documents unnecessarily, adding large overhead. We modified Solr to remove this. The HBase client is inefficient under heavy concurrency and became a serious bottleneck. We modified the client to add a socket pooling mechanism; this has been contributed back for HBase version 0.92
  • From running a true production load test, we got the following results. The largest term, 'me', has 14 million documents out of a total document space of 500 million and takes 13 seconds to load from HBase; from cache, 'me' has a response time of 500 ms. Average query time for native Solr/Lucene (including Squid): 169 ms. Average query time for Solbase: 109 ms, a 35% decrease. With a full production load, we are able to process approximately 300 real-time updates per second
  • Why Solbase? The existing Photobucket architecture was using Lucene and Solr and was handling the volume, so why create a new platform? Four main reasons: memory issues with the existing platform, indexing time, speed and capacity, and flexibility
  • A Facebook- and Twitter-like social network built around user media. An internal reporting tool leveraging the Bigtable implementation; it runs map/reduce daily to generate a daily report

Solbase Presentation Transcript

  • What is Solbase?
  • Who are we?
  • Discussion Overview
    – What and who is Photobucket
    – A brief overview of some relevant basic search concepts
    – A brief overview of Lucene, Solr and HBase
    – Solbase architecture and implementation
    – Where we're at, how we're doing
  • Photobucket Overview
    – Photobucket is the most-visited photo site with 23.4 million UVs
    – Over 9 billion photos stored!
    – Users upload 4 million images per day!
    – Photobucket users spend more time than on any other photo site, with 3.8 avg mins/visit
    – 2.0 million avg daily visitors: more daily visits than Flickr and Picasa combined
    Sources: comScore May 2011; internal data
  • [Chart: monthly unique visitors by photo site: 23.4M, 19.7M, 9.9M, 9.5M, 7.9M, 6.0M, 1.6M UVs]
  • Search at Photobucket
    – 40% of total page views
    – 500 million 'docs' (image metadata)
    – 120 gigabytes in size
    – 35 million requests per day
  • Terminology
    – Term: a searchable unit of language used as part of a search query, e.g. 'cosmonaut' or 'space traveler'
    – Doc or Document: the unit of searchable content; for Google it's a web page, for Photobucket it's an image or video
  • Search Concepts – Inverted Index
    'blue' : page 1, page 8, page 54, page 207
    'pants' : page 3, page 9, page 54, page 222
    'red' : page 8, page 7, page 22, page 54
    'sweater' : page 8, page 7, page 22, page 54
  • Search Concepts – Scoring
    – Term-frequency inverse-doc frequency, or tf-idf, scoring
      – tf (term frequency): sqrt(number of times a term appears in a document)
      – idf (inverse doc frequency): log(total number of documents / number of occurrences of a term in all documents)
    – Length normalization: 1/sqrt(total terms in a given document)
  • What is Lucene? Lucene is an open source, high-performance, full-featured text search engine library written in Java
  • What is Solr? Solr is an open source, high-performance enterprise search server
  • Previous Solr/Lucene Architecture
    [Diagram: web servers (nginx) in front of Squid caches, backed by Solr clusters, each cluster holding shards 1-4]
  • Why we chose HBase: HBase is a distributed, pure-Java, Bigtable-like database built upon Hadoop components
  • Why we chose HBase
    – Scan: range query between start and end keys
    – Keys already ordered lexicographically
    – Efficient to fetch index data for a given term
  • Why Solbase?
    – Memory issues
    – Indexing time
    – Speed
    – Capacity and scalability
  • Lucene Memory Issues
    – Field cache: sortable and filterable fields stored in a Java array the size of the maximum document number
    – Example: every doc is sorted by an integer field; for 500 million documents the array is 2 GB in size
  • Indexing Time
    – Previous architecture took 15-16 hours to rebuild the indices (full build, 500 million documents, 120 GB total size)
    – We wanted to provide near real-time updates
  • Speed
    – Every 100 ms improvement in response time equates to approximately 1 extra page view per visit
    – Can end up being hundreds of millions of extra page views per month
  • Capacity & Scalability
    – Impractical to add a significant number of new docs and data (geo, EXIF, etc.)
    – Difficult to divide the data set to create a brand new shard
    – Fault tolerance is not built in
  • The Concept: modify Lucene and Solr to use HBase as the source of index and document data
  • Data Layout – Term/Document Table
    <field><delimiter><term><delimiter><document id><encoded metadata>
    Example:
    <contents>0xffffffff<beach>0xffffffff<document id 2><encoded metadata>
    <contents>0xffffffff<beach>0xffffffff<document id 3><encoded metadata>
    <author>0xffffffff<luke>0xffffffff<document id 1><encoded metadata>
    <author>0xffffffff<luke>0xffffffff<document id 2><encoded metadata>
    <author>0xffffffff<luke>0xffffffff<document id 3><encoded metadata>
    …
  • Data Layout – Encoded Metadata in Term/Document Table
    <existence flag byte>(<normalization byte>)(<positions vector>)(<offsets vector>)(<sort/filter field 1>)(<sort/filter field 2>)(<sort/filter field 3>)(<sort/filter field 4>)(<sort/filter field 5>)
  • Data Layout – Document table
    – Key: document ID
    – Column family: field; column qualifier: term
  • Query Methodology – Term queries are HBase range scans
    Start key: <field><delimiter><term><delimiter><begin doc id> (0x00000000)
    End key: <field><delimiter><term><delimiter><end doc id> (0xffffffff)
  • Solbase – Distributed Processing
    [Diagram: Solbase sharding master (replacing the Solr sharding master) in front of shards, backed by HBase]
  • Solbase – Sorts & Filters
    – Extra bits in encoded metadata
    – Solved Lucene's sort/filter field cache issue
  • Solbase – Indexing Process
    – Initial indexing: leveraging the map/reduce framework
    – Real-time indexing: using Solr's update API
  • Solbase – Caching
    Simple HashMap-based LRU cache
    – Dynamically updated
    – Background reloading
    – Time-out settings
    – Pluggable
  • Solr to Solbase
    [Diagram: web servers in front of Solbase clusters (shards 1-4, 5-8, 9-12, 13-16), backed by an HBase cluster of region servers]
  • Summary of what we did
    – Replaced Lucene index files with a distributed database, HBase
    – Overcame Lucene's inherent limitations (memory issues) with embedded sort/filter fields
    – Moved the indexing process to the map/reduce framework for faster processing time
  • Summary of what we did
    – Provided real-time indexing capability
    – Built a more flexible/performant caching layer
    – Fixed Solr document fetching issue
    – Fixed HBase client socket handling
  • Results
    – Term 'me' takes 13 seconds to load from HBase, 500 ms from cache ('me' has ~14M docs, the largest term in our indices)
    – Most terms not in cache take < 200 ms
    – Most cached terms take < 20 ms
    – Average query time for native Solr/Lucene: 169 ms
    – Average query time for Solbase: 109 ms, a 35% decrease
    – ~300 real-time updates per second
  • Why Solbase?
    – Memory issues
    – Indexing time
    – Speed
    – Capacity and scalability
  • Next Steps
    – Geo-search
    – Other data products within Photobucket, outside of search, as a general query engine for large data sets
  • Other projects based on HBase
    – Activity Stream
    – Reporting Tool
  • Repo
    – https://github.com/Photobucket/Solbase
    – https://github.com/Photobucket/Solbase-
    – https://github.com/Photobucket/Solbase-
  • Q&A