Solbase & Real-time Activity
Solbase, the real time open-source search engine, is now available on github. Solbase was developed by Photobucket.com and is built upon Lucene, Solr and HBase. Photobucket has also recently released a real time community activity stream capturing the 4 million daily uploads as well as all of your friends' comments and favorite photos. The foundation of the system is HBase and also employs Kestrel queues. This talk will cover the architecture, implementation details and share many of the lessons learned while developing this real time big data system.

Speaker notes
  • We should go over the agenda and introduce each of the presenters
  • First, Koh is going to talk about Solbase.  That's our real-time search engine that was built on top of Lucene, Solr, and HBase.  We started presenting Solbase about 9 months ago, and at that time we reported that our standard implementation of Lucene/Solr was no longer scaling to meet our needs, and our initial tests of Solbase gave us hope that we were going to solve that problem AND dramatically improve performance.  In addition, we were updating our search index in real time.  Great results, but possibly the bigger news at that time was that we were planning to open source all the code.  Tonight Koh is here to deliver on that promise. The next topic we'll cover is another HBase feature developed at PB: our activity stream.  It's what you'd probably expect: a social network feature that distributes events about photos and videos in near real time.  We've seen a number of presentations on similar features, but rarely do you see any detail on the architecture or lessons learned that would help you build your own.  Ron and Josh are going to do exactly that. But before we jump into all that... why do you care?  Who is PB?
  • We're the biggest dedicated photo site on the web and we're right next door.   We have millions of active users and billions of photos.
  • Here's a quick slide on our size compared to our peers… it's a little old, but you get the idea.   We have millions of unique visitors.
  • Over time those users have contributed half a billion public photos and videos to our search index, and we generate a boatload of social events around all that public media.
  • Lucene's field cache for sorting and filtering became very problematic for us. Turnaround time for building the entire set of indices took us about a day. Every 100 ms improvement in response time equates to approximately 1 extra page view. It was impractical to add a significant number of new docs and data.
  • In a nutshell, Solbase replaces indices stored in the local filesystem with a database in HBase, and also overcomes Lucene's inherent limitations; one major one we solved is sort/filter.
  • Ron Here
  • Ron Here
  • Ron Here
  • Ron Here
  • Kestrel is open source and developed at Twitter.
  • Talk about scale and real-time processing speed. Ops per second: one thread pushes 40/s all the way to HBase.
  • Josh Here. HBase is a distributed, Bigtable-like database built upon Hadoop components; it leverages HDFS, Hadoop's distributed file system. Built upon Hadoop, it scales to a massive size, virtually limitless, and is used by many large-scale companies: Facebook, Yahoo, Google (through their Bigtable implementation). Ask who has used HBase. To fix: 1. Features: column store; key/value store with semi-structured values. 2. Why use HBase? Horizontal scalability, high write throughput, millions of columns / billions of rows.
  • HBase consists of master nodes with a set of region servers to distribute the data. The master is the gateway interface that directs clients to the proper region server for the requested data. Data is replicated among several data nodes by Hadoop's file system, HDFS. There is 'locational affinity' between the region server and the data it serves.
  • Each table consists of a row key, a set of defined column families, and an arbitrary number of qualified columns for each family. Keys are stored lexicographically so that range scans between two keys are extremely fast. All data is binary. Interestingly, this is similar to the concept of the inverted index, where the terms are lexicographically stored; this is something we leverage in our implementation.
  • Mention using lexicographical key to pre-sort data.
  • Get: single-row access, similar to an SQL query by primary key. Put: single-row update/insert (can be done in batch). Scan: lexicographic range query between two specified keys.
  • Back to Ron. HBase optimization: scans continue to be fast; large multi-gets have been an issue.
  • Transcript

    • 1.  
    • 2.
        • Doug McCuen - Director of Engineering
        • Ron White - Senior Software Engineer
        • Josh Hollander - Senior Software Engineer
        • Kyungseog Oh - Senior Software Engineer
      Who we are
    • 3. Photobucket Solbase Activity Stream Agenda
    • 4.
        • Photobucket is the most-visited photo site with 23.4 Million UVs
        • Over 9 Billion photos stored!
        • Users upload 4 Million images per day!
        • Photobucket users spend more time on site than users of any other photo site, with 3.8 avg mins/visit
        • 2.0 Million avg daily visitors - more daily visits than Flickr and Picasa combined
      Sources: 1comScore May 2011, 2Internal data
      Photobucket Overview
    • 5. 23.4M UVs 9.9M UVs 9.5M UVs 7.9M UVs 1.6M UVs 19.7M UVs 6.0M UVs
    • 6.
        • Upload
          • 4M images/videos upload per day
        • Search
          • Over 30M requests per day
        • Social Activity
          • 20k "Likes"/day
          • 5k comments/day
          • 10k "Follows"/day
      Sources: 1comScore May 2011, 2Internal data Photobucket Stats
    • 7. Solbase is an open-source, real-time search platform based on Lucene, Solr and HBase built at Photobucket What is Solbase?
    • 8.
        • Memory Issue
        • Indexing time
        • Speed
        • Capacity
      Why Solbase?
    • 9.
        • Overcame Lucene's inherent limitations (memory issues) with embedded sort/filter fields
        • Replaced Lucene index file with distributed database, HBase
        • Moved initial indexing process to map/reduce framework for faster processing time
        • Provided Real time indexing capability
      Summary of what we did
    • 10.
        • Average query time for native Solr/Lucene: 169 ms
        • Average query time for Solbase: 109 ms or 35% decrease
        • Term ‘me’ has ~14M docs
          • ‘me’ takes 13 seconds to load from HBase, 500 ms from term vector cache
        • Most terms not in cache take < 200 ms
        • Most cached terms take < 20 ms
        • ~300 real-time updates per second
      Results
    • 11.
        • Geo-search
        • Other data products within Photobucket, outside of search, as a general query engine for large data sets
      Next Steps
    • 12. https://github.com/Photobucket/Solbase https://github.com/Photobucket/Solbase-Solr https://github.com/Photobucket/Solbase-Lucene Solbase repos
    • 13. Activity Stream is a social networking feature built at Photobucket using HBase, Flume, Kestrel and Camel What is Activity Stream?
    • 14.
        • Somebody you follow:
          • Uploads new photos or videos
          • Comments on media
          • Likes media
        • Somebody follows you
        • Somebody likes your content
        • Somebody comments on your media
      Activity Events
    • 15. Activity Events Rendered
    • 16.
        • Difficult Problem
          • Especially with "real-time" requirements
        • Options
          • Fan-in
            • Very slow
          • Scatter-gather
            • Used by Facebook
            • Parallelized
          • Fan-out
            • Simpler Engineering
            • Massive amounts of data (very de-normalized)
      Delivering Activities
    • 17.
        • Flume & Kestrel – how we collect user activity
        • Processor – how we fan out that activity to other users
        • Query service – providing this data back to our php front end
        • HBase – how we store tons of data
      Discussion Overview
    • 18. Activity Collection
    • 19.
        • Flume
          • Part of Hadoop Stack
          • Distributed Real-time Log processing tool
          • Collects logs written by php web servers
        • Kestrel
          • Open source, developed at Twitter
          • Fast, reliable, durable queue
          • Horizontally scalable to infinity 
          • Not strongly ordered
          • Memcache protocol
      Flume & Kestrel
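Because Kestrel speaks the memcached text protocol, any memcached client can enqueue and dequeue. As a rough sketch of what goes over the wire (the queue name "activity" and the JSON payload are hypothetical examples, not the deck's actual message format, and the payload is assumed ASCII so character length equals byte count):

```java
// Toy illustration of Kestrel's memcached-style wire protocol.
public class KestrelWire {
    // An enqueue is a memcached SET: "set <queue> <flags> <expiry> <bytes>"
    // followed by the payload; Kestrel treats the key as the queue name.
    static String enqueue(String queue, String payload) {
        return "set " + queue + " 0 0 " + payload.length() + "\r\n"
             + payload + "\r\n";
    }

    // A dequeue is a memcached GET on the queue name.
    static String dequeue(String queue) {
        return "get " + queue + "\r\n";
    }
}
```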
    • 20.
        • Camel
          • Enterprise Integration Patterns (EIP) framework
        • Fanout Processor
          • Receive a message from queue
          • Six different processors
            • Easily configured with Camel
          • Writes copy of the activity for all users interested in that event
      Fanout Processor & Camel
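The fan-out step itself is simple once Camel has routed the message to a processor: write one copy of the activity per interested user. A minimal plain-Java sketch of that logic (the event fields and the follower list are hypothetical stand-ins for the real message schema and the social-graph lookup):

```java
import java.util.ArrayList;
import java.util.List;

// Toy fan-out: one incoming activity event becomes one stored copy per
// interested follower (the de-normalization the slide mentions).
public class Fanout {
    static List<String> fanOut(String actor, String type, long mediaId,
                               List<String> followers) {
        List<String> rows = new ArrayList<>();
        for (String follower : followers) {
            // In the real system each copy would be an HBase Put keyed by
            // the follower's row key; here it is just a formatted string.
            rows.add(follower + ":" + actor + ":" + type + ":" + mediaId);
        }
        return rows;
    }
}
```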
    • 21.
        • HBase/PHP adapter
          • Provides a simple service interface for PHP web servers
        • Caching - In Memory
          • Consistent hashing load balancer
          • No Serialization/Deserialization penalty
        • Custom Rollup Logic
      Query Service
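The consistent-hashing load balancer mentioned above can be sketched as a sorted ring of node hashes: a key maps to the first node at or after its own hash, wrapping around. The node names and the use of String.hashCode are illustrative assumptions; a production ring would use a stronger hash plus virtual nodes:

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring: each cache node owns the arc of hash
// space up to its position, so adding/removing a node only remaps keys
// on that arc instead of rehashing everything.
public class HashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    void addNode(String node) {
        ring.put(node.hashCode() & 0x7fffffff, node);   // non-negative hash
    }

    String nodeFor(String key) {
        int h = key.hashCode() & 0x7fffffff;
        SortedMap<Integer, String> tail = ring.tailMap(h);
        // Wrap around to the first node if no node hashes at or after h.
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }
}
```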
    • 22.
        • Throughput
          • 40 events/sec per processor thread
          • nominal load of 5/sec
        • Latency
          • Users typically see new activity within 1 second of event
          • Delete events slower
        • Responsiveness
          • 90% < 1 sec query time
          • average response 301ms
      Performance 
    • 23.
      • HBase is:
        • Based on Google's Big-Table
          • "distributed, versioned, column-oriented store"
          • persistent, sorted, multidimensional map
        • Pure-java implementation
        • Built on top of Hadoop
          • HDFS for storage
          • Zookeeper
        • Used extensively at Facebook, Yahoo and StumbleUpon, among others.
      What is HBase?
    • 24.
        • Highly horizontally scalable
          • We store huge amounts of user activity data
          • Fanout implies duplication of that data
          • Need to be able to expand storage/servers easily
        • High write throughput
          • Our users generate a lot of activity very quickly.
          • Want fanout to be near realtime
        • "Millions of columns, Billions of rows"
      Why HBase?
    • 25. Hadoop/HBase Architecture
    • 26. Schema: {row key 1 {column family 1 {column 1 {data 1}, column 2 {data 2} …} …}} {row key 2 {…}} Example: {dog:spotty {owner {matt {age 41}, linda {age 41}} vaccinations {rabies {july 2011}}} {cat:fluffy {owner {doug {age 41}, heather {age 41}} vaccinations {rabies {june 2011}}} HBase Tables
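The nested structure on this slide is literally a map of maps: row key to column family to qualifier to value. A toy mirror of the dog:spotty example (all values are really byte[] in HBase; strings are used here only for readability):

```java
import java.util.Map;

// HBase's logical model: sorted map of
//   row key -> (column family -> (column qualifier -> value)).
public class SchemaExample {
    static Map<String, Map<String, Map<String, String>>> table() {
        return Map.of(
            "dog:spotty", Map.of(
                "owner", Map.of("matt", "age 41", "linda", "age 41"),
                "vaccinations", Map.of("rabies", "july 2011")),
            "cat:fluffy", Map.of(
                "owner", Map.of("doug", "age 41", "heather", "age 41"),
                "vaccinations", Map.of("rabies", "june 2011")));
    }
}
```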
    • 27.
        • Key
          • salted userid + inverted timestamp
          • Scan by userid from timestamp 0 to timestamp fffffff
        • Each Activity has up to 6 items with similar data
            • Traditionally would be normalized into another table
            • No joins in HBase
            • We use multiple column families each with the same column schema
      Our Schema Design
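That key scheme can be sketched as follows (the 2-hex-digit salt and the exact formatting are assumptions, not Photobucket's actual layout): the salt spreads users across regions, and subtracting the timestamp from Long.MAX_VALUE makes newer activity sort lexicographically first, so a forward scan returns newest-first:

```java
// Toy row-key builder: salted userid + inverted timestamp.
// A scan over one user's key prefix, from all-zero to all-f timestamp
// bytes, then walks that user's activity newest-first.
public class ActivityKey {
    static String rowKey(long userId, long timestampMillis) {
        long salt = userId % 256;                         // spread across regions
        long inverted = Long.MAX_VALUE - timestampMillis; // newest sorts first
        // Fixed-width hex keeps lexicographic order == numeric order.
        return String.format("%02x%d:%016x", salt, userId, inverted);
    }
}
```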
    • 28.
        • Get
          • Single row query
        • Put
          • Single row update/insert
        • Delete
        • Scan
          • Range query between start and end keys
          • Can be filtered by column data filters
        • Batched operations (GET, PUT, DELETE)
          • Actions across multiple regions are parallelized 
        • HBase Abstraction
          • Built JDBC-template-like HBase wrapper classes
      HBase Client API
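These primitives can be mimicked with a sorted map. This toy in-memory analogue (a sketch, not the real hbase-client API) shows why lexicographically ordered keys make Scan a cheap range query:

```java
import java.util.NavigableMap;
import java.util.SortedMap;
import java.util.TreeMap;

// Toy in-memory analogue of HBase's Get/Put/Delete/Scan semantics.
// A TreeMap keeps keys in lexicographic order, so Scan is just a
// subMap between start (inclusive) and stop (exclusive) keys.
public class MiniHBase {
    private final NavigableMap<String, String> rows = new TreeMap<>();

    void put(String rowKey, String value) { rows.put(rowKey, value); }   // Put
    String get(String rowKey)             { return rows.get(rowKey); }   // Get
    void delete(String rowKey)            { rows.remove(rowKey); }       // Delete

    // Scan: range query between start and stop keys.
    SortedMap<String, String> scan(String start, String stop) {
        return rows.subMap(start, stop);
    }
}
```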
    • 29.
        • Kestrel configuration
          • Pre-define queues in the config
        • Threading issues
          • HBase/Kestrel use many threads & connections
          • Set high limits for nprocs & nofiles
        • Million Follower Problem
          • Chunk large batch operations
          • Limits to Abstraction
      Challenges
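"Chunk large batch operations" above amounts to slicing a huge follower list into bounded batches before issuing the puts, so no single HBase batch call balloons when one user has a million followers. A minimal sketch (the chunk size is an arbitrary caller choice, not a tuned value from the deck):

```java
import java.util.ArrayList;
import java.util.List;

// Slice a large batch (e.g. a million followers to fan out to) into
// bounded chunks so each HBase batch operation stays a manageable size.
public class Chunker {
    static <T> List<List<T>> chunks(List<T> items, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            out.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return out;
    }
}
```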
    • 30.
        • Hardware configuration
          • Don't RAID
          • Dedicated cluster switch
        • Replication
          • Still in Beta
          • Needed for Disaster Recovery
          • Worked through several issues
        • Hot Regions
          • User activity is not well distributed
        • Manual Region Splitting & Major Compaction
        • Garbage collection
          • HBase memory hog
      HBase Challenges
    • 31. http://www.cloudera.com/resource/hadoop-world-2011-presentation-slides-advanced-hbase-schema-design http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/ References
    • 32. Q&A