HBaseCon 2013: Using Apache HBase for Large Matrices

Presented by: Gokhan Capan, Dilisim


    Presentation Transcript

    • HBase for Dealing with Large Matrices
    • Who am I? Leads the data team at Dilisim; researcher at Anadolu University
    • Machine Learning Some big problems: classifying huge text collections, recommending to millions of users, predicting links in a social network
    • Recommender Systems Recommenders take large, sparse matrices as input. How would you feed in a millions × millions matrix?
    • Recommender Systems [slide figure: the input, an m × n matrix with m users as rows and n items as columns; cells hold ratings such as 3.00, with 0.00 for unobserved entries]
    • Recommender Systems State-of-the-art recommender systems learn large models: one factor vector for each user and item, and one parameter vector (over side information) for each user and item
    • Recommender Systems [slide figure: the m × n input matrix alongside the learned models, an m × k user model and an n × k item model]
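
    A scaled-down sketch of such an input with the mahout-math API (dimensions, indices, and ratings here are made-up illustration values; a real system would be millions by millions):

```java
import org.apache.mahout.math.Matrix;
import org.apache.mahout.math.SparseRowMatrix;

// Scaled-down sketch of the recommender input: an m x n ratings matrix
// where only the observed user-item ratings are stored. Dimensions and
// values are illustrative, not from the talk.
public class RatingsInput {
  public static void main(String[] args) {
    int m = 10000;  // users (millions in a real system)
    int n = 5000;   // items
    Matrix ratings = new SparseRowMatrix(m, n);

    // Observed ratings for user 12; all other cells stay implicit zeros.
    ratings.set(12, 0, 3.0);
    ratings.set(12, 9, 2.0);
    ratings.set(12, 4500, 4.0);

    // Random access to one user's ratings row.
    System.out.println(ratings.viewRow(12).getNumNondefaultElements()); // 3
  }
}
```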
    • Learning Process What does a machine learning algorithm need in order to work with that matrix?
    • Machine Learning - Techniques Batch learning: all parameters are updated once per iteration
    • Machine Learning - Techniques Batch learning: updates can be calculated in parallel using MapReduce (a SequenceFile might be enough as input)
    • Machine Learning - Techniques Batch learning: the output model should provide random access to rows
    • Machine Learning - Techniques Online learning: parameters are updated per training example
    • Machine Learning - Techniques Online learning: each update modifies one row, so random access is needed while learning (a minimal SGD sketch follows these slides)
    • Machine Learning - Techniques Online learning: the output model should provide random access to rows
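
    To make the online case concrete, here is a minimal sketch of one stochastic-gradient step for matrix factorization with mahout-math: each training example reads and writes exactly one user row and one item row, which is why random row access matters. The method name and hyperparameters are illustrative, not from the talk:

```java
import org.apache.mahout.math.Matrix;
import org.apache.mahout.math.Vector;

// One online (SGD) update for matrix factorization: the training example
// (user u, item i, rating r) reads and writes a single row of the user
// model and a single row of the item model.
public class OnlineStep {
  static void sgdUpdate(Matrix userModel, Matrix itemModel,
                        int u, int i, double r,
                        double learningRate, double lambda) {
    Vector p = userModel.viewRow(u);  // user factor vector, length k
    Vector q = itemModel.viewRow(i);  // item factor vector, length k

    double err = r - p.dot(q);        // prediction error on this example

    // Standard regularized SGD step on both factor vectors.
    for (int f = 0; f < p.size(); f++) {
      double pf = p.get(f);
      double qf = q.get(f);
      p.set(f, pf + learningRate * (err * qf - lambda * pf));
      q.set(f, qf + learningRate * (err * pf - lambda * qf));
    }
  }
}
```

    Note that viewRow returns a write-through view for Mahout's in-memory matrices; whether a persistent implementation writes through is up to that implementation.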
    • Deployment Process How do you decide to deploy a machine learning model in production?
    • Machine Learning - Deployment The usual process: experiment on a prototype; if it works well, deploy in production; if not, keep experimenting on the prototype
    • Machine Learning - Deployment How would you turn your prototype into production easily? A common matrix interface for the in-memory and persistent versions
    • HBase Backed Matrix Implements the Mahout Matrix interface; dense or sparse
    • HBase Backed Matrix Random access to cells, random access to rows, iteration over rows, and lazy loading while iterating
    • HBase Backed Matrix A common interface for prototype and product; easy to deploy, since the model is already persisted
    • HBase Backed Matrix Matrix operations come with the existing mahout-math library (a usage sketch follows)
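
    The payoff of the common interface: code written against org.apache.mahout.math.Matrix neither knows nor cares whether it gets an in-memory prototype matrix or the persistent HBase-backed one. A sketch; the HBase-backed names in the final comment are assumed, not taken from the repo's actual API:

```java
import org.apache.mahout.math.DenseMatrix;
import org.apache.mahout.math.Matrix;

// Scoring code written once against the Matrix interface serves both the
// in-memory prototype and the HBase-backed production model.
public class Scorer {
  private final Matrix userModel;  // m x k factor matrix
  private final Matrix itemModel;  // n x k factor matrix

  Scorer(Matrix userModel, Matrix itemModel) {
    this.userModel = userModel;
    this.itemModel = itemModel;
  }

  // Predicted score: dot product of the user and item factor vectors.
  double score(int user, int item) {
    return userModel.viewRow(user).dot(itemModel.viewRow(item));
  }

  public static void main(String[] args) {
    // Prototype: plain in-memory mahout-math matrices.
    Scorer prototype = new Scorer(new DenseMatrix(1000, 20),
                                  new DenseMatrix(500, 20));
    System.out.println(prototype.score(12, 42));

    // Production would hand in the HBase-backed matrices instead, e.g.:
    // new Scorer(hbaseUserModel, hbaseItemModel);  // names assumed
  }
}
```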
    • Logical Schema Composite row keys: each matrix cell is its own HBase row, keyed rowIndex_columnIndex, e.g. key 12_0 → data:value = 0.41; key 12_9 → data:value = 0.41; key 12_22000 → data:value = 0.41
    • Logical Schema Composite row keys: row access by Scan, cell access by Get; atomic row updates must be handled in the application (client sketch below)
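
    With composite keys, the two access patterns map directly onto plain HBase client calls. A minimal sketch using the 2013-era (0.94-style) client API, with the table name assumed to be "matrix" and error handling omitted:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Composite-key schema: one HBase row per matrix cell, keyed "row_col".
public class CompositeKeyAccess {
  static final byte[] DATA = Bytes.toBytes("data");
  static final byte[] VALUE = Bytes.toBytes("value");

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "matrix");  // assumed table name

    // Cell access by Get: fetch matrix cell (12, 9) directly.
    Result cell = table.get(new Get(Bytes.toBytes("12_9")));
    double value = Bytes.toDouble(cell.getValue(DATA, VALUE));
    System.out.println(value);

    // Row access by Scan: all cells of matrix row 12 share the "12_"
    // key prefix; '`' is the ASCII byte right after '_', so this scan
    // covers exactly that prefix.
    Scan scan = new Scan(Bytes.toBytes("12_"), Bytes.toBytes("12`"));
    ResultScanner scanner = table.getScanner(scan);
    for (Result r : scanner) {
      System.out.println(Bytes.toString(r.getRow()));  // e.g. 12_0, 12_9, ...
    }
    scanner.close();
    table.close();
  }
}
```

    One caveat with textual keys: the cells of a row come back in lexicographic rather than numeric order (12_0, 12_10, 12_2, ...); a fixed-width binary encoding of the indices would restore numeric ordering.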
    • Logical Schema Row indices as row keys: a whole matrix row is one HBase row, e.g. key 12 → data:0 = 0.41, data:9 = 0.41, data:22000 = 0.41
    • Logical Schema Row indices as row keys: atomic row updates are handled automatically, since HBase applies mutations to a single row atomically (sketch below)
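
    Under this schema an entire matrix row is one HBase row, so a multi-cell update is a single Put, and HBase applies a Put to one row atomically. A sketch, again with the 0.94-style API and an assumed table name:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Row-index schema: one HBase row per matrix row, one qualifier per column.
public class RowKeyUpdate {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "matrix");  // assumed table name

    // One Put covers every touched cell of matrix row 12. HBase applies
    // a Put to a single row atomically, so readers never observe a
    // half-updated matrix row; no application-side locking is needed.
    byte[] data = Bytes.toBytes("data");
    Put put = new Put(Bytes.toBytes("12"));
    put.add(data, Bytes.toBytes("0"), Bytes.toBytes(0.41));
    put.add(data, Bytes.toBytes("9"), Bytes.toBytes(0.41));
    put.add(data, Bytes.toBytes("22000"), Bytes.toBytes(0.41));
    table.put(put);

    table.close();
  }
}
```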
    • Speed – Cell access/write [slide chart: GET and SET latencies, row index as row key vs. composite row key]
    • Speed – Row access/write [slide chart: GET and SET latencies, row index as row key vs. composite row key]
    • Code github.com/gcapan/mahout/tree/hbase-matrix
    • Future Work MatrixInputFormat, which might replace SequenceFile-based MapReduce inputs
    • Future Work – A little digression Recommender Systems: calculating the score for a user-item pair is easy with HBaseMatrix
    • Future Work – A little digression Recommender Systems: top-N recommendation? Keep all of a user's candidate items in the user row as a nested entity (see Ian Varley's HBase Schema Design)
    • Thank you!