HBaseCon 2013: Using Apache HBase for Large Matrices

Presented by: Gokhan Capan, Dilisim


Transcript

  • 1. HBase for Dealing with Large Matrices
  • 2. Who am I? Leads the data team at Dilisim; researcher at Anadolu University.
  • 3. Machine Learning: some big problems: classifying huge text collections, recommending to millions of users, predicting links in a social network.
  • 4. Recommender Systems: recommenders take large, sparse matrices as input. How would you input a millions × millions matrix?
  • 5. Recommender Systems: [figure: an example input matrix, m users × n items, filled with sample ratings between 0.00 and 4.00]
  • 6. Recommender Systems: state-of-the-art recommender systems learn large models: one factor vector per user and per item, and one parameter vector (on side information) per user and per item.
  • 7. Recommender Systems: [figure: the m users × n items input matrix alongside the learned User Model (m × k) and Item Model (n × k)]
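    A standard reading of the factorization pictured on slide 7 (stated here as background, not as the slide's own words): the m × n rating matrix R is approximated as R ≈ U Vᵀ, where U is the m × k user model and V is the n × k item model, so the predicted rating for user u and item i is the dot product of U's row u and V's row i.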
  • 8. Learning Process: what does a machine learning algorithm need in order to work with that matrix?
  • 9. Machine Learning - Techniques: batch learning: all parameters are updated once per iteration.
  • 10. Machine Learning - Techniques: batch learning: updates can be calculated in parallel using MapReduce (a SequenceFile might be enough as input; see the sketch below).
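    A minimal sketch of the SequenceFile input hinted at on slide 10, following the common Mahout convention of one matrix row per record (IntWritable row index, VectorWritable row); the file path is a placeholder:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.SequenceFile;
      import org.apache.mahout.math.RandomAccessSparseVector;
      import org.apache.mahout.math.Vector;
      import org.apache.mahout.math.VectorWritable;

      public class WriteMatrixRows {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          Path path = new Path("matrix/rows.seq"); // placeholder path
          SequenceFile.Writer writer = SequenceFile.createWriter(
              FileSystem.get(conf), conf, path,
              IntWritable.class, VectorWritable.class);
          try {
            Vector row = new RandomAccessSparseVector(1000000); // n items
            row.setQuick(9, 4.0);      // sparse: store only rated items
            row.setQuick(22000, 3.0);
            writer.append(new IntWritable(12), new VectorWritable(row)); // user 12
          } finally {
            writer.close();
          }
        }
      }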
  • 11. Machine Learning - Techniques: batch learning: the output model should provide random access to rows.
  • 12. Machine Learning - Techniques: online learning: parameters are updated per training example.
  • 13. Machine Learning - Techniques: online learning: each training example results in updates to a row, so it needs random access while learning (see the sketch below).
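    A hedged sketch of why online learning needs random row access: a generic SGD-style factor update, not code from the talk; whether row views write through to the backing store is implementation-dependent.

      import org.apache.mahout.math.Matrix;
      import org.apache.mahout.math.Vector;

      public class OnlineUpdate {
        // userModel is m x k, itemModel is n x k; all names are illustrative
        static void update(Matrix userModel, Matrix itemModel,
                           int user, int item, double rating, double learningRate) {
          Vector u = userModel.viewRow(user); // random access to one row
          Vector v = itemModel.viewRow(item);
          double err = rating - u.dot(v);     // prediction error for this example
          Vector uOld = u.clone();            // keep u's old values for v's update
          u.assign(u.plus(v.times(learningRate * err)));    // u += lr * err * v
          v.assign(v.plus(uOld.times(learningRate * err))); // v += lr * err * u
        }
      }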
  • 14. Machine Learning - Techniques: online learning: the output model should provide random access to rows.
  • 15. Deployment Process: how do you decide to deploy a machine learning model in production?
  • 16. Machine Learning - Deployment: the usual process [flowchart: experiment on a prototype → works well? If yes (Y), deploy in production; if no (N), keep experimenting].
  • 17. Machine Learning - Deployment: how would you turn your prototype into production easily? With a common matrix interface for the in-memory and persistent versions.
  • 18. HBase Backed Matrix: implements the Mahout Matrix interface; dense or sparse.
  • 19. HBase Backed Matrix: random access to cells, random access to rows, iteration over rows, and lazy loading while iterating (a usage sketch follows).
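    A usage sketch of the mahout-math calls behind slide 19; the Matrix/MatrixSlice API shown is standard, and the matrix instance is assumed to be the HBase-backed implementation:

      import org.apache.mahout.math.Matrix;
      import org.apache.mahout.math.MatrixSlice;
      import org.apache.mahout.math.Vector;

      public class MatrixAccessDemo {
        static void demo(Matrix matrix) {
          double cell = matrix.get(12, 9);   // random access to a cell
          Vector row = matrix.viewRow(12);   // random access to a row
          for (MatrixSlice slice : matrix) { // iteration over rows; an
            int rowIndex = slice.index();    // HBase-backed matrix can load
            Vector values = slice.vector();  // each row lazily at this point
          }
        }
      }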
  • 20. HBase Backed Matrix: a common interface for prototype and production; easy to deploy (the model is already persisted).
  • 21. HBase Backed Matrix: matrix operations come from the existing mahout-math library (see the sketch below).
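    Because the matrix implements the Mahout Matrix interface, generic mahout-math operations apply to it unchanged; a sketch (materializing the full product is shown only for illustration):

      import org.apache.mahout.math.Matrix;

      public class MatrixOpsDemo {
        // R_hat = U * V^T, written with plain mahout-math calls
        static Matrix predictions(Matrix userModel, Matrix itemModel) {
          return userModel.times(itemModel.transpose());
        }
      }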
  • 22. Logical Schema: composite row keys. [diagram: one HBase row per matrix cell; row keys 12_0, 12_9, 12_22000 each hold a single column data:value (e.g. 0.41)]
  • 23. Logical Schema: with composite row keys, row access is a scan, cell access is a get, and atomic row updates must be handled in the application (see the sketch below).
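    A minimal sketch of accessing the composite-row-key schema with the HBase 0.94-era client API; the table name is a placeholder, while the "data" family, "value" qualifier, and "_" separator come from slide 22:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.Get;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.ResultScanner;
      import org.apache.hadoop.hbase.client.Scan;
      import org.apache.hadoop.hbase.util.Bytes;

      public class CompositeKeyAccess {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "matrix"); // placeholder table name

          // Cell access by get: one HBase row per matrix cell,
          // row key = "<rowIndex>_<colIndex>"
          Result cell = table.get(new Get(Bytes.toBytes("12_9")));
          double value = Bytes.toDouble(
              cell.getValue(Bytes.toBytes("data"), Bytes.toBytes("value")));

          // Row access by scan: all cells of matrix row 12 share the "12_" prefix
          // (string keys shown for readability; real keys would need fixed-width
          // encoding so that column indices sort numerically)
          Scan scan = new Scan(Bytes.toBytes("12_"), Bytes.toBytes("12_~"));
          ResultScanner scanner = table.getScanner(scan);
          for (Result r : scanner) {
            // each Result holds one matrix cell of row 12
          }
          scanner.close();
          table.close();
        }
      }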
  • 24. Logical Schema: row indices as row keys. [diagram: one HBase row per matrix row; row key 12 holds columns data:0, data:9, data:22000, each storing a cell value such as 0.41]
  • 25. Logical Schema: with row indices as row keys, atomic row updates are handled automatically, since HBase guarantees atomicity per row (see the sketch below).
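    A minimal sketch of the row-index-as-row-key schema: every cell of a matrix row lives in one HBase row, so a single Put covers the whole update and HBase's per-row atomicity makes it atomic (the table name is a placeholder; the "data" family and integer qualifiers follow slide 24):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.util.Bytes;

      public class RowKeyPerMatrixRow {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "matrix"); // placeholder table name

          Put put = new Put(Bytes.toBytes(12)); // matrix row index as row key
          byte[] family = Bytes.toBytes("data");
          put.add(family, Bytes.toBytes(0), Bytes.toBytes(0.41));     // column 0
          put.add(family, Bytes.toBytes(9), Bytes.toBytes(0.41));     // column 9
          put.add(family, Bytes.toBytes(22000), Bytes.toBytes(0.41)); // column 22000
          table.put(put); // one atomic mutation covering three matrix cells

          table.close();
        }
      }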
  • 26. Speed – Cell access/write: [chart: GET and SET timings compared for the row-index-as-row-key and composite-row-key schemas]
  • 27. Speed – Row access/write: [chart: GET and SET timings compared for the row-index-as-row-key and composite-row-key schemas]
  • 28. Code: github.com/gcapan/mahout/tree/hbase-matrix
  • 29. Future Work: a MatrixInputFormat that might replace SequenceFile-based MapReduce inputs.
  • 30. Future Work – a little digression on recommender systems: calculating the score for a user-item pair is easy with HBaseMatrix (a sketch follows).
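    A hedged sketch of pair scoring: with both factor models exposed as Mahout matrices (HBaseMatrix is named on the slide, but this helper is illustrative), scoring a user-item pair is two row fetches and a dot product:

      import org.apache.mahout.math.Matrix;

      public class PairScore {
        static double score(Matrix userModel, Matrix itemModel, int user, int item) {
          // each viewRow is one random row access against the backing store
          return userModel.viewRow(user).dot(itemModel.viewRow(item));
        }
      }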
  • 31. Future Work – a little digression on recommender systems: what about top-N recommendation? Keep all candidate items for a user in the user's row as a nested entity (see Ian Varley's HBase Schema Design); a sketch follows.
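    A hedged sketch of the nested-entity idea; the "candidates" family, the int-qualifier/double-value encoding, and the table name are all assumptions, since the slide only names the pattern. Top-N then costs a single row Get plus an in-memory heap:

      import java.util.Map;
      import java.util.NavigableMap;
      import java.util.PriorityQueue;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.Get;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.util.Bytes;

      public class TopNFromUserRow {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "users"); // placeholder table name
          int n = 10;

          Result row = table.get(new Get(Bytes.toBytes(12))); // user 12's row
          NavigableMap<byte[], byte[]> candidates =
              row.getFamilyMap(Bytes.toBytes("candidates")); // itemId -> score

          // smallest-score-first heap retains only the current top-N
          PriorityQueue<double[]> topN = new PriorityQueue<double[]>(
              n, (a, b) -> Double.compare(a[1], b[1]));
          for (Map.Entry<byte[], byte[]> e : candidates.entrySet()) {
            topN.offer(new double[] {
                Bytes.toInt(e.getKey()), Bytes.toDouble(e.getValue())});
            if (topN.size() > n) {
              topN.poll(); // drop the weakest candidate
            }
          }
          table.close();
        }
      }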
  • 32. Thank you!