Oscon data-2011-ted-dunning

These are the slides for my half of the OSCON Mahout tutorial.

Presentation Transcript

    • Hands-on Classification
    • Preliminaries
      • Code is available from github:
        – git@github.com:tdunning/Chapter-16.git
      • EC2 instances available
      • Thumb drives also available
      • Email to ted.dunning@gmail.com
      • Twitter @ted_dunning
    • A Quick Review
      • What is classification?
        – goes-ins: predictors
        – goes-outs: target variable
      • What is classifiable data?
        – continuous, categorical, word-like, text-like
        – uniform schema
      • How do we convert from classifiable data to a feature vector?
    • Data Flow
      – Not quite so simple
    • Classifiable Data
      • Continuous
        – A number that represents a quantity, not an id
        – Blood pressure, stock price, latitude, mass
      • Categorical
        – One of a known, small set (color, shape)
      • Word-like
        – One of a possibly unknown, possibly large set
      • Text-like
        – Many word-like things, usually unordered
    • But that isn’t quite there
      • Learning algorithms need feature vectors
        – Have to convert from data to vector
      • Can assign one location per feature
        – or category
        – or word
      • Can assign one or more locations with hashing
        – scary
        – but safe on average
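
As a concrete illustration of the "one location per feature" option on the slide above, here is a minimal, self-contained Java sketch (not from the tutorial repository; all names, offsets and sizes are made up) that assigns a dictionary index to each categorical value and word and writes one record into a plain double[] vector.

import java.util.HashMap;
import java.util.Map;

// Sketch of "one location per feature/category/word" encoding.
// Field offsets and the vector size are illustrative choices.
public class DictionaryEncoder {
    private final Map<String, Integer> dictionary = new HashMap<>();

    // Returns a stable index for a value, growing the dictionary as needed.
    public int indexOf(String value) {
        return dictionary.computeIfAbsent(value, v -> dictionary.size());
    }

    public static void main(String[] args) {
        DictionaryEncoder shapes = new DictionaryEncoder();
        DictionaryEncoder words = new DictionaryEncoder();

        // One record: a continuous value, a categorical value, some text.
        double bloodPressure = 120.0;
        String shape = "circle";
        String[] text = {"hands", "on", "classification"};

        double[] vector = new double[100];           // size fixed up front
        vector[0] = bloodPressure;                   // continuous: copied directly
        vector[1 + shapes.indexOf(shape)] = 1.0;     // categorical: one-hot
        for (String w : text) {                      // text-like: bag of words
            vector[10 + words.indexOf(w)] += 1.0;
        }
    }
}

The weakness shows up in the hard-coded offsets: you have to know, or bound, the number of categories and words ahead of time, which is exactly what the hashed encoding discussed next avoids.
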
    • Data Flow
    • Classifiable Data Vectors
    • Hashed Encoding
    • What about collisions?
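
The collision question above is usually answered with multiple probes. Below is a minimal plain-Java sketch of the hashing trick with two probes per feature: each feature updates two pseudo-randomly chosen locations, so two features rarely collide in both. This illustrates the idea only; it is not Mahout's encoder API, and the hash function and vector size are arbitrary choices.

// Sketch of hashed encoding with two probes per feature.
// Hash function and vector size are illustrative, not Mahout's choices.
public class HashedEncoder {
    private final double[] vector;

    public HashedEncoder(int size) {
        this.vector = new double[size];
    }

    // Seeded location for a feature; a real encoder would use a stronger hash.
    private int probe(String feature, int seed) {
        int h = (feature + "#" + seed).hashCode();
        return Math.floorMod(h, vector.length);
    }

    // Spread each feature's weight over two locations to soften collisions.
    public void add(String feature, double weight) {
        vector[probe(feature, 0)] += weight;
        vector[probe(feature, 1)] += weight;
    }

    public double[] vector() {
        return vector;
    }

    public static void main(String[] args) {
        HashedEncoder enc = new HashedEncoder(1000);
        for (String w : new String[] {"hands", "on", "classification"}) {
            enc.add(w, 1.0);
        }
    }
}

Individual collisions still perturb a few weights, but averaged over many features the learner sees essentially the same signal: scary, but safe on average, as the slide puts it.
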
    • Let’s write some code (cue relaxing background music)
    • Generating new features
      • Sometimes the existing features are difficult to use
      • Restating the geometry using new reference points may help
      • Automatic reference points using k-means can be better than manual references
    • K-means using target
    • K-means features
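
A minimal sketch of the k-means-derived features described in the preceding slides, assuming the centroids have already been found (they are hard-coded here purely for illustration): each point is re-described by its distance to every reference point, and those distances can be appended to the original feature vector.

// Sketch of k-means-derived features: each point is re-described by its
// distance to every centroid found by k-means. Centroid values are made up.
public class KMeansFeatures {
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // One new feature per reference point (centroid).
    static double[] features(double[] point, double[][] centroids) {
        double[] f = new double[centroids.length];
        for (int i = 0; i < centroids.length; i++) {
            f[i] = distance(point, centroids[i]);
        }
        return f;
    }

    public static void main(String[] args) {
        double[][] centroids = {{0, 0}, {5, 5}, {0, 5}};       // output of a k-means run
        double[] point = {1.0, 4.0};
        double[] kMeansFeatures = features(point, centroids);  // append to the original vector
    }
}
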
    • More code! (cue relaxing background music)
    • Integration Issues
      • Feature extraction is ideal for map-reduce
        – Side data adds some complexity
      • Clustering works great with map-reduce
        – Cluster centroids to HDFS
      • Model training works better sequentially
        – Need centroids in normal files
      • Model deployment shouldn’t depend on HDFS
    • Parallel Stochastic Gradient Descent
      – (diagram) Input → Train sub models → Average models → Model
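
The diagram above is the train-then-average pattern. Here is a simplified, self-contained sketch of that idea for a linear (logistic) model: each data split trains its own weights with one SGD pass, and the final model is the element-wise average of the weight vectors. This is not Mahout's implementation; the learning-rate handling and data layout are deliberately simplified.

// Sketch of the train-then-average pattern: each split trains its own
// logistic-regression weights with one SGD pass, then the weight vectors
// are averaged. Deliberately simplified; not Mahout's implementation.
public class ParallelSgdSketch {
    // One SGD pass over a split: features x, 0/1 labels y.
    static double[] trainSubModel(double[][] x, int[] y, double rate, int dims) {
        double[] w = new double[dims];
        for (int i = 0; i < x.length; i++) {
            double z = 0;
            for (int j = 0; j < dims; j++) {
                z += w[j] * x[i][j];
            }
            double p = 1.0 / (1.0 + Math.exp(-z));       // predicted probability
            for (int j = 0; j < dims; j++) {
                w[j] += rate * (y[i] - p) * x[i][j];     // gradient step
            }
        }
        return w;
    }

    // Element-wise average of the sub-model weight vectors.
    static double[] average(double[][] models) {
        double[] avg = new double[models[0].length];
        for (double[] m : models) {
            for (int j = 0; j < m.length; j++) {
                avg[j] += m[j] / models.length;
            }
        }
        return avg;
    }
}
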
    • Variational Dirichlet Assignment
      – (diagram) Input → Gather sufficient statistics → Update model → Model
    • Old tricks, new dogs
      • Mapper
        – Assign point to cluster (centroids read from local disk, copied from HDFS to local disk by the distributed cache)
        – Emit cluster id, (1, point)
      • Combiner and reducer
        – Sum counts, weighted sum of points
        – Emit cluster id, (n, sum/n)
      • Output to HDFS (written by map-reduce)
    • Old tricks, new dogs
      • Mapper
        – Assign point to cluster (centroids read from NFS)
        – Emit cluster id, 1, point
      • Combiner and reducer
        – Sum counts, weighted sum of points
        – Emit cluster id, n, sum/n
      • Output to HDFS (written by map-reduce to MapR FS)
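
The two slides above describe the same single k-means iteration, differing only in where the centroids are read from. Written out as plain Java (rather than actual Hadoop Mapper/Combiner/Reducer classes, and with illustrative names and types throughout), the logic looks roughly like this:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of one k-means step as described on the slides:
// the "mapper" assigns each point to its nearest centroid and emits
// (cluster id, (1, point)); the "combiner/reducer" sums counts and points
// and emits the new centroid as sum/n.
public class KMeansStepSketch {
    record Partial(long count, double[] sum) {}

    static int nearest(double[] point, double[][] centroids) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            double d = 0;
            for (int j = 0; j < point.length; j++) {
                double diff = point[j] - centroids[c][j];
                d += diff * diff;
            }
            if (d < bestDist) {
                bestDist = d;
                best = c;
            }
        }
        return best;
    }

    // "Mapper": one (cluster id, (1, point)) pair per input point.
    static Map<Integer, List<Partial>> map(double[][] points, double[][] centroids) {
        Map<Integer, List<Partial>> emitted = new HashMap<>();
        for (double[] p : points) {
            emitted.computeIfAbsent(nearest(p, centroids), k -> new ArrayList<>())
                   .add(new Partial(1, p.clone()));
        }
        return emitted;
    }

    // "Combiner/reducer": sum counts and weighted points, emit sum/n.
    static double[] reduce(List<Partial> partials) {
        long n = 0;
        double[] sum = new double[partials.get(0).sum().length];
        for (Partial part : partials) {
            n += part.count();
            for (int j = 0; j < sum.length; j++) {
                sum[j] += part.sum()[j];
            }
        }
        for (int j = 0; j < sum.length; j++) {
            sum[j] /= n;
        }
        return sum;
    }
}

Emitting the count n alongside the partial mean is what lets a later stage re-weight and merge partial results correctly.
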
    • Modeling architecture
      – (diagram) Input Data → Feature extraction, join and down sampling (map-reduce) → Sequential SGD Learning; side-data now via NFS