Speaker notes:
  • The immediate consequence is a similar growth in datasets. Notice that these are all different kinds of datasets, corresponding to different kinds of instruments.
  • Here is one of the instruments responsible: the Sloan Digital Sky Survey, with which I am affiliated.
  • In biology! In Walmart transactions! The trend in science is toward data; the trend in data is more of it, and more measurements. Once it is about data, the focus starts to fall on tools for analyzing data. [The sciences are becoming more data-driven; statistics can directly solve science problems.]
  • Here is a cartoon of a conversation that is implicitly happening in all of these areas. This is Bob Nichol, who wants to use this new data to ask astrophysicist-type questions. Here is a machine learning / statistics type, who says: "Ah, that's a density estimation problem; why don't you try a kernel density estimator? That's the best method available. Here are some of our best methods for different problems, and, by the way, these are their computational costs, where N is the number of data." Generally the best methods require the most computation. (A rough sketch of that cost follows below.)
  • Then Bob says: "Yes, but for scientific reasons this analysis requires at least 1 million points." And the conversation pretty much stops there.
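To make the cost in that exchange concrete, here is a minimal sketch (mine, not the speaker's code) of a plain kernel density estimator: evaluating the density at N query points against N data points is the O(N^2) computation that becomes infeasible at a million points.

```python
import numpy as np

def naive_kde(data, queries, bandwidth):
    """Gaussian kernel density estimate at each query point.

    Cost is O(N_queries * N_data): every query is compared against every
    data point, which is what becomes infeasible at N ~ 10^6 and beyond.
    """
    n, d = data.shape
    norm = n * (bandwidth * np.sqrt(2 * np.pi)) ** d
    densities = np.empty(len(queries))
    for i, q in enumerate(queries):
        sq_dists = np.sum((data - q) ** 2, axis=1)
        densities[i] = np.sum(np.exp(-0.5 * sq_dists / bandwidth ** 2)) / norm
    return densities

# Tiny example: density of a 2-D Gaussian sample, evaluated at the sample itself.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2))
p = naive_kde(x, x, bandwidth=0.3)
```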
  • There are two types of challenges at the edge of statistics and machine learning. The first is the usual set of issues in statistical theory, which have to do with the formulation and validation of models. The other challenge, which is actually always present but hits us right in the face with massive datasets, is the computational challenge. Of the two, this is where the least progress has been made, by far. My interests are in both categories, but this talk is going to be all about the computational problems: we can keep working on sophisticated statistics, but we already have sophisticated methods that we cannot compute.
  • So we are going to take the hard road rather than cop out: make faster algorithms that compute the same thing, but in less CPU time. The goal of this talk is to consider how we can speed up each of these seven machine learning and statistics methods.
  • Our situation is that we have the data and we get to pre-organize it however we want, for example by building a data structure on it, so that whenever someone comes along with a query point and a radius we can quickly return the count to them. The kd-tree is the most popular data structure for this kind of task.
  • Now, that was a pointwise concept: we were asking for the density, the height of the distribution, at a query point q. Let's look at a way to get a global summarization of an entire distribution. We can ask how many pairs of points in the dataset have distance less than some radius r. If we plot this count for different values of the radius, we get a curve which we call the 2-point correlation function. This function is a characterization of the entire distribution. (A brute-force sketch of this computation follows below.)
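As a concrete baseline (again my sketch, not the talk's code): counting, for each radius r, the pairs closer than r gives the raw curve described above, at O(N^2) cost. The tree-based algorithms later in the talk compute the same counts without touching every pair.

```python
import numpy as np

def naive_two_point(data, radii):
    """Number of pairs with distance < r, for each r in radii.

    Brute force over all N*(N-1)/2 pairs, i.e. O(N^2) per dataset.
    """
    n = len(data)
    counts = np.zeros(len(radii), dtype=np.int64)
    for i in range(n):
        d = np.sqrt(np.sum((data[i + 1:] - data[i]) ** 2, axis=1))
        for k, r in enumerate(radii):
            counts[k] += np.count_nonzero(d < r)
    return counts

# Example: pair counts of 2,000 uniform points at a few radii.
rng = np.random.default_rng(1)
pts = rng.uniform(size=(2000, 3))
print(naive_two_point(pts, radii=[0.05, 0.1, 0.2]))
```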
  • These are statistics that characterize a dataset, and you can use them to infer all sorts of things about distributions. The 2-point function, which appears under many names in many fields, is just the n=2 term of the n-point correlation function hierarchy, which forms the foundation of spatial statistics and is used heavily in many sciences.
  • It turns out that n=2 is not enough for our purposes. Here are two very different distributions that have exactly the same 2-point function and can only be distinguished by examining their 3-point function. The 3-point is very important to us because, if the standard model of cosmology and physics is correct, when we compute the 3-point function of the observed universe we should see that the 3-point (actually the reduced 3-point) function and all higher terms are zero.
  • We are going to create a recursive algorithm where, at every point in the recursion, we are looking at three nodes, one from each tree (if we are doing cross-correlations they could correspond to different datasets; otherwise they may just be pointers to different nodes of the same tree). We then ask how many matching triplets can be drawn from these three nodes, one point from each.
  • The biggest previous scientifically valid 3-point computation was on about 20,000 points. Bob ran our code on 75 million points, which we estimate would take about 150 years using the exhaustive method. Now, as computer scientists we would love to know its runtime analytically, but we made a painful choice when we decided to go with these adaptive data structures. We went with them because they are the fastest practical approach in general dimension; however, there does not exist a mathematical expression that realistically describes the runtime of a proximity search with such a structure. Nonetheless, we can postulate a runtime based on the recurrence suggested by our algorithm, which would be N to the power of the log of the tuple order, and that matches what we see empirically pretty well. So that looks pretty good; in fact the only thing wrong with an eight-orders-of-magnitude speedup is that you might never be able to do it again in your whole career.

    1. How to do Machine Learning on Massive Astronomical Datasets. Alexander Gray, Georgia Institute of Technology, College of Computing, Computational Science and Engineering. FASTlab: Fundamental Algorithmic and Statistical Tools Laboratory.
    2. The FASTlab: Fundamental Algorithmic and Statistical Tools Laboratory
       • Arkadas Ozakin: Research scientist, PhD Theoretical Physics
       • Dong Ryeol Lee: PhD student, CS + Math
       • Ryan Riegel: PhD student, CS + Math
       • Parikshit Ram: PhD student, CS + Math
       • William March: PhD student, Math + CS
       • James Waters: PhD student, Physics + CS
       • Hua Ouyang: PhD student, CS
       • Sooraj Bhat: PhD student, CS
       • Ravi Sastry: PhD student, CS
       • Long Tran: PhD student, CS
       • Michael Holmes: PhD student, CS + Physics (co-supervised)
       • Nikolaos Vasiloglou: PhD student, EE (co-supervised)
       • Wei Guan: PhD student, CS (co-supervised)
       • Nishant Mehta: PhD student, CS (co-supervised)
       • Wee Chin Wong: PhD student, ChemE (co-supervised)
       • Abhimanyu Aditya: MS student, CS
       • Yatin Kanetkar: MS student, CS
       • Praveen Krishnaiah: MS student, CS
       • Devika Karnik: MS student, CS
       • Prasad Jakka: MS student, CS
    3. Exponential growth in dataset sizes (instruments and dataset sizes) [Science, Szalay & J. Gray, 2001]
       • CMB maps: 1990 COBE 1,000; 2000 Boomerang 10,000; 2002 CBI 50,000; 2003 WMAP 1 million; 2008 Planck 10 million
       • Local redshift surveys: 1986 CfA 3,500; 1996 LCRS 23,000; 2003 2dF 250,000; 2005 SDSS 800,000
       • Angular surveys: 1970 Lick 1M; 1990 APM 2M; 2005 SDSS 200M; 2010 LSST 2B
    4. 1993-1999: DPOSS. 1999-2008: SDSS. Coming: Pan-STARRS, LSST.
    5. Happening everywhere! Molecular biology (cancer): microarray chips. Particle events (LHC): particle colliders. Simulations (Millennium): microprocessors. Network traffic (spam): fiber optics. Rates on the slide: 300M/day, 1B, 1M/sec.
    6. Astrophysicist: How did galaxies evolve? What was the early universe like? Does dark energy exist? Is our model (GR+inflation) right? Machine learning / statistics guy. Collaborators: R. Nichol, Inst. Cosmol. Gravitation; A. Connolly, U. Pitt Physics; C. Miller, NOAO; R. Brunner, NCSA; G. Djorgovsky, Caltech; G. Kulkarni, Inst. Cosmol. Gravitation; D. Wake, Inst. Cosmol. Gravitation; R. Scranton, U. Pitt Physics; M. Balogh, U. Waterloo Physics; I. Szapudi, U. Hawaii Inst. Astronomy; G. Richards, Princeton Physics; A. Szalay, Johns Hopkins Physics.
    7. (Same questions and collaborator list.) Machine learning / statistics guy: kernel density estimator, n-point spatial statistics, nonparametric Bayes classifier, support vector machine, nearest-neighbor statistics, Gaussian process regression, hierarchical clustering. Costs shown: O(N^n), O(N^2), O(N^2), O(N^2), O(N^2), O(N^3), O(c^D T(N)).
    8. (Same questions, methods, and collaborator list.) Costs shown: O(N^n), O(N^2), O(N^2), O(N^3), O(N^2), O(N^3), O(N^3).
    9. (Same as slide 8.) Astrophysicist: "But I have 1 million points."
    10. The challenge: state-of-the-art statistical methods (best accuracy with fewest assumptions) with orders-of-magnitude more efficiency, for large N (#data), D (#features), M (#models). Reduce data? Use a simpler model? Approximation with poor or no error bounds? → Poor results.
    11. How to do Machine Learning on Massive Astronomical Datasets? (1) Choose the appropriate statistical task and method for the scientific question. (2) Use the fastest algorithm and data structure for the statistical method. (3) Put it in software.
    12. (The same three-step outline, repeated.)
    13. 10 data analysis problems, and scalable tools we'd like for them.
       1. Querying (e.g. characterizing a region of space): spherical range-search O(N); orthogonal range-search O(N); k-nearest-neighbors O(N); all-k-nearest-neighbors O(N^2)
       2. Density estimation (e.g. comparing galaxy types): mixture of Gaussians; kernel density estimation O(N^2); L2 density tree [Ram and Gray, in prep]; manifold kernel density estimation O(N^3) [Ozakin and Gray 2008, to be submitted]; hyper-kernel density estimation O(N^4) [Sastry and Gray 2008, submitted]
    14. 10 data analysis problems, and scalable tools we'd like for them.
       3. Regression (e.g. photometric redshifts): linear regression O(D^2); kernel regression O(N^2); Gaussian process regression/kriging O(N^3)
       4. Classification (e.g. quasar detection, star-galaxy separation): k-nearest-neighbor classifier O(N^2); nonparametric Bayes classifier O(N^2); support vector machine (SVM) O(N^3); non-negative SVM O(N^3) [Guan and Gray, in prep]; false-positive-limiting SVM O(N^3) [Sastry and Gray, in prep]; separation map O(N^3) [Vasiloglou, Gray, and Anderson 2008, submitted]
    15. 10 data analysis problems, and scalable tools we'd like for them.
       5. Dimension reduction (e.g. galaxy or spectra characterization): principal component analysis O(D^2); non-negative matrix factorization; kernel PCA O(N^3); maximum variance unfolding O(N^3); co-occurrence embedding O(N^3) [Ozakin and Gray, in prep]; rank-based manifolds O(N^3) [Ouyang and Gray 2008, ICML]; isometric non-negative matrix factorization O(N^3) [Vasiloglou, Gray, and Anderson 2008, submitted]
       6. Outlier detection (e.g. new object types, data cleaning): by density estimation, by dimension reduction; by robust Lp estimation [Ram, Riegel and Gray, in prep]
    16. 10 data analysis problems, and scalable tools we'd like for them.
       7. Clustering (e.g. automatic Hubble sequence): by dimension reduction, by density estimation; k-means; mean-shift segmentation O(N^2); hierarchical clustering ("friends-of-friends") O(N^3)
       8. Time series analysis (e.g. asteroid tracking, variable objects): Kalman filter O(D^2); hidden Markov model O(D^2); trajectory tracking O(N^n); Markov matrix factorization [Tran, Wong, and Gray 2008, submitted]; functional independent component analysis [Mehta and Gray 2008, submitted]
    17. 10 data analysis problems, and scalable tools we'd like for them.
       9. Feature selection and causality (e.g. which features predict star/galaxy): LASSO regression; L1 SVM; Gaussian graphical model inference and structure search; discrete graphical model inference and structure search; 0-1 feature-selecting SVM [Guan and Gray, in prep]; L1 Gaussian graphical model inference and structure search [Tran, Lee, Holmes, and Gray, in prep]
       10. 2-sample testing and matching (e.g. cosmological validation, multiple surveys): minimum spanning tree O(N^3); n-point correlation O(N^n); bipartite matching/Gaussian graphical model inference O(N^3) [Waters and Gray, in prep]
    18. How to do Machine Learning on Massive Astronomical Datasets? (The same three-step outline, repeated.)
    19. Core computational problems: what are the basic mathematical operations, or bottleneck subroutines, that we can focus on developing fast algorithms for?
    20. Core computational problems: aggregations; generalized N-body problems; graphical model inference; linear algebra; optimization.
    21. Core computational problems (aggregations, GNPs, graphical models, linear algebra, optimization) behind each task:
       • Querying: nearest-neighbor, spherical range-search, orthogonal range-search, all-nn
       • Density estimation: kernel density estimation, mixture of Gaussians
       • Regression: linear regression, kernel regression, Gaussian process regression
       • Classification: nearest-neighbor classifier, nonparametric Bayes classifier, support vector machine
       • Dimension reduction: principal component analysis, non-negative matrix factorization, kernel PCA, maximum variance unfolding
       • Outlier detection: by robust L2 estimation, by density estimation, by dimension reduction
       • Clustering: k-means, mean-shift, hierarchical clustering ("friends-of-friends"), by dimension reduction, by density estimation
       • Time series analysis: Kalman filter, hidden Markov model, trajectory tracking
       • Feature selection and causality: LASSO regression, L1 support vector machine, Gaussian graphical models, discrete graphical models
       • 2-sample testing and matching: n-point correlation, bipartite matching
    22. Aggregations. How it appears: nearest-neighbor, spherical range-search, orthogonal range-search. Common methods: locality-sensitive hashing, kd-trees, metric trees, disk-based trees. Mathematical challenges: high dimensions, provable runtime, distribution-dependent analysis, parallel indexing. Mathematical topics: computational geometry, randomized algorithms.
    23. kd-trees: the most widely-used space-partitioning tree [Bentley 1975], [Friedman, Bentley & Finkel 1977], [Moore & Lee 1995]. How can we compute this efficiently?
    24-29. A kd-tree: levels 1 through 6 (build illustration).
    30-33. Range-count recursive algorithm (recursive descent over the tree).
    34. Range-count recursive algorithm: Pruned! (inclusion)
    35-41. Range-count recursive algorithm (recursion continues).
    42. Range-count recursive algorithm: Pruned! (exclusion)
    43-44. Range-count recursive algorithm.
    45. Range-count recursive algorithm: the fastest practical algorithm [Bentley 1975]; our algorithms can use any tree. (A sketch of the idea follows below.)
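A minimal sketch of the idea behind slides 24-45 (my illustration, not the talk's implementation): build a kd-tree, then answer range-count queries by recursing over nodes, pruning a node entirely when its bounding box lies wholly outside the query ball (exclusion) or wholly inside it (inclusion).

```python
import numpy as np

class KDNode:
    def __init__(self, points):
        self.points = points
        self.lo, self.hi = points.min(axis=0), points.max(axis=0)  # bounding box
        self.left = self.right = None
        if len(points) > 32:                          # leaf size
            dim = int(np.argmax(self.hi - self.lo))   # split the widest dimension
            order = np.argsort(points[:, dim])
            mid = len(points) // 2
            self.left = KDNode(points[order[:mid]])
            self.right = KDNode(points[order[mid:]])

def range_count(node, q, r):
    """Count points within distance r of query q, with node pruning."""
    # Distance bounds from q to the node's bounding box.
    nearest = np.maximum(node.lo, np.minimum(q, node.hi))
    min_d = np.linalg.norm(q - nearest)
    max_d = np.linalg.norm(np.maximum(np.abs(q - node.lo), np.abs(q - node.hi)))
    if min_d > r:                 # exclusion: box entirely outside the ball
        return 0
    if max_d <= r:                # inclusion: box entirely inside the ball
        return len(node.points)
    if node.left is None:         # leaf: check points directly
        return int(np.count_nonzero(np.linalg.norm(node.points - q, axis=1) <= r))
    return range_count(node.left, q, r) + range_count(node.right, q, r)

rng = np.random.default_rng(2)
data = rng.uniform(size=(100_000, 2))
tree = KDNode(data)
print(range_count(tree, q=np.array([0.5, 0.5]), r=0.1))
```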
    46. Aggregations, interesting approaches. Cover-trees [Beygelzimer et al 2004]: provable runtime; consistently good performance, even in higher dimensions. Learning trees [Cayton et al 2007]: learning data-optimal data structures; improves performance over kd-trees. MapReduce [Dean and Ghemawat 2004]: brute-force, but makes HPC automatic for a certain problem form. Approximation in rank [Ram, Ouyang and Gray]: approximate NN in terms of distance conflicts with known theoretical results; is approximation in rank feasible?
    47. Generalized N-body problems. How it appears: kernel density estimation, mixture of Gaussians, kernel regression, Gaussian process regression, nearest-neighbor classifier, nonparametric Bayes classifier, support vector machine, kernel PCA, hierarchical clustering, trajectory tracking, n-point correlation. Common methods: FFT, Fast Gauss Transform, Well-Separated Pair Decomposition. Mathematical challenges: high dimensions, query-dependent relative error guarantees, parallelism, beyond pairwise potentials. Mathematical topics: approximation theory, computational physics, computational geometry.
    48. Generalized N-body problems, interesting approach: the Generalized Fast Multipole Method, aka multi-tree methods [Gray and Moore 2001, NIPS; Riegel, Boyer and Gray]. Fastest practical algorithms for the problems to which it has been applied; hard query-dependent relative error bounds; automatic parallelization (THOR: Tree-based Higher-Order Reduce) [Boyer, Riegel and Gray, to be submitted].
    49. Characterization of an entire distribution? The 2-point correlation: "How many pairs have distance < r?" Plotting the count against r gives the 2-point correlation function.
    50. The n-point correlation functions. Spatial inferences: filaments, clusters, voids, homogeneity, isotropy, 2-sample testing, ... Foundation for the theory of point processes [Daley, Vere-Jones 1972]; unifies spatial statistics [Ripley 1976]. Used heavily in biostatistics, cosmology, particle physics, statistical physics. 2pcf and 3pcf definitions (see below).
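The definitions on this slide did not survive the export. The standard forms from spatial statistics and cosmology (presumably what the slide showed; stated here as background, not taken from the deck) define the 2-point and 3-point functions through the joint probability of finding points in volume elements dV_i at the given separations, with mean density n-bar:

```latex
% 2pcf: excess probability, over Poisson, of a pair at separation r_{12}
dP_{12} = \bar{n}^2 \left[ 1 + \xi(r_{12}) \right] dV_1\, dV_2

% 3pcf: the connected term \zeta after removing the 2-point contributions
dP_{123} = \bar{n}^3 \left[ 1 + \xi(r_{12}) + \xi(r_{23}) + \xi(r_{31})
           + \zeta(r_{12}, r_{23}, r_{31}) \right] dV_1\, dV_2\, dV_3
```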
    51. 3-point correlation: "How many triples have pairwise distances < r1, r2, r3?" Standard model: the (reduced) 3-point and all higher-order terms should be zero!
    52. How can we count n-tuples efficiently? "How many triples have pairwise distances < r?"
    53. Use n trees! [Gray & Moore, NIPS 2000]
    54. "How many valid triangles a-b-c (with the required pairwise distances r) could there be?" count{A, B, C} = ?
    55. count{A, B, C} = count{A, B, C.left} + count{A, B, C.right}
    56. count{A, B, C} = count{A, B, C.left} + count{A, B, C.right}
    57. count{A, B, C} = ?
    58. Exclusion: count{A, B, C} = 0!
    59. count{A, B, C} = ?
    60. Inclusion: count{A, B, C} = |A| x |B| x |C|. (A sketch of this recursion follows below.)
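A minimal sketch of the three-tree recursion on slides 54-60 (my illustration, assuming the simple single-threshold rule that all three pairwise distances are < r, and that A, B, C come from three distinct datasets; the same-dataset case needs extra bookkeeping to avoid double counting, as the speaker notes mention). Each call looks at three nodes: if the node-to-node distance bounds prove exclusion the count is 0, if they prove inclusion it is |A|·|B|·|C|, and otherwise the largest node is split and the two subproblem counts are added.

```python
import numpy as np
from itertools import combinations

class Node:
    def __init__(self, pts):
        self.pts, self.lo, self.hi = pts, pts.min(axis=0), pts.max(axis=0)
        self.children = []
        if len(pts) > 16:
            dim = int(np.argmax(self.hi - self.lo))
            order = np.argsort(pts[:, dim]); mid = len(pts) // 2
            self.children = [Node(pts[order[:mid]]), Node(pts[order[mid:]])]

def min_dist(a, b):   # lower bound on any point-pair distance between two boxes
    gap = np.maximum(0.0, np.maximum(a.lo - b.hi, b.lo - a.hi))
    return float(np.linalg.norm(gap))

def max_dist(a, b):   # upper bound on any point-pair distance between two boxes
    span = np.maximum(np.abs(a.hi - b.lo), np.abs(b.hi - a.lo))
    return float(np.linalg.norm(span))

def count_triples(A, B, C, r):
    """Triples (a in A, b in B, c in C) with all pairwise distances < r."""
    pairs = list(combinations((A, B, C), 2))
    if any(min_dist(x, y) >= r for x, y in pairs):
        return 0                                     # exclusion: no triple can qualify
    if all(max_dist(x, y) < r for x, y in pairs):
        return len(A.pts) * len(B.pts) * len(C.pts)  # inclusion: every triple qualifies
    big = max((A, B, C), key=lambda n: len(n.pts))
    if not big.children:                             # all leaves: brute force this block
        total = 0
        for a in A.pts:
            for b in B.pts:
                if np.linalg.norm(a - b) >= r: continue
                for c in C.pts:
                    if np.linalg.norm(a - c) < r and np.linalg.norm(b - c) < r:
                        total += 1
        return total
    nodes = [A, B, C]
    i = nodes.index(big)
    return sum(count_triples(*(nodes[:i] + [ch] + nodes[i + 1:]), r)
               for ch in big.children)

rng = np.random.default_rng(3)
A, B, C = (Node(rng.uniform(size=(500, 3))) for _ in range(3))
print(count_triples(A, B, C, r=0.1))
```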
    61. 3-point runtime (biggest previous: 20K points). VIRGO simulation data, N = 75,000,000. Naive: 5x10^9 sec (~150 years); multi-tree: 55 sec (exact). Scaling: n=2: O(N); n=3: O(N^log 3); n=4: O(N^2).
    62. Generalized N-body problems, interesting approaches (for n-point): n-tree algorithms [Gray and Moore 2001, NIPS; Moore et al. 2001, Mining the Sky], the first efficient exact algorithm for n-point correlations; Monte Carlo n-tree [Waters, Riegel and Gray], orders of magnitude faster.
    63. Interesting approach (for EMST): dual-tree Boruvka algorithm [March and Gray]; note this is a cubic problem. Interesting approach (N-body decision problems): dual-tree bounding with hybrid tree expansion [Liu, Moore, and Gray 2004; Gray and Riegel 2004, CompStat; Riegel and Gray 2007, SDM], an exact classification algorithm.
    64. Interesting approach (Gaussian kernel): dual-tree with multipole/Hermite expansions [Lee, Gray and Moore 2005, NIPS; Lee and Gray 2006, UAI], ultra-accurate fast kernel summations. Interesting approach (arbitrary kernel): automatic derivation of hierarchical series expansions [Lee and Gray], for a large class of kernel functions.
    65. Interesting approach (summative forms): multi-scale Monte Carlo [Holmes, Gray, Isbell 2006, NIPS; Holmes, Gray, Isbell 2007, UAI], very fast bandwidth learning. Interesting approach (summative forms): Monte Carlo multipole methods [Lee and Gray 2008, NIPS], uses an SVD tree.
    66. Interesting approach (for multi-body potentials in physics): higher-order multipole methods [Lee, Waters, Ozakin, and Gray, et al.], the first fast algorithm for higher-order potentials. Interesting approach (for quantum-level simulation): 4-body treatment of Hartree-Fock [March and Gray, et al.].
    67. Graphical model inference. How it appears: hidden Markov models, bipartite matching, Gaussian and discrete graphical models. Common methods: belief propagation, expectation propagation. Mathematical challenges: large cliques, upper and lower bounds, graphs with loops, parallelism. Mathematical topics: variational methods, statistical physics, turbo codes.
    68. Interesting method (for discrete models): survey propagation [Mezard et al 2002], good results for combinatorial optimization, based on statistical-physics ideas. Interesting method (for discrete models): expectation propagation [Minka 2001], a variational method based on a moment-matching idea. Interesting method (for Gaussian models): Lp structure search, then solve a linear system for inference [Tran, Lee, Holmes, and Gray].
    69. Linear algebra. How it appears: linear regression, Gaussian process regression, PCA, kernel PCA, Kalman filter. Common methods: QR, Krylov, ... Mathematical challenges: numerical stability, sparsity preservation, ... Mathematical topics: linear algebra, randomized algorithms, Monte Carlo.
    70. Interesting method (for probably-approximate k-rank SVD): Monte Carlo k-rank SVD [Frieze, Drineas, et al. 1998-2008]; sample either columns or rows from the squared-length distribution; for rank-k matrix approximation, k must be known. Interesting method (for probably-approximate full SVD): QUIC-SVD [Holmes, Gray, Isbell 2008, NIPS]; QUIK-SVD [Holmes and Gray]; sample using cosine trees and stratification; builds the tree as needed; full SVD, automatically sets the rank based on the desired error. (A sketch of the sampling idea follows below.)
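A rough sketch of the row-sampling idea the slide attributes to [Frieze, Drineas, et al.]: draw rows with probability proportional to squared length, rescale, and take the SVD of the small sample as an approximation to the top-k subspace. This is my illustration of that general technique, not the QUIC-SVD algorithm itself (which samples with cosine trees and stratification).

```python
import numpy as np

def length_squared_svd(A, k, n_samples):
    """Approximate rank-k right singular vectors of A by row sampling.

    Rows are drawn with probability proportional to their squared norm and
    rescaled so that the sampled matrix S satisfies E[S^T S] = A^T A.
    """
    row_norms_sq = np.sum(A ** 2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    idx = np.random.default_rng(4).choice(len(A), size=n_samples, p=probs)
    S = A[idx] / np.sqrt(n_samples * probs[idx])[:, None]
    _, s, Vt = np.linalg.svd(S, full_matrices=False)
    return s[:k], Vt[:k]      # approximate top-k singular values / right vectors

# Example: a 20,000 x 200 matrix with a decaying spectrum.
rng = np.random.default_rng(5)
M = rng.normal(size=(20_000, 200)) * np.linspace(1.0, 0.01, 200)
vals, vecs = length_squared_svd(M, k=10, n_samples=500)
```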
    71. QUIC-SVD speedup: 38 days → 1.4 hrs at 10% relative error; 40 days → 2 min at 10% relative error.
    72. Optimization. How it appears: support vector machine, maximum variance unfolding, robust L2 estimation. Common methods: interior point, Newton's method. Mathematical challenges: ML-specific objective functions, large numbers of variables/constraints, global optimization, parallelism. Mathematical topics: optimization theory, linear algebra, convex analysis.
    73. Interesting method: sequential minimal optimization (SMO) [Platt 1999], much more efficient than interior-point for SVM QPs. Interesting method: stochastic quasi-Newton [Schraudolph 2007], does not require a scan of the entire data. Interesting method: sub-gradient methods [Vishwanathan and Smola 2006], handle kinks in regularized risk functionals. Interesting method: approximate inverse preconditioning using QUIC-SVD for energy minimization and interior-point [March, Vasiloglou, Holmes, Gray], could potentially treat a large number of optimization problems.
    74. Now fast! (Legend: very fast / as fast as possible (conjecture).) Querying: nearest-neighbor, spherical range-search, orthogonal range-search, all-nn. Density estimation: kernel density estimation, mixture of Gaussians. Regression: linear regression, kernel regression, Gaussian process regression. Classification: nearest-neighbor classifier, nonparametric Bayes classifier, support vector machine. Dimension reduction: principal component analysis, non-negative matrix factorization, kernel PCA, maximum variance unfolding. Outlier detection: by robust L2 estimation. Clustering: k-means, mean-shift, hierarchical clustering ("friends-of-friends"). Time series analysis: Kalman filter, hidden Markov model, trajectory tracking. Feature selection and causality: LASSO regression, L1 support vector machine, Gaussian graphical models, discrete graphical models. 2-sample testing and matching: n-point correlation, bipartite matching.
    75. Astronomical applications. All-k-nearest-neighbors: O(N^2) → O(N), exact; used in [Budavari et al., in prep]. Kernel density estimation: O(N^2) → O(N), relative error; used in [Balogh et al. 2004]. Nonparametric Bayes classifier (KDA): O(N^2) → O(N), exact; used in [Richards et al. 2004, 2009], [Scranton et al. 2005]. n-point correlations: O(N^n) → O(N^log n), exact; used in [Wake et al. 2004], [Giannantonio et al 2006], [Kulkarni et al 2007].
    76. Astronomical highlights. Dark energy evidence, Science 2003, Top Scientific Breakthrough of the Year (n-point); 2007: biggest 3-point calculation ever. Cosmic magnification verification, Nature 2005 (nonparametric Bayes classifier); 2008: largest quasar catalog ever.
    77. A few others to note (same method list as slide 74; legend: very fast / as fast as possible (conjecture)).
    78. How to do Machine Learning on Massive Astronomical Datasets? (The same three-step outline, repeated.)
    79. Keep in mind the machine. Memory hierarchy: cache, RAM, out-of-core. Dataset bigger than one machine: parallel/distributed. Everything is becoming multicore. Cloud computing: software as a service.
    80. Keep in mind the overall system. Databases can be more useful than ASCII files (e.g. CAS). Workflows can be more useful than brittle perl scripts. Visual analytics connects visualization/HCI with data analysis (e.g. In-SPIRE).
    81. Our upcoming products. MLPACK: "the LAPACK of machine learning", Dec. 2008 [FASTlab]. THOR: "the MapReduce of Generalized N-body Problems", Apr. 2009 [Boyer, Riegel, Gray]. CAS Analytics: fast data analysis in CAS (SQL Server), Apr. 2009 [Riegel, Aditya, Krishnaiah, Jakka, Karnik, Gray]. LogicBlox: all-in-one business intelligence [Kanetkar, Riegel, Gray].
    82. Keep in mind the software complexity. Automatic code generation (e.g. MapReduce). Automatic tuning (e.g. OSKI). Automatic algorithm derivation (e.g. SPIRAL, AutoBayes) [Gray et al. 2004; Bhat, Riegel, Gray, Agarwal].
    83. The end. We have (or will have) fast algorithms for most data analysis methods in MLPACK. Many opportunities for applied math and computer science in large-scale data analysis. Caveat: must treat the right problem. Computational astronomy workshop and large-scale data analysis workshop coming soon. Alexander Gray [email_address] (email is best; webpage is sorely out of date).
