
Ivy Zhu, Research Scientist, Intel at MLconf SEA - 5/01/15


Model-Based Machine Learning for Real-Time Brain Decoding: Neurofeedback derived from real-time functional magnetic resonance imaging (rtfMRI) is promising for both scientific applications, such as uncovering hidden brain networks that respond to stimuli, and clinical applications, such as helping people cope with brain disorders ranging from addiction to autism. One of the greatest challenges in applying machine learning to real-time brain “decoding” is that traditional methods fit per-voxel parameters, leading to large computational problems on relatively small datasets. As such, it is easy to over-fit parameters to noise rather than the desired signals. Bayesian model-based hierarchical topographic factor analysis (HTFA) solves this problem by uncovering low-dimensional representations (latent factors) of brain images, fitting parameters for latent factors (rather than voxels) while removing the false assumption that all voxels are independent. In this talk, we’ll discuss the promise of using this and other model-based machine learning to better understand full-brain activity and functional connectivity. And we’ll show how Intel Labs and its partners are combining neuroscience and computer science expertise to further extend such algorithms for real-time brain decoding.


  1. Model-based machine learning for real-time brain decoding (Ivy Zhu, Intel Labs)
  2. (image-only slide, no text)
  3. Why bother?
  4. Functional MRI (fMRI)
     • Metabolic brain vs. anatomical brain
     • Non-invasive observation
     • Observation-based inference
  5. Brain Image Analysis/Decoding
     • Huge amount of data
       • 1 volume per scan period (1-2 s)
       • 100K-150K voxels per volume
       • 100s-1000s of scans per experiment
     • Sophisticated preprocessing is needed to denoise
       • Thermal and system noise from the scanner hardware
       • Physiological processes: head motion, respiration, heartbeat, etc.
       • Neuronal activity related to non-task-related brain processes
     • Prone to overfitting: typically the number of observations < the number of features
  6. General Linear Model (GLM)
     • Preprocessing to denoise: realignment and smoothing (kernel) of the image time-series, normalisation to a template
     • GLM: Y = ( Σ_m h_m ∗ S_m ) + ε, with h_m^i = b_i · β_m^i
       • S_m: design matrix; b_i: the haemodynamic response function (HRF) and its partial derivatives
     • Statistical inference yields a statistical parametric map (SPM)
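The slide's GLM is easy to make concrete. Below is a minimal sketch in Python/NumPy on synthetic data: a stimulus regressor is convolved with a rough canonical HRF, a per-voxel least-squares fit of Y = Xβ + ε is computed, and a simple t-like map stands in for the SPM. The HRF approximation, sizes, and variable names are illustrative, not taken from the talk.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Rough canonical double-gamma haemodynamic response function (illustrative)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

TR = 2.0                                  # scan period in seconds (1-2 s per volume)
n_scans, n_voxels = 200, 1000
t = np.arange(0, 32, TR)                  # HRF support, ~32 s

# One task regressor: stimulus onsets convolved with the HRF (one column of the design matrix)
stimulus = np.zeros(n_scans)
stimulus[::20] = 1.0
regressor = np.convolve(stimulus, hrf(t))[:n_scans]

# Design matrix: task regressor plus an intercept
X = np.column_stack([regressor, np.ones(n_scans)])

# Y: scans x voxels matrix; synthetic noise stands in for preprocessed fMRI data
Y = np.random.randn(n_scans, n_voxels)

# Per-voxel least-squares fit of Y = X @ beta + eps (one beta column per voxel)
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# A crude statistical map: t-like statistic for the task regressor at every voxel
residuals = Y - X @ beta
sigma = residuals.std(axis=0, ddof=X.shape[1])
spm_like = beta[0] / (sigma + 1e-12)
```

Note that this fits every voxel independently, which is exactly the assumption the next slides call into question.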
  7. Voxels are not independent. (Haxby et al. (2001), Science)
  8. Brain networks are complicated and dynamic. (Turk-Browne, N.B. (2013). Functional interactions as big data in the human brain. Science 342, 580-584.)
  9. Can we have a model that describes local and global spatial dependencies, as well as dynamic brain networks?
  10. Topographic Factor Analysis (TFA) (Manning JR, Ranganath R, Norman KA, Blei DM (2014). Topographic Factor Analysis: A Bayesian Model for Inferring Brain Networks from Neural Data. PLoS ONE 9(5): e94914. doi:10.1371/journal.pone.0094914)
  11. TFA Matrix Representation (Figure: the brain-image data matrix written as weights times factor images, illustrating local spatial dependencies, global dependencies, and brain networks.)
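As a concrete (and entirely illustrative) reading of this matrix picture, the sketch below builds a toy scans-by-voxels matrix Y as per-image weights W times factor images F, where each factor image is a radial basis function with a center and width; correlations between the factors' weight time courses are what TFA reads out as a brain network. The names and toy geometry are my own, not code from the talk.

```python
import numpy as np

def rbf_image(voxel_coords, center, width):
    """One spherical RBF factor evaluated at every voxel location (a row of F)."""
    d2 = np.sum((voxel_coords - center) ** 2, axis=1)
    return np.exp(-d2 / width)

# Toy geometry: V voxels on a 10x10x10 grid, K latent factors, T images
grid = np.stack(np.meshgrid(*[np.arange(10.0)] * 3, indexing="ij"), axis=-1).reshape(-1, 3)
V, K, T = grid.shape[0], 5, 50

centers = np.random.uniform(0, 10, size=(K, 3))   # factor centers
widths = np.random.uniform(2, 6, size=K)          # factor widths

# F (K x V): factor images -- this is where local spatial dependencies live
F = np.stack([rbf_image(grid, c, w) for c, w in zip(centers, widths)])

# W (T x K): per-image weights; the data matrix is approximated as Y ~ W @ F
W = np.random.randn(T, K)
Y = W @ F + 0.1 * np.random.randn(T, V)

# Correlations between the factors' weight time courses give a K x K "brain network"
network = np.corrcoef(W.T)
```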
  12. TFA discovers latent factors. (Manning JR, Ranganath R, Norman KA, Blei DM (2014). Topographic Factor Analysis: A Bayesian Model for Inferring Brain Networks from Neural Data. PLoS ONE 9(5): e94914. doi:10.1371/journal.pone.0094914)
  13. TFA discovers brain networks. (Manning JR, Ranganath R, Norman KA, Blei DM (2014). Topographic Factor Analysis: A Bayesian Model for Inferring Brain Networks from Neural Data. PLoS ONE 9(5): e94914. doi:10.1371/journal.pone.0094914)
  14. How can we discover factors common amongst humans while preserving key individual differences?
  15. Hierarchical Topographic Factor Analysis (HTFA) (Manning JR, Stachenfeld K, Ranganath R, Turk-Browne N, Norman KA, Blei DM. A probabilistic approach to full-brain functional connectivity. Submitted to PNAS.)
  16. Graphical Model for HTFA (Manning JR, Stachenfeld K, Ranganath R, Turk-Browne N, Norman KA, Blei DM. A probabilistic approach to full-brain functional connectivity. Submitted to PNAS.) (Figure: plate notation over subjects, trials, and V voxels; y = observed voxel activations; latent factors with centers µ and widths; per-image weights; individual differences vs. global factors.)
  17. HTFA Inference Algorithm
      while global template not converged and nIter < maxOuterIter do
          for each subject do
              while individual factors not converged and mIter < maxInnerIter do
                  Estimate new weight matrix based on existing centers/widths
                  Estimate new centers/widths based on existing weights
                  mIter++
              end
              Update global template based on subject's new centers/widths
          end
          nIter++
      end
      for each subject do
          Update weight matrix based on converged global template
      end
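A hedged Python sketch of the inner alternation for one subject: ordinary least squares for the weight matrix given the current centers/widths, and SciPy's generic nonlinear least-squares solver for the centers/widths given the current weights (standing in for the constrained MKL solver mentioned on a later slide). The Bayesian priors and the global-template update of real HTFA are omitted, and helper names such as `factor_images` and `fit_subject` are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def factor_images(coords, centers, widths):
    """K x V matrix of RBF factor images from centers/widths (as in TFA)."""
    d2 = ((coords[None, :, :] - centers[:, None, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / widths[:, None])

def fit_subject(Y, coords, centers, widths, inner_iters=5):
    """Alternate weight and center/width updates for one subject (illustrative sketch)."""
    K = centers.shape[0]
    W = None
    for _ in range(inner_iters):
        # (1) Weights given current centers/widths: ordinary least squares
        F = factor_images(coords, centers, widths)
        W, *_ = np.linalg.lstsq(F.T, Y.T, rcond=None)          # K x T

        # (2) Centers/widths given current weights: nonlinear least squares
        def residual(params):
            c = params[:3 * K].reshape(K, 3)
            w = np.abs(params[3 * K:]) + 1e-3                  # keep widths positive
            return (Y - W.T @ factor_images(coords, c, w)).ravel()

        sol = least_squares(residual, np.concatenate([centers.ravel(), widths]), max_nfev=50)
        centers = sol.x[:3 * K].reshape(K, 3)
        widths = np.abs(sol.x[3 * K:]) + 1e-3
    return W.T, centers, widths                                # T x K weights, updated factors
```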
  18. In essence, TFA/HTFA is a type of factor analysis. How does it compare with other factor analyses?
  19. TFA/HTFA vs. PCA vs. ICA
      • Commonality: all decompose observed brain images into a weighted sum of components
      • Differences:
        • PCA and ICA emphasize the orthogonality or independence of components, so they cannot capture dynamic brain networks
        • TFA/HTFA relax the orthogonality/independence requirement and, with a closed-form factor function, can discover richer information from brain images: local dependencies, global dependencies, and dynamic brain networks
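For a concrete contrast, here is a small scikit-learn sketch decomposing the same kind of scans-by-voxels matrix with PCA and ICA; the data and variable names are synthetic and illustrative, not from the talk.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

T, V, K = 50, 1000, 5
Y = np.random.randn(T, V)             # stand-in for a preprocessed scans-by-voxels matrix

# PCA: mutually orthogonal spatial components, ordered by explained variance
pca = PCA(n_components=K)
W_pca = pca.fit_transform(Y)          # T x K weights (time courses)
maps_pca = pca.components_            # K x V orthogonal spatial maps

# ICA: statistically independent sources (here, K independent time courses)
ica = FastICA(n_components=K, random_state=0, max_iter=500)
S_ica = ica.fit_transform(Y)          # T x K independent time courses
maps_ica = ica.mixing_.T              # K x V spatial maps (columns of the mixing matrix)

# In both cases the spatial maps are unconstrained V-dimensional vectors; TFA/HTFA
# instead parameterize each map as an RBF (center, width) inside a generative model,
# which is what lets them express local/global dependencies and dynamic networks.
```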
  20. How can we bring HTFA into reality?
  21. Intel-Princeton Collaboration
  22. Bringing HTFA to Reality: two initiatives
      • Reduce the reconstruction error for a small number of factors (K < 10) to below 5%
      • Reduce the overall execution time of a key case study (10 subjects, 10 sources, 200 images/subject) to under 5 minutes
  23. HTFA reconstruction error before optimization: results are already good when the number of factors is large, but more optimization is needed when the number of factors is small.
  24. HTFA reconstruction error is smaller. (Figure: global factor centers, x vs. y coordinates, before and after optimization.)
  25. HTFA reconstruction error is smaller. (Figure: factor-by-factor connectivity matrices comparing the true connectivity with connectivity estimated before and after optimization.)
  26. Methods for Speeding up HTFA
      • Used the Intel Math Kernel Library (MKL) where appropriate, e.g., single/double-precision nonlinear least-squares solvers with and without constraints
      • Used thread-level parallelism
      • Optimized matrix operation order to better utilize cache locality (see the sketch below)
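The "matrix operation order" bullet comes down to associativity and memory traffic. The slide shows no code, so the tiny NumPy illustration below is my own; note also that a NumPy build linked against MKL will thread the large products automatically, which is one form the thread-level parallelism can take.

```python
import numpy as np

T, K, V = 200, 10, 100_000
W = np.random.randn(T, K)             # per-image factor weights
F = np.random.randn(K, V)             # factor images
x = np.random.randn(V)                # any voxel-space vector the pipeline multiplies by

# The same product grouped two ways:
#   (W @ F) @ x  materializes a T x V intermediate: ~T*K*V multiply-adds plus heavy memory traffic
#   W @ (F @ x)  never forms the big matrix:        ~K*V + T*K multiply-adds
slow = (W @ F) @ x
fast = W @ (F @ x)
assert np.allclose(slow, fast)
```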
  27. HTFA Speedup Results: 3x to 10x speedup after optimization. (Chart: normalized execution time before vs. after optimization for three datasets, varying #factors, #subjects, and #images/subject.)
  28. Recap
      • Real-time brain decoding can save lives!
      • Bayesian model-based HTFA is promising for decoding real-time fMRI data
      • Intel is working with Princeton to bring real-time full-brain decoding closer to reality
  29. (image-only slide, no text)
