Machine learning is geared towards prediction. However, aside from diagnosis or prognosis in the clinic, cognitive neuroimaging strives to uncover insights from the data, rather than to minimize prediction error. I review various inferences on brain function that have been drawn using pattern-recognition techniques, focusing on decoding. In particular, I discuss using generalization as a test for information, multivariate analysis to interpret overlapping activation patterns, and decoding for principled reverse inference. For each, I give both a statistical view and a cognitive-imaging view.
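The first point, generalization as a test for information, can be sketched with scikit-learn on synthetic data (the "voxel" data and effect here are made up for illustration): if a decoder generalizes above chance, as assessed against a permutation-based null distribution, the pattern carries information about the conditions.

```python
# Decoding accuracy as a test for information: above-chance generalization,
# assessed against label permutations, indicates the "voxel" pattern carries
# information about the condition labels. Synthetic data for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, permutation_test_score

rng = np.random.RandomState(0)
n_samples, n_voxels = 80, 50
y = np.repeat([0, 1], n_samples // 2)
X = rng.randn(n_samples, n_voxels)
X[y == 1, :5] += 1.0  # condition 1 shifts a few "voxels"

clf = LogisticRegression(max_iter=1000)
score, perm_scores, pvalue = permutation_test_score(
    clf, X, y, cv=StratifiedKFold(5), n_permutations=200, random_state=0
)
print(f"decoding accuracy: {score:.2f}, permutation p-value: {pvalue:.3f}")
```

The permutation scores estimate the chance level empirically; a small p-value rejects the null that the labels carry no decodable information.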
Scikit-learn and nilearn: Democratisation of machine learning for brain imaging (Gael Varoquaux)
This talk describes our efforts to bring easily usable machine learning to brain mapping. It covers both questions that machine learning can answer and two software packages developed to facilitate machine learning and its application to neuroimaging.
Better neuroimaging data processing: driven by evidence, open communities, an... (Gael Varoquaux)
My current thoughts about methods validity and design in brain imaging.
Data processing is a significant part of a neuroimaging study. The choice of corresponding methods and tools is crucial. I will give an opinionated view on a path to building better data processing for neuroimaging. I will take examples from endeavors that I contributed to: defining standards for functional-connectivity analysis, the nilearn neuroimaging tool, and the scikit-learn machine-learning toolbox, an industry standard with a million regular users. I will cover not only the technical process (statistics, signal processing, software engineering) but also the epistemology of methods development. Methods govern our results; they are more than a technical detail.
Functional-connectome biomarkers to meet clinical needs? (Gael Varoquaux)
Extracting Functional-Connectome Biomarkers with Machine Learning: a talk in the symposium "How do current predictive connectivity models meet clinicians' needs?"
This talk is a bit provocative: it first sets out a vision, before offering a few technical suggestions.
Towards psychoinformatics with machine learning and brain imaging (Gael Varoquaux)
Informatics in the psychological sciences brings fascinating challenges, as mental processes and pathologies have fuzzy definitions and are hard to quantify. Brain imaging brings rich data on the neural substrate of these concepts, yet linking the two is non-trivial.
The goal of this presentation is to put forward basic ideas of "psychoinformatics": using advanced processing on brain images to better quantify the elements of psychology.
It discusses how machine learning can bridge brain images to behavior: to better describe the mental processes involved in brain activity, or to extract biomarkers of pathologies, individual traits, or cognition.
Deep neural networks have boosted the convergence of multimedia data analytics into a unified framework shared by practitioners in natural language, vision and speech. Image captioning, lip reading and video sonorization are some of the first applications of a new and exciting field of research exploiting the generalization properties of deep neural representations. This tutorial will first review the basic neural architectures to encode and decode vision, text and audio, and then review those models that have successfully translated information across modalities. The contents of this tutorial are available at: https://telecombcn-dl.github.io/2019-mmm-tutorial/.
This talk describes the latest research in visual reasoning, in particular visual question answering, covering both images and videos, a dual-process-theories approach, and relational memory.
Deep neural networks have revolutionized the data analytics scene by improving results on diverse benchmarks with the same recipe: learning feature representations from data. These achievements have raised interest across multiple scientific fields, especially those where large amounts of data and computation are available. This change of paradigm in data analytics has several ethical and economic implications that are driving large investments, political debates and resounding press coverage under the generic label of artificial intelligence (AI). This talk will present the fundamentals of deep learning through the classic example of image classification, and point out how the same principle has been adopted for several other tasks. Finally, some of the forthcoming potentials and risks of AI will be pointed out.
Deep Learning has taken the digital world by storm. As a general-purpose technology, it is now present in all walks of life. Although the fundamental developments in methodology have been slowing down in the past few years, applications are flourishing, with major breakthroughs in Computer Vision, NLP and Biomedical Sciences. The primary successes can be attributed to the availability of large labelled data, powerful GPU servers and programming frameworks, and advances in neural architecture engineering. This combination enables rapid construction of large, efficient neural networks that scale to the real world. But the fundamental questions of unsupervised learning, deep reasoning, and rapid contextual adaptation remain unsolved. We shall call what we currently have Deep Learning 1.0, and the next possible breakthroughs Deep Learning 2.0.
This is part 1 of the Tutorial delivered at IEEE SSCI 2020, Canberra, December 1st (Virtual).
https://mcv-m6-video.github.io/deepvideo-2020/
Self-supervised techniques define surrogate tasks to train machine learning algorithms without the need for human-generated labels. This lecture reviews the state of the art in the field of computer vision, including baseline techniques based on visual feature learning from ImageNet data.
http://imatge-upc.github.io/vqa-2016-cvprw/
This thesis studies methods to solve Visual Question-Answering (VQA) tasks with a Deep Learning framework. As a preliminary step, we explore Long Short-Term Memory (LSTM) networks used in Natural Language Processing (NLP) to tackle (text-based) Question-Answering. We then modify the previous model to accept an image as an input in addition to the question. For this purpose, we explore the VGG-16 and K-CNN convolutional neural networks to extract visual features from the image. These are merged with the word embedding or with a sentence embedding of the question to predict the answer. This work was successfully submitted to the Visual Question Answering Challenge 2016, where it achieved 53.62% accuracy on the test dataset. The developed software follows best programming practices and Python code style, providing a consistent baseline in Keras for different configurations.
Scientist meets web dev: how Python became the language of data (Gael Varoquaux)
Python started as a scripting language, but now it is the new trend everywhere and in particular for data science, the latest rage of computing. It didn’t get there by chance: tools and concepts built by nerdy scientists and geek sysadmins provide foundations for what is said to be the sexiest job: data scientist.
In this talk I give a personal perspective on the progress of the scientific Python ecosystem, from numerical physics to data mining. What made Python suitable for science; Why the cultural gap between scientific Python and the broader Python community turned out to be a gold mine; And where this richness might lead us.
The talk will discuss low-level and high-level technical aspects, such as how the Python world makes it easy to move large chunks of numbers across code. It will touch upon current technical details that make scikit-learn and joblib stand out.
Personal point of view on scikit-learn: past, present, and future.
This talk gives a bit of history, mentions exciting developments, and offers a personal vision for the future.
Succeeding in academia despite doing good software (Gael Varoquaux)
Hacking academia for fun and profit
Thoughts on succeeding in academia despite doing good software
Keynote I gave at the Scipyconf Argentina 2014 conference
The advancement of science is a noble cause, and academia a fierce battlefield for tenure. Software is seen as a mere technicality, not worth a line on an academic CV. I claim that, on the contrary, software is the new medium of the scientific method. I claim that succeeding in academia can be achieved not despite writing good software but through such an accomplishment. The key is to choose the right battles and to win them.
What is the emerging role of software in the scientific workflow? Which are the software challenges that can have impact? How to balance software quality assurance and the quick turn-around random-walk of research? What does "good design" mean for research software? What Python patterns can boost productivity and reuse in exploratory scientific computing?
I will try to answer these questions, based on my personal experience of growing up to become an academic Pythonista.
Data science calls for rapid experimentation and building intuitions from the data. Yet, data science also underpins crucial decisions and operational logic. Writing production-ready and robust statistical analysis without cognitive overhead may seem a conundrum. I will explore simple, and less simple, practices for fast turnaround and consolidation of data-science code. I will discuss how these considerations led to the design of scikit-learn, which enables easy machine learning yet is used in production. Finally, I will mention some scikit-learn gems, new or forgotten.
Scikit-learn for easy machine learning: the vision, the tool, and the project (Gael Varoquaux)
Scikit-learn is a popular machine learning tool. What can it do for you? Why would you want to use it? What can you do with it? Where is it going? In this talk, I will discuss why and how scikit-learn became popular. I will argue that it is successful because of its vision: it fills an important slot in the rich ecosystem of data science. I will demonstrate how scikit-learn makes predictive analysis easy and yet versatile. I will shed some light on our development process: how do we, as a community, ensure the quality and the growth of scikit-learn?
Inter-site autism biomarkers from resting state fMRI (Gael Varoquaux)
We present an automated pipeline to learn predictive biomarkers from resting-state fMRI. We apply it to classifying autism on unseen sites, demonstrating the feasibility of biomarkers on weakly standardized functional imaging data.
We study which steps of the pipeline matter for prediction and show that 1) the choice of atlas is the most important choice, and ideally the atlas should be made of functional regions learned from the data; and 2) the "tangent space" parametrization of the connectivity is the best performer.
We conclude with general recommendations for predictive biomarkers from resting-state fMRI.
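As a rough illustration of the second finding, the "tangent space" parametrization whitens each subject's covariance matrix by a group reference and maps it through a matrix logarithm. Below is a minimal numpy sketch using a Euclidean mean as the reference point; nilearn's `ConnectivityMeasure(kind="tangent")` implements the full version (with a geometric mean), and the toy covariances here are synthetic.

```python
# Sketch of the "tangent space" parametrization of functional connectivity:
# each subject's covariance C is mapped to logm(G^{-1/2} C G^{-1/2}),
# where G is a group reference (here a simple Euclidean mean).
import numpy as np

def _eig_fun(M, fun):
    # Apply a scalar function to a symmetric matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(M)
    return (vecs * fun(vals)) @ vecs.T

def tangent_embedding(covariances):
    G = np.mean(covariances, axis=0)               # group reference point
    whitener = _eig_fun(G, lambda v: v ** -0.5)    # G^{-1/2}
    return [_eig_fun(whitener @ C @ whitener, np.log) for C in covariances]

rng = np.random.RandomState(0)
covs = []
for _ in range(4):
    A = rng.randn(6, 40)                           # 6 regions, 40 time points
    covs.append(A @ A.T / 40 + 0.1 * np.eye(6))    # well-conditioned SPD matrix
tangent = tangent_embedding(np.array(covs))
print(tangent[0].shape)                            # symmetric 6x6 tangent matrix
```

The resulting tangent matrices live in a vector space, so their entries can be fed directly to standard linear classifiers.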
Brain maps from machine learning? Spatial regularizations (Gael Varoquaux)
Pattern Recognition for NeuroImaging (PR4NI)
We will show empirically how commonly used pattern-recognition techniques, such as SVMs, provide low-quality brain maps, even though they give very good prediction accuracy. We will give an overview of recently developed techniques to impose priors on patterns that are particularly well suited to neuroimaging: selecting a small number of spatially-structured predictive brain regions. These tools reconcile machine learning with brain mapping by giving maps that are more useful for drawing neuroscientific conclusions. In addition, they are more robust to cross-individual spatial variability and thus generalize well across subjects.
Intelligent Machine On Cognitive Methodological Development (Naga Balaji)
Here I describe a development methodology for AI and intelligent systems, which the author argues leads to "Strong AI". In this view, Strong AI can be achieved using mind uploading. Such an AI would leap ahead of current technologies, helping to develop the world with high security.
Contact: nagabalajitg@gmail.com for any queries.
Processing biggish data on commodity hardware: simple Python patterns (Gael Varoquaux)
SciPy 2013 talk on simple Python patterns for efficiently processing large datasets.
The talk focuses on the patterns and the concepts rather than the implementations. The implementations can be found in the joblib and scikit-learn codebases.
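Two of the recurring patterns for processing biggish data can be sketched with joblib: disk-memoized computation, so re-running a script does not recompute expensive steps, and embarrassingly parallel loops. The cache directory and toy function below are illustrative.

```python
# joblib patterns for biggish data: on-disk memoization + parallel loops.
import tempfile
from joblib import Memory, Parallel, delayed

memory = Memory(tempfile.mkdtemp(), verbose=0)  # cache location (illustrative)

@memory.cache
def expensive_transform(x):
    # Stand-in for a costly computation; results are cached on disk,
    # so calling again with the same argument reads from the cache.
    return x ** 2

# Embarrassingly parallel map over inputs, 2 worker processes.
results = Parallel(n_jobs=2)(delayed(expensive_transform)(i) for i in range(8))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Combining the two means a crashed or re-run pipeline only recomputes what is missing from the cache.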
Talk given at PRNI 2016 for the paper https://arxiv.org/pdf/1606.06439v1.pdf
Abstract: Spatially-sparse predictors are good models for brain decoding: they give accurate predictions and their weight maps are interpretable as they focus on a small number of regions. However, the state of the art, based on total variation or graph-net, is computationally costly. Here we introduce sparsity in the local neighborhood of each voxel with social-sparsity, a structured shrinkage operator. We find that, on brain imaging classification problems, social-sparsity performs almost as well as total-variation models and better than graph-net, for a fraction of the computational cost. It also very clearly outlines predictive regions. We give details of the model and the algorithm.
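The social-sparsity idea can be illustrated in 1-D. This is a simplified sketch of the kind of neighborhood shrinkage the abstract describes (not the paper's exact 3-D operator): each weight is shrunk according to the energy of its local neighborhood, so isolated noisy weights vanish while weights in predictive clusters survive.

```python
# 1-D sketch of a social-sparsity-style shrinkage operator: the shrinkage
# factor for each coefficient depends on its neighborhood's energy rather
# than on its own magnitude alone.
import numpy as np

def social_shrinkage(w, lam, radius=1):
    out = np.zeros_like(w)
    for i in range(w.size):
        lo, hi = max(0, i - radius), min(w.size, i + radius + 1)
        energy = np.sqrt(np.sum(w[lo:hi] ** 2))       # neighborhood energy
        if energy > 0:
            out[i] = w[i] * max(0.0, 1.0 - lam / energy)
    return out

# Isolated small weights (indices 1 and 7) vs a predictive cluster (3-5).
w = np.array([0.0, 0.1, 0.0, 2.0, 2.5, 1.8, 0.0, 0.1, 0.0])
shrunk = social_shrinkage(w, lam=0.5)
print(np.round(shrunk, 2))  # isolated weights are zeroed, the cluster stays
```

In the paper's setting this operator is applied in the 3-D voxel grid inside an iterative solver, which is what makes it much cheaper than total-variation penalties.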
Connectomics: Parcellations and Network Analysis Methods (Gael Varoquaux)
Simple tutorial on methods for functional connectome analysis: learning regions, extracting functional signal, inferring the network structure, and comparing it across subjects.
Scikit-learn: statistical learning in Python (Gael Varoquaux)
A presentation on "scikit-learn", a statistical-learning (machine learning) toolkit in Python.
Philosophy and strategy of the project, as well as the API and very brief code examples.
Atlases of cognition with large-scale human brain mapping (Gael Varoquaux)
Cognitive neuroscience uses neuroimaging to identify brain systems engaged in specific cognitive tasks. However, unequivocally linking brain systems with cognitive functions is difficult: each task probes only a small number of facets of cognition, while brain systems are often engaged in many tasks. We develop a new approach to generate a functional atlas of cognition, demonstrating brain systems selectively associated with specific cognitive functions. This approach relies upon an ontology that defines specific cognitive functions and the relations between them, along with an analysis scheme tailored to this ontology. Using a database of thirty neuroimaging studies, we show that this approach provides a highly specific atlas of mental functions, and that it can decode the mental processes engaged in new tasks.
Machine learning for functional connectomes (Gael Varoquaux)
A tutorial on using machine learning for functional connectomes, for instance on resting-state fMRI. This is typically useful for population imaging: comparing traits or conditions across subjects.
Slides for my keynote at Scipy 2017
https://youtu.be/eVDDL6tgsv8
Computing has been driving forward a revolution in how science and technology can solve new problems. Python has grown to be a central player in this game, from computational physics to data science. I would like to explore some lessons learned doing science with Python as well as doing Python libraries for science. What are the ingredients that the scientists need? What technical and project-management choices drove the success of projects I've been involved with? How do these demands and offers shape our ecosystem?
In this talk, I'd like to share a few thoughts on how we code for science and innovation, with the modest goal of changing the world.
Brain reading, compressive sensing, fMRI and statistical learning in Python (Gael Varoquaux)
Talk given at Gipsa-lab on using machine learning to learn, from fMRI, brain patterns and regions related to behavior. This talk focuses on the signal and inverse-problem aspects of the equation, as well as on the software.
Measuring mental health with machine learning and brain imaging (Gael Varoquaux)
The study of mental health relies vastly on behavioral testing and questionnaires. I discuss how machine learning on large brain-imaging cohorts can open new avenues for markers of mental health. My claims are that the challenge lies in the limited amount of diagnosed conditions rather than in the heterogeneity of the conditions, and that we should turn to proxy labels. I discuss another fundamental challenge to this agenda: the external and construct validity of brain-imaging-based markers.
https://imatge.upc.edu/web/publications/exploring-eeg-object-detection-and-retrieval
This paper explores the potential for using Brain Computer Interfaces (BCI) as a relevance feedback mechanism in content-based image retrieval. We investigate whether it is possible to capture useful EEG signals to detect if relevant objects are present in a dataset of realistic and complex images. We perform several experiments using a rapid serial visual presentation (RSVP) of images at different rates (5Hz and 10Hz) on 8 users with different degrees of familiarization with BCI and the dataset. We then use the feedback from the BCI and mouse-based interfaces to retrieve objects in a subset of TRECVid images. We show that it is indeed possible to detect such objects in complex images, and also that users with previous knowledge of the dataset or experience with the RSVP outperform others. When the users have limited time to annotate the images (100 seconds in our experiments), both interfaces are comparable in performance. Comparing our best users in a retrieval task, we found that EEG-based relevance feedback outperforms mouse-based feedback. The realistic and complex image dataset differentiates our work from previous studies on EEG for image retrieval.
Estimating Functional Connectomes: Sparsity's Strength and Limitations (Gael Varoquaux)
Talk given at the OHBM 2017 education course.
I present the challenges of, and techniques for, estimating meaningful brain functional connectomes from fMRI: why sparsity in the inverse covariance leads to models that can be interpreted as interactions between regions.
Then I discuss the limitations of sparse estimators and introduce shrinkage as an alternative. Finally, I discuss how to compare multiple functional connectomes.
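The two estimators discussed can be sketched with scikit-learn on synthetic "region time series" (the data here are made up): sparse inverse covariance via the graphical lasso, whose zero entries in the precision matrix mark conditionally independent regions, and Ledoit-Wolf shrinkage as a well-conditioned alternative when sparsity assumptions do not hold.

```python
# Sparse inverse covariance (GraphicalLassoCV) vs shrinkage (LedoitWolf)
# for estimating functional connectomes from region time series.
import numpy as np
from sklearn.covariance import GraphicalLassoCV, LedoitWolf

rng = np.random.RandomState(0)
# Synthetic time series: 5 "regions", 200 time points.
X = rng.randn(200, 5)
X[:, 1] += 0.6 * X[:, 0]  # regions 0 and 1 interact directly

sparse_est = GraphicalLassoCV().fit(X)
precision = sparse_est.precision_   # zero entries = no direct interaction
shrunk_est = LedoitWolf().fit(X)    # dense but well-conditioned covariance
print(precision.round(2))
print(shrunk_est.covariance_.shape)
```

The off-diagonal precision entry linking the two coupled regions stays non-zero, while edges between independent regions are driven toward zero by the sparsity penalty.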
Individual Brain Charting, a high-resolution fMRI dataset for cognitive mapping (Ana Luísa Pinho)
Linking brain systems and mental functions requires accurate descriptions of behavioral tasks and fine demarcations of brain regions. Functional Magnetic Resonance Imaging (fMRI) has contributed to the investigation of brain regions involved in a variety of cognitive processes. However, to date, no data collection has systematically addressed the functional mapping of cognitive mechanisms at a fine spatial scale. The Individual Brain Charting (IBC) project is a high-resolution multi-task fMRI dataset that intends to provide the objective basis toward a comprehensive functional atlas of the human brain. The data refer to a permanent cohort performing many different tasks. The large amount of task-fMRI data on the same subjects yields a precise mapping of the underlying functions, free from both inter-subject and inter-site variability. The first release of the IBC dataset consists of data acquired from thirteen participants during performance of a dozen tasks. Raw data from this release are publicly available in the OpenNeuro repository, and derived statistical maps can be found in NeuroVault. These maps reveal a successful cognitive encoding of many psychological domains in large areas of the human brain. Indeed, main findings of the original studies were replicated at higher resolution. Our results thus provide a comprehensive revision of the neural correlates underlying behavior, highlighting nonetheless the spatial variability of functional signatures between participants. In addition, this dataset supports investigations using alternative approaches to group-level analysis of task-specific studies. For instance, such a rich task-wise dataset can be applied to mega-analytic encoding models towards the development of a brain-atlasing framework, by systematically mapping functional signatures associated with the cognitive components of the tasks.
Evaluating machine learning models and their diagnostic valueGael Varoquaux
Model evaluation is, in my opinion, the most overlooked step of the machine-learning pipeline. Reliably estimating a model's performance for a given purpose is crucial and difficult. In this talk, I first discuss choosing a metric informative for the application, stressing the importance of class prevalence in classification settings. I then discuss procedures to estimate generalization performance, drawing a distinction between evaluating a learning procedure and evaluating a prediction rule, and discussing how to give confidence intervals on the performance estimates.
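To make the prevalence point concrete, here is a minimal self-contained sketch (plain Python, toy labels of our own making, not from the talk): on a 90/10 class split, a classifier that always predicts the majority class scores 0.9 on accuracy but only chance level on balanced accuracy.

```python
# Why class prevalence matters: on imbalanced data, always predicting
# the majority class looks good on accuracy but not on balanced accuracy.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    # Average of per-class recalls: insensitive to class prevalence.
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

y_true = [0] * 90 + [1] * 10   # 90% prevalence of class 0
y_pred = [0] * 100             # "always predict the majority class"

print(accuracy(y_true, y_pred))           # 0.9 -- looks great
print(balanced_accuracy(y_true, y_pred))  # 0.5 -- chance level
```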
A tutorial on machine learning to build prediction models with missing values.
The slides cover both theoretical results (statistical learning) and practical advice, with a focus on implementation in Python with scikit-learn.
Dirty data science machine learning on non-curated dataGael Varoquaux
These slides are a one-hour course on machine learning with non-curated data.
According to industry surveys, the number one hassle of data scientists is cleaning the data to analyze it. Here, I survey what kinds of "dirtiness" force time-consuming cleaning. We then cover two specific aspects of dirty data: non-normalized entries and missing values. I show how, for these two problems, machine-learning practice can be adapted to work directly on a data table without curation. The normalization problem can be tackled by adapting methods from natural language processing. The missing-values problem leads us to revisit classic statistical results in the setting of supervised learning.
Representation learning in limited-data settingsGael Varoquaux
A 4-hour long didactic course on simple notions of representations and how to use them in limited-data settings:
- A supervised learning point of view, giving intuitions and math on what representations are and why they matter
- Building simple unsupervised learning models to extract representation: from matrix decomposition for signals to embeddings of entities
- Evaluating models in limited-data settings, often a bottleneck
This slide-deck was given as a course at the 2021 DeepLearn summer school.
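A minimal sketch of the matrix-decomposition side of the course, on synthetic data of our own making: correlated signals generated from a few latent factors are compressed by TruncatedSVD into a low-dimensional representation usable as downstream features.

```python
# Matrix factorization as a simple representation learner.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.RandomState(0)
latent = rng.randn(200, 3)                       # 3 true underlying factors
mixing = rng.randn(3, 50)
X = latent @ mixing + 0.1 * rng.randn(200, 50)   # 50 correlated signals

svd = TruncatedSVD(n_components=3).fit(X)
codes = svd.transform(X)                         # 3-dimensional representation
print(codes.shape, svd.explained_variance_ratio_.sum())
```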
Similarity encoding for learning on dirty categorical variablesGael Varoquaux
For statistical learning, categorical variables in a table are usually considered as discrete entities and encoded separately to feature vectors, e.g., with one-hot encoding. "Dirty" non-curated data gives rise to categorical variables with a very high cardinality but redundancy: several categories reflect the same entity. In databases, this issue is typically solved with a deduplication step. We show that a simple approach that exposes the redundancy to the learning algorithm brings significant gains. We study a generalization of one-hot encoding, similarity encoding, that builds feature vectors from similarities across categories. We perform a thorough empirical validation on non-curated tables, a problem seldom studied in machine learning. Results on seven real-world datasets show that similarity encoding brings significant gains in prediction in comparison with known encoding methods for categories or strings, notably one-hot encoding and bag of character n-grams. We draw practical recommendations for encoding dirty categories: 3-gram similarity appears to be a good choice to capture morphological resemblance. For very high-cardinality, dimensionality reduction significantly reduces the computational cost with little loss in performance: random projections or choosing a subset of prototype categories still outperforms classic encoding approaches.
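A toy sketch of the idea (our own simplified functions, not the paper's reference implementation): encode a dirty category by its character 3-gram similarity to a set of prototype categories, so that a misspelled entry still lands near the right one.

```python
# Similarity encoding in miniature: Jaccard similarity on character 3-grams.

def ngrams(s, n=3):
    s = " " + s.lower() + " "   # pad so short strings still yield 3-grams
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def similarity(a, b):
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

def similarity_encode(value, prototypes):
    """Feature vector: similarity of `value` to each prototype category."""
    return [similarity(value, p) for p in prototypes]

prototypes = ["police officer", "fire fighter", "nurse"]
# A misspelled entry still gets its highest similarity on the right prototype:
vec = similarity_encode("police offiser", prototypes)
print(vec)
```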
Simple representations for learning: factorizations and similarities Gael Varoquaux
Real-life data seldom comes in the ideal form for statistical learning. This talk focuses on high-dimensional problems for signals and discrete entities: when dealing with many, correlated signals or entities, it is useful to extract representations that capture these correlations.
Matrix factorization models provide simple but powerful representations. They are used for recommender systems across discrete entities such as users and products, or to learn good dictionaries to represent images. However, they entail large computing costs on very high-dimensional data, such as databases with many products or high-resolution images. I will present an algorithm to factorize huge matrices based on stochastic subsampling that gives up to 10-fold speed-ups [1].
With discrete entities, the explosion of dimensionality may be due to variations in how a smaller number of categories are represented. Such a problem of "dirty categories" is typical of uncurated data sources. I will discuss how encoding this data based on similarities recovers a useful category structure with no preprocessing. I will show how it interpolates between one-hot encoding and techniques used in character-level natural language processing.
[1] Stochastic subsampling for factorizing huge matrices, A Mensch, J Mairal, B Thirion, G Varoquaux, IEEE Transactions on Signal Processing 66 (1), 113-128
[2] Similarity encoding for learning with dirty categorical variables. P Cerda, G Varoquaux, B Kégl Machine Learning (2018): 1-18
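Not the subsampled algorithm of [1], but a minimal example of the online matrix-factorization family it accelerates: scikit-learn's MiniBatchDictionaryLearning fits a dictionary by streaming small batches, which is what makes factorizing large matrices tractable.

```python
# Online dictionary learning: factorize X in small batches. Synthetic data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(500, 30)

dico = MiniBatchDictionaryLearning(n_components=10, batch_size=20,
                                   random_state=0).fit(X)
print(dico.components_.shape)   # the learned dictionary: components x features
```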
A tutorial on Machine Learning, with illustrations for MR imagingGael Varoquaux
Machine learning builds predictive models from data. It is massively used on medical images these days, for applications ranging from segmentation to diagnosis.
This is an introductory tutorial to machine learning, giving intuitions from the statistical point of view. It introduces the methodology, the concepts behind the central models, the validation framework, and some caveats to watch for.
It also discusses applications to drawing conclusions from brain imaging, and uses these applications to highlight technical aspects of running machine learning models on high-dimensional data such as medical images.
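The core of the validation framework in a few lines, on synthetic stand-in data (not real images): fit on one split, score on held-out data.

```python
# The basic train/test methodology behind the tutorial's validation framework.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.randn(100, 20)               # stand-in for image-derived features
y = (X[:, 0] > 0).astype(int)        # a simple synthetic label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LinearSVC().fit(X_train, y_train)
score = clf.score(X_test, y_test)    # accuracy on held-out data
print(score)
```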
Computational practices for reproducible scienceGael Varoquaux
Reconciling bleeding-edge scientific results and reproducible research may seem a conundrum in our fast-paced, high-pressure academic world. I discuss the practices that I have found useful in computational work. At a high level, it is important to navigate the space between rapid experimentation and industrial-grade software development. I advocate adopting more and more software-engineering best practices as a project matures. I will also discuss how to turn computational work into libraries, and how to ensure the quality of the resulting libraries. I conclude with how these libraries must fit into the larger picture of research practice to yield better science.
Building a cutting-edge data processing environment on a budgetGael Varoquaux
As a penniless academic I wanted to do "big data" for science. Open source, Python, and simple patterns were the way forward. Staying on top of today's growing datasets is an arms race. Data-analytics machinery (clusters, NoSQL, visualization, Hadoop, machine learning, ...) can spread a team's resources thin. Focusing on simple patterns, lightweight technologies, and a good understanding of the applications gets us most of the way for a fraction of the cost.
I will present a personal perspective on ten years of scientific data processing with Python. What are the emerging patterns in data processing? How can modern data-mining ideas be used without a big engineering team? What constraints and design trade-offs govern software projects like scikit-learn, Mayavi, or joblib? How can we make the most out of distributed hardware with simple framework-less code?
Scikit-learn: apprentissage statistique en Python. Créer des machines intelli...Gael Varoquaux
High-level talk about machine learning: the statistical and computational challenges, and how they are addressed by the scikit-learn Python toolkit. In French.
Machine learning and cognitive neuroimaging: new tools can answer new questions
1. Machine learning and cognitive neuroimaging:
new tools can answer new questions
Gaël Varoquaux
How machine learning is shaping cognitive neuroimaging
[Varoquaux and Thirion 2014]
2. Cognitive neuroscience: linking psychology and
neuroscience (neural implementations)
Vision: A computational investigation into the human representation
and processing of visual information [Marr 1982]
G Varoquaux 2
3-4. Machine learning: computational statistics for prediction
(out-of-sample properties)
Paradigm shift: the dimensionality of data grows, enabling richer models
Open-ended questions ⇒ large # features
From parameter inference to prediction
[figure: fitting y as a function of x]
Understanding, not predicting:
danger of solving the wrong problem; lost in formalization
G Varoquaux 3
6-10. Statistics vs. machine learning: "statistical machine learning"
Statistics           | Machine learning
Hypothesis testing   | Prediction
T-test               | Tests on prediction, cross-validation
In sample            | Out of sample
Parametric           | Non-parametric
Non-parametric tests | Probabilistic modeling
Few parameters       | Many parameters
Univariate           | Multivariate
GLM = correlations   | Naive Bayes, univariate selection
Differences mostly cultural: it's a continuum
G Varoquaux 4
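The in-sample / out-of-sample contrast of the table, in code (synthetic data of our own; scipy and scikit-learn assumed available): a t-test asks whether group means differ in the sample at hand, while cross-validated decoding asks whether the difference generalizes to unseen samples.

```python
# In-sample hypothesis test vs out-of-sample prediction test, on toy data.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
a = rng.randn(50) + 0.8   # condition A, shifted mean
b = rng.randn(50)         # condition B

# In-sample: do the means differ on these data?
t, p = stats.ttest_ind(a, b)

# Out-of-sample: does the difference predict condition on held-out samples?
X = np.concatenate([a, b])[:, None]
y = np.array([1] * 50 + [0] * 50)
acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(p, acc)
```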
19-21. 1 Uncovering neural coding: richer models
Insights on breaking down cognitive functions into atomic steps
[Hubel and Wiesel 1962]: neurons receptive to Gabors (edges)
[Logothetis... 1995]: shapes in inferior temporal cortex
Machine learning: computer-vision models mapped to brain activity [Yamins... 2014]
G Varoquaux 8
23. Machine learning for encoding models
Richer models of encoding
capture fine descriptions of behavior / stimuli
Require forgoing the contrast methodology
Is this a good or a bad thing?
G Varoquaux 10
24. 1 Models of the visual system
[diagram: Image → V1 cortex → V2 cortex → inferior temporal cortex → fusiform face area → "Jack?"]
Is there a “face” region? A “foot” region? A “left big toe” region?
G Varoquaux 11
25-28. 1 Uncovering neural coding: cognitive oppositions
Is there a “face” region? A “foot” region? A “left big toe” region?
[figure: contrasted stimulus images, one condition vs the other]
Mapping relies on cognitive subtraction
Bound to mental process decomposition
G Varoquaux 12
29-30. 1 Decomposing visual stimuli
Low-level visual cortex is tuned to natural image statistics [Olshausen et al. 1996]
What drives high-level representations? Convolutional nets
G Varoquaux 13
33-35. 2 Increased sensitivity: an omnibus test
“Given the goal of detecting the presence of a particular mental representation in the brain, the primary advantage of MVPA methods over individual-voxel-based methods is increased sensitivity.” [Norman... 2006]
Is there “information” about a stimulus in a given region?
“However, these maps are not guaranteed to include all the voxels that are involved in representing the categories of interest.” [Norman... 2006]
G Varoquaux 16
37-38. 2 Generalization as a test: cross-validation
[figure: low- vs high-complexity fits of y as a function of x]
High-dimensional models ⇒ important to test on independent data, to control for model complexity
G Varoquaux 18
39. 2 Generalization as a test: cross-validation
[figure: bias of the accuracy estimate for different cross-validation strategies (leave one sample out; leave one subject/session out; 20% left out with 3, 10, or 50 splits), in intra- and inter-subject settings]
No silver bullet. Poster 3829, Oral Th 12:45
G Varoquaux 18
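A sketch of why the choice of cross-validation strategy matters (all data synthetic, numbers illustrative): when the decoding direction is subject-specific, shuffled within-subject folds look better than leave-subject-out folds, which measure true inter-subject generalization.

```python
# Intra- vs inter-subject cross-validation on synthetic multi-subject data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.RandomState(0)
n_subjects, n_per, n_dim = 5, 40, 20
subjects = np.repeat(np.arange(n_subjects), n_per)
y = np.tile([0, 1], n_subjects * n_per // 2)

directions = rng.randn(n_subjects, n_dim)   # subject-specific neural code
X = (2 * y - 1)[:, None] * directions[subjects] + rng.randn(len(y), n_dim)

clf = LogisticRegression()
# Shuffled folds mix each subject's samples between train and test:
intra = cross_val_score(clf, X, y,
                        cv=KFold(5, shuffle=True, random_state=0)).mean()
# GroupKFold leaves whole subjects out:
inter = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5),
                        groups=subjects).mean()
print(intra, inter)   # intra-subject folds are over-optimistic here
```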
40-41. 2 Behavioral predictions as a test
Increase “cognitive resolution”: one voxel's information is not enough to distinguish many cognitive states ⇒ analysis combining information across voxels
Interpreting overlapping activations: psychology is not interested in where a task creates activation, but in whether two tasks create activations in the same areas
G Varoquaux 19
42-44. 2 Inference in cognitive neuroimaging [Poldrack 2006, Henson 2006]
What is the neural support of a function?
What is the function of a given brain module? Reverse inference
Brain mapping = task-evoked activity + crafting “contrasts” to isolate effects
G Varoquaux 20
45. 2 Inference in cognitive neuroimaging
[Kanwisher... 1997, Gauthier... 2000, Hanson and Halchenko 2008]
What is the neural support of a function?
What is the function of a given brain module?
Reverse inference
Is there a face area?
G Varoquaux 20
46. 2 Inference in cognitive neuroimaging
[Poldrack... 2009, Schwartz... 2013]
What is the neural support of a function?
What is the function of a given brain module?
Reverse inference
Decoding: Find regions that
predict observed cognition
G Varoquaux 20
47-48. 2 Decoding for reverse inference [Poldrack... 2009, Schwartz... 2013]
Prediction = proxy for implication
Need large cognitive coverage
Interpretation of the “grandmother neuron”: “[indications that] more than one neuron responds to one concept and [...] neurons do not necessarily respond to only one concept are given by the data itself” [Quian Quiroga and Kreiman 2010]
G Varoquaux 21
49. 2 Brain decoding with linear models
Design matrix × coefficients = target
The coefficients are brain maps
G Varoquaux 22
50-52. 2 Brain decoding to recover predictive regions?
Face vs house visual recognition [Haxby... 2001]
SVM: 26% error; sparse model: 19%; ridge: 15%
The best predictor outlines the worst regions; the best maps predict worst
G Varoquaux 23
53-54. 2 Decoders as estimators [Gramfort... 2013]
Inverse problem: minimize the error term
ŵ = argmin_w ℓ(y − X w)
Ill-posed: many different w give the same prediction error
Choice driven by (implicit) priors of the decoder: SVM, sparse, ridge, TV-ℓ1
Inferences rely, explicitly or implicitly, on the regions estimated by the decoder
G Varoquaux 24
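The ill-posedness, numerically (a generic sketch, not the TV-ℓ1 estimator of [Gramfort... 2013]): with more voxels than samples, adding a null-space component to the weights changes the map without changing a single prediction.

```python
# Ill-posed inverse problem: many weight maps, identical predictions.
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(20, 100)        # 20 samples, 100 voxels: underdetermined
y = rng.randn(20)

w_min = np.linalg.pinv(X) @ y  # minimum-norm solution

# A direction the data cannot see: a null-space vector of X
_, _, vt = np.linalg.svd(X)
null_dir = vt[-1]              # X @ null_dir is (numerically) zero
w_other = w_min + 10.0 * null_dir   # a very different map...

print(np.allclose(X @ w_min, y))            # fits the data
print(np.allclose(X @ w_other, X @ w_min))  # ...with identical predictions
```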
56-60. @GaelVaroquaux
Machine learning for cognitive neuroimaging
The description of cognition is hard ⇒ encoding
  Rich models depend less on paradigms
Decoding as an omnibus test
  For rich encoding models; to interpret overlapping activation; cross-validation error bars
Decoding for reverse inference
  Requires large cognitive coverage
Estimation of predictive regions is difficult
  An infinite number of maps predict as well
Software: nilearn, in Python (http://nilearn.github.io)
[Varoquaux and Thirion 2014] How machine learning is shaping cognitive neuroimaging
61. References I
I. Gauthier, M. J. Tarr, J. Moylan, P. Skudlarski, J. C. Gore, and
A. W. Anderson. The fusiform “face area” is part of a network
that processes faces at the individual level. J cognitive
neuroscience, 12:495, 2000.
A. Gramfort, B. Thirion, and G. Varoquaux. Identifying predictive
regions from fMRI with TV-L1 prior. In PRNI, page 17, 2013.
U. Güçlü and M. A. van Gerven. Deep neural networks reveal a
gradient in the complexity of neural representations across the
ventral stream. The Journal of Neuroscience, 35(27):
10005–10014, 2015.
S. J. Hanson and Y. O. Halchenko. Brain reading using full brain
support vector machines for object recognition: there is no
“face” identification area. Neural Computation, 20:486, 2008.
B. Harvey, B. Klein, N. Petridou, and S. Dumoulin. Topographic
representation of numerosity in the human parietal cortex.
Science, 341(6150):1123–1126, 2013.
62. References II
J. V. Haxby, I. M. Gobbini, M. L. Furey, ... Distributed and
overlapping representations of faces and objects in ventral
temporal cortex. Science, 293:2425, 2001.
R. Henson. Forward inference using functional neuroimaging:
Dissociations versus associations. Trends in cognitive sciences,
10:64, 2006.
D. H. Hubel and T. N. Wiesel. Receptive fields, binocular
interaction and functional architecture in the cat’s visual cortex.
The Journal of physiology, 160:106, 1962.
N. Kanwisher, J. McDermott, and M. M. Chun. The fusiform face
area: a module in human extrastriate cortex specialized for face
perception. J Neuroscience, 17:4302, 1997.
K. N. Kay, T. Naselaris, R. J. Prenger, and J. L. Gallant.
Identifying natural images from human brain activity. Nature,
452:352, 2008.
63. References III
S.-M. Khaligh-Razavi and N. Kriegeskorte. Deep supervised, but
not unsupervised, models may explain IT cortical representation.
PLoS Comput Biol, 10(11):e1003915, 2014.
N. K. Logothetis, J. Pauls, and T. Poggio. Shape representation in
the inferior temporal cortex of monkeys. Current Biology, 5:552,
1995.
D. Marr. Vision: A computational investigation into the human
representation and processing of visual information. The MIT
press, Cambridge, 1982.
T. M. Mitchell, S. V. Shinkareva, A. Carlson, K.-M. Chang, V. L.
Malave, R. A. Mason, and M. A. Just. Predicting human brain
activity associated with the meanings of nouns. Science, 320:
1191, 2008.
T. Naselaris, K. N. Kay, S. Nishimoto, and J. L. Gallant. Encoding
and decoding in fMRI. Neuroimage, 56:400, 2011.
64. References IV
K. A. Norman, S. M. Polyn, G. J. Detre, and J. V. Haxby. Beyond
mind-reading: multi-voxel pattern analysis of fMRI data. Trends
in cognitive sciences, 10:424, 2006.
J. P. O’Doherty, A. Hampton, and H. Kim. Model-based fMRI and
its application to reward learning and decision making. Annals of
the New York Academy of Sciences, 1104:35, 2007.
B. Olshausen ... Emergence of simple-cell receptive field
properties by learning a sparse code for natural images. Nature,
381:607, 1996.
R. Poldrack. Can cognitive processes be inferred from
neuroimaging data? Trends in cognitive sciences, 10:59, 2006.
R. A. Poldrack, Y. O. Halchenko, and S. J. Hanson. Decoding the
large-scale structure of brain function by classifying mental
states across individuals. Psychological Science, 20:1364, 2009.
65. References V
R. Quian Quiroga and G. Kreiman. Postscript: About grandmother
cells and jennifer aniston neurons. Psychological Review, 117:
297, 2010.
Y. Schwartz, B. Thirion, and G. Varoquaux. Mapping cognitive
ontologies to and from the brain. In NIPS, 2013.
G. Varoquaux and B. Thirion. How machine learning is shaping
cognitive neuroimaging. GigaScience, 3:28, 2014.
D. L. Yamins, H. Hong, C. F. Cadieu, E. A. Solomon, D. Seibert,
and J. J. DiCarlo. Performance-optimized hierarchical models
predict neural responses in higher visual cortex. Proc Natl Acad
Sci, page 201403112, 2014.