An introduction to the Courtois NeuroMod project: intensive brain scanning of six participants (fMRI, MEG) to help train artificial neural networks, with a focus on the first data release, cneuromod-2020.
Bellec, Cornell, 2021
1. CNeuroMod - Augmenting learning in artificial networks using human brain activity and behavior
Pierre Bellec
Département de Psychologie
pierre.bellec@criugm.qc.ca
3. Brain encoding and decoding
One way to test whether artificial neural networks (ANNs) and the brain form consistent representations is to encode brain activity from the activations of an ANN presented with the same stimuli, or to decode brain activity by predicting the corresponding ANN activity and stimulus annotations. Figure from Schrimpf et al., bioRxiv 2020, reused under CC-BY license.
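As a toy illustration of the encoding direction, here is a hypothetical sketch with purely synthetic data (all names and dimensions are made up, not CNeuroMod code): ridge regression maps ANN-layer activations to voxel time series, and prediction quality is scored with R² per voxel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 "time points" of ANN-layer activations (50 features)
# and 10 voxels whose activity is a noisy linear mixture of those features.
X = rng.standard_normal((200, 50))                      # ANN activations per stimulus
W_true = rng.standard_normal((50, 10))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 10))   # "brain" activity

# Ridge regression (closed form): W = (X'X + lambda*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(50), X.T @ Y)

# Held-out synthetic data to score out-of-sample prediction.
X_test = rng.standard_normal((100, 50))
Y_test = X_test @ W_true + 0.1 * rng.standard_normal((100, 10))
Y_pred = X_test @ W

# R^2 per voxel: 1 - residual variance / total variance.
ss_res = ((Y_test - Y_pred) ** 2).sum(axis=0)
ss_tot = ((Y_test - Y_test.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print(r2.mean())
```

Real encoding pipelines differ mainly in scale (thousands of voxels, cross-validated regularization), not in this basic structure.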
4. End-to-end brain encoding
Using massive fMRI recordings during video watching (N=1, 23 hours, Doctor Who), a convolutional vision model was trained to encode brain activity in different brain areas from frames of the movie, and recovered non-trivial filters at different levels of the network.
Figure from Seeliger et al., PLOS Computational Biology 2021, reused under CC-BY license.
5. Videogame play models - Neuron (2021)
● Participants played Pong, Enduro and Space Invaders in the scanner.
● A deep Q-network (DQN) was trained to play those games using reinforcement learning.
● Brain activity was successfully encoded across participants from the activity of the DQN.
6. The Courtois project on Neuronal Modelling
● Aim: train artificial neural networks using extensive experimental data on individual human brain activity and behaviour.
● 42 researchers, organized in teams:
○ Platform
○ Vision
○ Memory
○ Emotions
○ Language
○ Audition
○ Video games
○ MRI
○ Modelling
Naselaris et al., 2021. Figure reused under CC-BY (bioRxiv).
https://docs.cneuromod.ca/en/latest/AUTHORS.html
7. CNeuroMod: extreme scanning
Slide from Mrs Julie A. Boyle. More info: CNeuroMod Release Part 1 video.
Participants (n = 6)
Inclusion criteria:
1) Generally healthy
2) MRI & MEG compatible
3) Normal hearing for their age
4) Solid comprehension of the English language
5) Willing to be intensively scanned for at least 5 years!

Participant ID | Sex | Age at recruitment | Handedness* | Maternal language*
Sub-01 | m | 41 | right | French
Sub-02 | m | 47 | right | French
Sub-03 | f | 39 | right | bilingual
Sub-04 | f | 31 | right | French
Sub-05 | m | 46 | right | English
Sub-06 | f | 37 | right | English

Target: 500 hours of functional imaging per participant
9. CNeuroMod: MRI protocol
Slide from Mrs Julie A. Boyle. More info: CNeuroMod Release Part 1 video.
Scanner: Siemens 3T Prisma fit, 64-channel head/neck coil
Functional sequence (3 h/week): CMRR sequence developed for the HCP project (Xu et al., 2013)
● Accelerated simultaneous multi-slice, gradient echo-planar sequence:
○ slice acceleration factor = 4
○ TR = 1.49 s
○ TE = 37 ms
○ flip angle = 52 degrees
○ voxel size = 2 mm isotropic
○ 60 slices per volume
○ acquisition matrix 96 x 96
Anatomical sequences (acquired 4 times per year): brain & cervical sequences
https://docs.cneuromod.ca/en/latest/MRI.html#image-acquisition
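The sequence parameters above can be sanity-checked with quick arithmetic (a sketch; the 10-minute run length is an arbitrary example, not the actual CNeuroMod run duration):

```python
# Quick arithmetic on the functional sequence parameters above.
TR = 1.49            # seconds per volume
n_slices = 60
accel = 4            # simultaneous multi-slice (slice acceleration) factor
matrix = 96          # in-plane acquisition matrix (96 x 96)

# With a slice acceleration factor of 4, slices are excited in groups of 4,
# so only n_slices / accel shots are needed per volume.
shots_per_volume = n_slices // accel
voxels_per_volume = matrix * matrix * n_slices

# Example: volumes collected in a hypothetical 10-minute run.
run_seconds = 10 * 60
n_volumes = int(run_seconds / TR)

print(shots_per_volume, voxels_per_volume, n_volumes)
```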
10. CNeuroMod 2020 release: movie10 (10h 16min), hcptrt (8h 25min), friends s1 & s2 (18h 57min), 2 anatomical sessions
Each subject completed 14 to 15 repetitions of each of the 7 HCP tasks, as well as 5 resting-state runs. The tasks span multiple cognitive functions (language, social cognition, etc.) and stimulus modalities (e.g., auditory stories, visual prompts). The order of fMRI tasks was randomized; the order of the resting-state runs was counterbalanced (i.e., odd vs even sessions).
Glasser et al., 2016, Nat Neurosci
Barch et al., 2013, NeuroImage
11. CNeuroMod 2020 release
Website: cneuromod.ca
When?
● Resistance from local authorities on the idea of openly sharing data
● Working with external lawyers with expertise in data sharing
● Hoping to finalize documents by late spring 2021... Stay tuned!
How? Registered access:
a) PI with university credentials
b) Short blurb about research
c) Sign data transfer agreement
Data format
13. CNeuroMod 2020: activation maps (hcptrt)
Motor task (Barch et al., 2013, NeuroImage): move right hand, left hand, right foot, left foot, or tongue. Example single-subject z-map of 1 session vs group average.
Figure by Dr Valentina Borghesani. See the CNeuroMod data release talk part I for more info.
14. CNeuroMod 2020: activation maps (hcptrt)
Condition-similarity matrices (cf. Pinho et al., 2018, Scientific Data): similarity across subjects (#1-#6), conditions (Language, Social, Gambling, Right Hand, Left Hand, Right Foot, Left Foot, Tongue, Relational, Emotion, Body, Place, Face, Tool) and sessions; and similarity across subjects & conditions, averaged over sessions.
Figure by Dr Valentina Borghesani. See the CNeuroMod data release talk part I for more info.
15. CNeuroMod 2020: resting-state connectivity
Mean connectivity across our 6 subjects vs mean across 115 subjects* (*from the nilearn fmri-development dataset).
Figure by Mr François Paugam. See the CNeuroMod data release talk part II for more info.
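Connectivity matrices like those compared above are typically Pearson correlations between parcel time series. A minimal numpy sketch on synthetic data (dimensions and the injected "network" are illustrative, not the actual analysis):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic parcel time series: 300 time points x 64 parcels,
# with a shared signal injected into the first 8 parcels.
n_t, n_parcels = 300, 64
ts = rng.standard_normal((n_t, n_parcels))
shared = rng.standard_normal(n_t)
ts[:, :8] += 2.0 * shared[:, None]

# Connectome = parcel-by-parcel Pearson correlation matrix.
conn = np.corrcoef(ts.T)

# The injected network shows up as a block of high correlations.
block = conn[:8, :8][~np.eye(8, dtype=bool)]
print(conn.shape, round(block.mean(), 2))
```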
17. CNeuroMod 2020: inter-subject correlations
Average ISC across all movie10 movies, arbitrarily thresholded at 0.10. Figure adapted from Nastase et al., 2021.
Figure by Mrs Elizabeth Dupré. See the CNeuroMod data release talk part II for more info.
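Inter-subject correlation (ISC) is commonly computed per voxel in a leave-one-out fashion: correlate each subject's time course with the average of all other subjects. A toy numpy sketch (6 synthetic "subjects", not the real data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: 6 subjects x 200 time points x 5 voxels.
# Voxels 0-2 carry a stimulus-driven signal shared across subjects.
n_sub, n_t, n_vox = 6, 200, 5
stim = rng.standard_normal((n_t, 3))
data = rng.standard_normal((n_sub, n_t, n_vox))
data[:, :, :3] += stim[None, :, :]

def loo_isc(data):
    """Leave-one-out ISC: correlate each subject with the mean of the others."""
    n_sub, _, n_vox = data.shape
    isc = np.zeros((n_sub, n_vox))
    for s in range(n_sub):
        others = data[np.arange(n_sub) != s].mean(axis=0)
        for v in range(n_vox):
            isc[s, v] = np.corrcoef(data[s, :, v], others[:, v])[0, 1]
    return isc.mean(axis=0)   # average over subjects, one value per voxel

isc = loo_isc(data)
print(np.round(isc, 2))
```

Stimulus-driven voxels show high ISC; purely idiosyncratic voxels hover around zero, which is why ISC maps highlight shared movie-driven processing.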
19. Auditory brain encoding
Overview of the brain encoding framework using SoundNet on audio stimuli in cneuromod-2020. More info: Mrs Fréteault's talk.
20. Auditory brain encoding (friends)
Fine-tuning SoundNet to encode brain activity in the auditory cortex, at the individual level. Early stopping can lead to large gains in max R² compared to a frozen SoundNet (max R² < 0.2). Clear evidence of overfitting suggests the need for additional training data and/or auxiliary tasks such as audio labelling. More info: Mrs Fréteault's talk.
Figure: training vs validation curves across epochs; max R² in sub-03, auditory ROI with 556 voxels.
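The early-stopping strategy mentioned above can be sketched generically: track the validation score each epoch, keep the best checkpoint, and stop once it has not improved for `patience` epochs. A schematic loop with a fake validation-R² curve (not the SoundNet training code):

```python
def early_stopping(val_scores, patience=3):
    """Return (best_epoch, best_score): stop once the validation score
    has not improved for `patience` consecutive epochs."""
    best_epoch, best_score = 0, float("-inf")
    for epoch, score in enumerate(val_scores):
        if score > best_score:
            best_epoch, best_score = epoch, score
        elif epoch - best_epoch >= patience:
            break  # overfitting: validation score stopped improving
    return best_epoch, best_score

# Fake validation R^2 curve: improves, then degrades (overfitting).
val_r2 = [0.05, 0.12, 0.21, 0.26, 0.28, 0.27, 0.25, 0.22, 0.20]
best_epoch, best_r2 = early_stopping(val_r2)
print(best_epoch, best_r2)   # -> 4 0.28
```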
21. Video Gameplay: MRI & MEG controller
A custom videogame controller was built for use in MRI & MEG: 3D modelling, several design iterations (V1 through V5), optical switches, and a handheld enclosure.
More info: talk by Mr Harel, Mr Cyr and Dr Pinsard.
22. Video Gameplay: OpenAI gym-retro
More info: talk by Mr Harel, Mr Cyr and Dr Pinsard.
● Video game emulation tools for reinforcement learning research
● Allows emulation of 9 different consoles
● Integration of ~1,000 games (and the possibility to integrate more)
● Access to memory states, and tools to map memory addresses to relevant variables
23. Video Gameplay: Shinobi3 RTNM
More info: talk by Mr Harel, Mr Cyr and Dr Pinsard.
- Action scrolling platformer
- Released in 1993 on the SEGA Mega Drive
- Goal: reach the end of the level and kill bosses
- Actions: hit, jump, move up, move down, move left, move right, power up (boo)
24. shinobi3: protocol
● 3 different levels, with shared game mechanics but also level-specific actions to master.
● The record of each level repetition contains an initial state and the key-presses -> full re-emulation of the gameplay with OpenAI gym-retro, recovering:
○ Video frames
○ Sound chunks
○ RAM: variables of interest
● 2 datasets with ongoing acquisition: cneuromod-2021
○ Behavioral dataset: training at home, to analyse/model the learning curve and gameplay styles.
○ fMRI dataset: in each fMRI run, the task loops over the 3 levels; ~10h of recordings per subject.
More info: talk by Mr Harel, Mr Cyr and Dr Pinsard.
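The re-emulation idea rests on deterministic emulation: an initial state plus the logged key-presses are enough to regenerate every frame. A toy pure-Python illustration of the principle (the "emulator" here is a made-up deterministic state machine, not gym-retro):

```python
# Toy deterministic "emulator": the same initial state + key-press log
# always regenerates the same trajectory, which is the property that lets
# frames/sound/RAM be re-derived from compact gameplay records.
def step(state, keys):
    x, y = state
    if "right" in keys:
        x += 1
    if "left" in keys:
        x -= 1
    if "jump" in keys:
        y += 2
    y = max(0, y - 1)  # gravity
    return (x, y)

def replay(initial_state, keypress_log):
    state, trajectory = initial_state, [initial_state]
    for keys in keypress_log:
        state = step(state, keys)
        trajectory.append(state)
    return trajectory

log = [{"right"}, {"right", "jump"}, set(), {"left"}]
traj1 = replay((0, 0), log)
traj2 = replay((0, 0), log)   # identical: replay is deterministic
print(traj1)
```

Storing only the initial state and the key-press log is far cheaper than storing video, yet loses nothing, because the emulator can always be re-run.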
25. Video Gameplay: Behavioral performance
Training outside the scanner vs performance in the scanner: performance in Shinobi 3 slowly increased outside the scanner until it reached a plateau, and performance in the scanner matches that plateau. Figure by Mr. Yann Harel. More info: talk by Mr Harel, Mr Cyr and Dr Pinsard.
26. Video Gameplay: brain activation
The OpenAI gym-retro API can be used to annotate actions (e.g., Jump) throughout the game and generate brain activation maps. Figure by Mr. Yann Harel. More info: talk by Mr Harel, Mr Cyr and Dr Pinsard.
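Action annotations can be turned into activation maps with a standard GLM: convolve the event onsets with a hemodynamic response function (HRF) and regress each voxel's time series on the result. A compact numpy sketch with one synthetic voxel (the HRF is a simple double-gamma approximation and all numbers are made up, not the analysis from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)
TR, n_vols = 1.49, 300

# Binary "jump" annotation: 1 at volumes where the action occurred.
onsets = np.zeros(n_vols)
onsets[rng.choice(n_vols - 20, size=25, replace=False)] = 1

# Double-gamma-style HRF sampled at the TR (a common approximation).
t = np.arange(0, 30, TR)
hrf = (t ** 5) * np.exp(-t) / 120 - 0.1 * (t ** 10) * np.exp(-t) / 3628800
regressor = np.convolve(onsets, hrf)[:n_vols]

# Synthetic voxel that responds to jumps (true beta = 2) plus noise.
voxel = 2.0 * regressor + 0.3 * rng.standard_normal(n_vols)

# GLM with an intercept, solved by ordinary least squares.
X = np.column_stack([regressor, np.ones(n_vols)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
print(round(beta[0], 2))
```

In practice the design matrix contains one such regressor per annotated action plus nuisance terms, and the fitted betas are converted to z-maps.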
27. Video Gameplay: brain activation
Activation maps have low-to-moderate consistency between sessions (sub-01); Hit and Jump showed the highest reliability. Figure by Mr. Yann Harel. More info: talk by Mr Harel, Mr Cyr and Dr Pinsard.
28. Video Gameplay: (human) supervised learning
An artificial neural network with vision + memory layers was trained to predict the actions of the player, treating behavioral imitation as a supervised learning task. More info: talk by Mr. Kemtur.
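The setup can be sketched abstractly: frame features plus a simple memory trace feed a classifier over actions, trained on the human's logged (frame, action) pairs. A toy numpy version on synthetic data, with logistic regression standing in for the vision + memory network (everything here is illustrative, not Mr. Kemtur's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gameplay: 500 frames, 8 visual features each; the "player"
# jumps when a weighted sum of current + recent features is high.
n_frames, n_feat = 500, 8
frames = rng.standard_normal((n_frames, n_feat))

# Memory trace: exponential moving average of past frames (the "memory layer").
memory = np.zeros_like(frames)
for t in range(1, n_frames):
    memory[t] = 0.8 * memory[t - 1] + 0.2 * frames[t - 1]

w_true = rng.standard_normal(2 * n_feat)
X = np.hstack([frames, memory])                      # vision + memory features
actions = (X @ w_true > 0).astype(float)             # 1 = jump, 0 = no jump

# Logistic regression trained by gradient descent (behavioral cloning).
w = np.zeros(2 * n_feat)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - actions) / n_frames

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == actions).mean()
print(round(accuracy, 2))
```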
29. Video Gameplay: (human) supervised learning
sub-01 gameplay vs ANN-01 gameplay
This approach quantitatively matches individual human actions with high accuracy, and qualitatively leads to compelling imitation of individual playing style. More info: talk by Mr. Kemtur.
32. Video Gameplay: neuroevolution adversarial imitation
Diagram: a discriminator receives gameplay videos and guesses, for each, whether it is a subject video or an agent video. Agents whose videos fool the discriminator gain fitness (+ fitness); agents whose videos are recognized as agent-generated lose fitness (- fitness), driving the evolving population of agents toward human-like play.
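The fitness loop above can be caricatured in a few lines: a population of agents is scored by a discriminator and evolved by truncation selection plus mutation. In this toy, the discriminator is held fixed (in the actual adversarial setup it is trained against the agents), and an "agent" is a single number imitating a target behaviour:

```python
import random

random.seed(0)

# Toy neuroevolution adversarial imitation: each "agent" is one parameter,
# the "subject" behaviour is a target value, and the "discriminator" scores
# how human-like an agent's behaviour looks (here: negative distance).
SUBJECT_BEHAVIOUR = 3.0

def discriminator(behaviour):
    # + fitness when the behaviour looks like the subject's, - fitness otherwise.
    return -abs(behaviour - SUBJECT_BEHAVIOUR)

population = [random.uniform(-10, 10) for _ in range(50)]
for generation in range(100):
    scored = sorted(population, key=discriminator, reverse=True)
    elite = scored[:10]                                  # keep the best imitators
    population = elite + [
        parent + random.gauss(0, 0.5)                    # mutated elite offspring
        for parent in random.choices(elite, k=40)
    ]

best = max(population, key=discriminator)
print(round(best, 1))
```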
36. Conclusions
1. The cneuromod-2020 data is of appropriate quality for classic analyses. This was demonstrated with activation maps, functional connectivity maps, and inter-subject correlation.
2. The cneuromod-2020 data is of appropriate quality for several brain encoding and decoding tasks. This was demonstrated on auditory stimuli.
3. The cneuromod-2021 video game data is of appropriate quality for generating activation maps and behavioral imitation.
37. Next steps
Grow CNeuroMod as a community-driven project:
○ Several labs outside of the original CNeuroMod team are now contributing to task design, modelling and analysis. Thank you!!
○ Join us on mattermost.brainhack.org, channel #cneuromod
CNeuroMod 2021