Journal club: Quantitative models of neural language representation
Takuya Koumura
This document summarizes several papers on quantitative models of neural language representation. It discusses encoding models that use language representations, such as word embeddings, to model brain activity measured by fMRI or MEG in response to linguistic stimuli such as words, sentences, and stories. The models are evaluated on how accurately brain activity can be linearly predicted from them (encoding), how accurately they can be linearly decoded from brain activity, and how closely their representational geometry matches that of the brain (representational similarity). Several papers find that distributional word embeddings accurately predict brain responses in language areas, and that contextual representations improve modeling accuracy compared with individual words. The document analyzes the methods, results, and implications of these quantitative models of neural language representation.
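The encoding approach described above is typically implemented as a regularized linear regression from stimulus features to voxel responses. The sketch below is a minimal illustration with synthetic data, not any specific paper's pipeline; the dimensions, noise level, and ridge penalty are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 200 training words, 50-dim embeddings, 30 voxels.
n_train, d_embed, n_voxels = 200, 50, 30
X_train = rng.standard_normal((n_train, d_embed))       # word embeddings
W_true = rng.standard_normal((d_embed, n_voxels))       # synthetic "true" mapping
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, n_voxels))

# Ridge regression, closed form: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d_embed),
                        X_train.T @ Y_train)

# Evaluate on held-out words by per-voxel correlation between
# predicted and observed responses, the usual encoding-model metric.
X_test = rng.standard_normal((100, d_embed))
Y_test = X_test @ W_true
Y_pred = X_test @ W_hat
r = np.array([np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1]
              for v in range(n_voxels)])
```

In real studies the ridge penalty is chosen by cross-validation and the features come from a pretrained language model rather than being sampled at random; the linear readout and held-out correlation metric are the part this sketch is meant to show.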
The document also discusses Vector Quantized Variational AutoEncoder 2 (VQ-VAE-2), a generative model that uses discrete latent representations. VQ-VAE-2 builds upon VQ-VAE by introducing hierarchical discrete latent variables, which allow it to generate high-fidelity images at resolutions up to 1024x1024. Its neural network architecture uses residual and skip connections, sometimes with gating operations, to model discrete latent variables at multiple levels of abstraction for generating diverse, high-quality images.
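The discrete latents in VQ-VAE come from a nearest-neighbour lookup into a learned codebook. The sketch below shows only that quantization step on random numpy arrays; the codebook size, latent dimension, and batch size are illustrative assumptions, and the learned encoder/decoder networks and training losses are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: codebook of 8 entries, each 4-dim; 5 encoder outputs.
codebook = rng.standard_normal((8, 4))
z_e = rng.standard_normal((5, 4))   # continuous encoder outputs

# Squared Euclidean distance from each encoder output to every codebook entry
d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # shape (5, 8)

indices = d2.argmin(axis=1)   # discrete latent codes (integers)
z_q = codebook[indices]       # quantized vectors passed on to the decoder
```

In VQ-VAE-2 this lookup is applied at multiple spatial resolutions, giving the hierarchy of discrete latents mentioned above.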
Slide 23 (2018.05.09, Takuya Koumura, p. 22)
AM representation in the auditory nervous system
[Figure: the ascending auditory pathway from peripheral to central stations: auditory nerves (AN) → cochlear nucleus (CN) → superior olivary complex (SOC) → nucleus of the lateral lemniscus (NLL) → inferior colliculus (IC) → medial geniculate body (MGB) → auditory cortex (AC). (Kandel 2000)]
Slide 24 (2018.05.09, Takuya Koumura, p. 23)
AM representation in the auditory nervous system
[Figure: synchrony and mean firing rate as a function of AM frequency (best frequency and upper cutoff frequency) at each station from peripheral to central: AN, CN, SOC, NLL, IC, MGB, AC. Toward the central stations, the AM frequencies that neurons synchronize to decrease, and rate coding emerges partway along the pathway; a panel shows the distribution density of temporal coding vs. rate coding over AM frequency.]
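Synchrony to an AM cycle, as plotted on slides like this, is commonly quantified with the vector strength of the spike train. The sketch below computes it on synthetic spike times; the AM frequency, spike counts, and time window are illustrative assumptions, not values from the slide.

```python
import numpy as np

def vector_strength(spike_times, am_freq):
    """Synchrony of spikes to one AM cycle: 1 = perfect phase locking, 0 = none.

    Each spike is mapped to a phase of the AM cycle; the vector strength is
    the magnitude of the mean unit vector at those phases.
    """
    phases = 2 * np.pi * am_freq * np.asarray(spike_times)
    return float(np.abs(np.exp(1j * phases).mean()))

f_am = 50.0                              # AM frequency in Hz (illustrative)
# Perfectly phase-locked train: one spike per cycle at a fixed phase
locked = np.arange(100) / f_am + 0.002
# Spikes at uniformly random times carry no phase information
rng = np.random.default_rng(0)
random_spikes = rng.uniform(0.0, 2.0, 100)
```

A peripheral neuron that locks to the AM cycle yields a vector strength near 1, while a central neuron that conveys AM only in its firing rate yields a value near 0, matching the periphery-to-central trend summarized above.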