Journal club: Quantitative models of neural language representation
Takuya Koumura
This document summarizes several papers on quantitative models of neural language representation. It discusses encoding models that use language representations, such as word embeddings, to model brain activity measured by fMRI or MEG in response to linguistic stimuli such as words, sentences, and stories. The models are evaluated on how well they linearly predict brain activity (encoding), support linear decoding from it, and match its representational similarity structure. Several papers find that distributional word embeddings can accurately predict brain responses in language areas, and that incorporating context improves modeling accuracy compared to representing words in isolation. The document analyzes the methods, results, and implications of these quantitative models of neural language representation.
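The encoding approach described above is typically implemented as a regularized linear regression from stimulus features (e.g. word embeddings) to measured responses, evaluated by the correlation between predicted and observed activity. The following is a minimal sketch of that paradigm using simulated data; all dimensions, the noise level, and the ridge penalty are illustrative assumptions, not values from any of the summarized papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 stimulus words, 50-d embeddings, 500 voxels.
n_words, emb_dim, n_voxels = 200, 50, 500

# Simulated word embeddings and simulated "brain" responses generated
# by a ground-truth linear mapping plus measurement noise.
X = rng.standard_normal((n_words, emb_dim))            # word embeddings
W_true = rng.standard_normal((emb_dim, n_voxels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_words, n_voxels))

# Ridge regression encoding model: W = (X^T X + alpha I)^{-1} X^T Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(emb_dim), X.T @ Y)

# Evaluate by per-voxel correlation between predicted and observed
# responses (in practice this is done on held-out stimuli).
Y_pred = X @ W
r = np.array([np.corrcoef(Y[:, v], Y_pred[:, v])[0, 1]
              for v in range(n_voxels)])
print(f"mean prediction correlation: {r.mean():.2f}")
```

Decoding reverses the direction of the same linear mapping (predicting features from brain activity), and representational similarity analysis instead compares stimulus-by-stimulus distance matrices computed separately from the embeddings and from the responses.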
The document also discusses the Vector Quantized Variational AutoEncoder 2 (VQ-VAE2), a generative model that uses discrete latent representations. VQ-VAE2 builds upon VQ-VAE by introducing hierarchical discrete latent variables, enabling generation of high-fidelity images at resolutions up to 1024x1024. Its neural network architecture uses residual and skip connections, sometimes with gating operations, to model discrete latent variables at multiple levels of abstraction and generate diverse, high-quality images.
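The core operation that makes the latent representation discrete is vector quantization: each continuous encoder output is replaced by its nearest vector in a learned codebook, and the resulting codebook indices serve as the discrete latent code. The sketch below shows only this quantization step on random data; the codebook size, dimensionality, and batch size are illustrative assumptions, and the training losses (codebook and commitment terms) are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a codebook of 8 vectors, each 4-d,
# and a batch of 5 continuous encoder outputs.
codebook = rng.standard_normal((8, 4))
z_e = rng.standard_normal((5, 4))    # continuous encoder outputs

# Squared Euclidean distance from each encoder output to each codebook
# vector, shape (5, 8).
dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)

indices = dists.argmin(axis=1)       # discrete latent codes
z_q = codebook[indices]              # quantized representation fed to the decoder

print(indices.shape, z_q.shape)
```

In the hierarchical VQ-VAE2 setup this quantization is applied at multiple spatial resolutions, so coarse levels capture global structure and finer levels capture local detail.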