This is a stale version; please see https://www.slideshare.net/akisatokimura/paper-reading-dropout-as-a-bayesian-approximation-representing-model-uncertainty-in-deep-learning-166237519 for the updated version.
Introducing the paper "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning" presented in ICML2016 (in Japanese).
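As a rough illustration of the paper's core idea (keep dropout active at test time and average many stochastic forward passes to obtain a predictive mean and uncertainty), here is a minimal NumPy sketch; the tiny network, its random weights, and the dropout rate are all hypothetical stand-ins, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network with fixed random weights (hypothetical;
# the idea applies to any network trained with dropout).
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))

def forward(x, p=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p     # Bernoulli dropout mask
    h = h * mask / (1.0 - p)           # inverted-dropout scaling
    return h @ W2

x = np.array([[0.3]])
T = 200                                # number of Monte Carlo samples
samples = np.stack([forward(x) for _ in range(T)])
mean = samples.mean(axis=0)            # predictive mean
var = samples.var(axis=0)              # predictive uncertainty
```

The spread of the `T` stochastic outputs (here `var`) is what the paper interprets as (approximate) Bayesian model uncertainty.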
As a survey of technical trends, I read through the ICML Workshop on Uncertainty & Robustness in Deep Learning, focusing on the papers with the most interesting-looking titles, and summarized each paper in four slides.
Latest version: https://speakerdeck.com/masatoto/icml-2021-workshop-shen-ceng-xue-xi-falsebu-que-shi-xing-nituite-e0debbd2-62a7-4922-a809-cb07c5da2d08 (the text has been revised.)
The document discusses control as inference in Markov decision processes (MDPs) and partially observable MDPs (POMDPs). It introduces optimality variables that represent whether a state-action pair is optimal or not. It formulates the optimal action-value function Q* and optimal value function V* in terms of these optimality variables and the reward and transition distributions. Q* is defined as the log probability of a state-action pair being optimal, and V* is defined as the log probability of a state being optimal. Bellman equations are derived relating Q* and V* to the reward and next state value.
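As a sketch of the relations described above (symbols follow the common control-as-inference notation, which may differ from the slides): with binary optimality variables $\mathcal{O}_t$ whose likelihood is $p(\mathcal{O}_t = 1 \mid s_t, a_t) \propto \exp r(s_t, a_t)$, the definitions and the resulting soft Bellman equations take the form

```latex
Q(s_t, a_t) = \log p(\mathcal{O}_{t:T} = 1 \mid s_t, a_t), \qquad
V(s_t) = \log p(\mathcal{O}_{t:T} = 1 \mid s_t),
```

```latex
Q(s_t, a_t) = r(s_t, a_t)
  + \log \mathbb{E}_{s_{t+1} \sim p(\cdot \mid s_t, a_t)}
    \bigl[\exp V(s_{t+1})\bigr], \qquad
V(s_t) = \log \int \exp Q(s_t, a_t)\, da_t.
```

The log-sum-exp in place of the usual max is what makes this a "soft" Bellman backup.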
This document discusses generative adversarial networks (GANs) and their relationship to reinforcement learning. It begins with an introduction to GANs, explaining how they can generate images without explicitly defining a probability distribution by using an adversarial training process. The second half discusses how GANs are related to actor-critic models and inverse reinforcement learning in reinforcement learning. It explains how GANs can be viewed as training a generator to fool a discriminator, similar to how policies are trained in reinforcement learning.
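The adversarial game summarized above can be written as a minimax objective; this is the standard GAN formulation (Goodfellow et al., 2014), stated here as background (the slides may present it differently):

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

In the reinforcement-learning analogy, the generator $G$ plays the role of a policy and the discriminator $D$ that of a learned reward signal or critic.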
Paper reading - Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (Akisato Kimura)
Introducing the paper "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning" presented in ICML2016 (in Japanese).
Updated version of https://www.slideshare.net/akisatokimura/paper-reading-dropout-as-a-bayesian-approximation-representing-model-uncertainty-in-deep-learning
NIPS2015 reading - Learning visual biases from human imagination (Akisato Kimura)
1) The document discusses a paper on improving visual recognition systems by leveraging human visual biases and generating images from random features.
2) It describes estimating visual biases from human psychophysics experiments, then using those biases to reconstruct images from random features. The reconstructed images can then be used to train machine learning models.
3) The document outlines experiments showing that incorporating estimated human visual biases into machine learning models, such as SVMs, can help improve visual recognition performance compared to models trained without biases.
CVPR2015 reading "Global refinement of random forest" (Akisato Kimura)
- A method is presented for refining a pre-trained random forest by optimizing the leaf weights while keeping the tree structures fixed.
- This reformulates the random forest as a linear classification/regression problem where samples are represented by sparse indicator vectors.
- The optimization can be performed efficiently and the refined forest has comparable or better accuracy than the original forest, but with significantly fewer trees/nodes.
- Experiments on classification and regression datasets demonstrate the proposed method outperforms other random forest techniques while accelerating training and testing.
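The reformulation described above can be sketched with scikit-learn: map each sample to its leaf index in every tree, one-hot encode those indices into the sparse indicator representation, and re-fit the leaf weights globally as a linear model. Logistic regression stands in for the paper's actual solver here, and the dataset is synthetic; both are assumptions for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Synthetic data and a pre-trained forest (tree structures stay fixed).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

# Each sample -> its leaf index in every tree; one-hot encode to get the
# sparse indicator representation of the linear reformulation.
leaves = forest.apply(X)            # shape (n_samples, n_trees)
enc = OneHotEncoder()
Z = enc.fit_transform(leaves)       # sparse indicator matrix

# Globally re-learn the leaf weights as a linear classifier.
refined = LogisticRegression(max_iter=1000).fit(Z, y)
acc = refined.score(Z, y)
```

Because `Z` has exactly one nonzero entry per tree per sample, fitting the linear model amounts to jointly re-weighting all leaves, which is what allows the refined forest to match accuracy with fewer trees.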
Computational models of human visual attention driven by auditory cues (Akisato Kimura)
This document summarizes a presentation on computational models of human visual attention driven by auditory cues. It discusses how auditory information can modulate visual attention by selecting visual features that are synchronized with detected auditory events. The proposed model uses Bayesian surprise to detect transient events in visual and auditory streams separately, then correlates the two to select synchronized visual features. An evaluation of the model on video clips found it outperformed baseline models at predicting eye movements.
Brief description of the paper "Large-scale visual sentiment ontology and detectors using adjective noun pairs" presented in ACM Multimedia 2013 as a full paper.
Briefly reviews the International Conference on Weblogs and Social Media (ICWSM12) from my perspective.
The latter part is written in Japanese; apologies for that.