Erratum: in slides #25~26, "Linear alignment" should read "Feedback alignment".
Presentation for the ICML2019 reading pitch @ Kyoto, 4 August 2019. Shuntaro Ohno introduced "Training Neural Networks with Local Error Signals" in Japanese.
Paper reading - Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (Akisato Kimura)
Introducing the paper "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning" presented in ICML2016 (in Japanese).
Updated version of https://www.slideshare.net/akisatokimura/paper-reading-dropout-as-a-bayesian-approximation-representing-model-uncertainty-in-deep-learning
Paper reading - Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (Akisato Kimura)
A stale version, please check https://www.slideshare.net/akisatokimura/paper-reading-dropout-as-a-bayesian-approximation-representing-model-uncertainty-in-deep-learning-166237519 for a new version.
NIPS2015 reading - Learning visual biases from human imagination (Akisato Kimura)
1) The paper improves visual recognition systems by leveraging human visual biases and by generating images from random features.
2) Visual biases are estimated from human psychophysics experiments and then used to reconstruct images from random features; the reconstructed images serve as training data for machine learning models.
3) Experiments show that incorporating the estimated human visual biases into machine learning models such as SVMs improves visual recognition performance over models trained without them.
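The bias-estimation step above resembles the classic "classification images" technique, which can be sketched as follows. This is an illustrative simulation, not the paper's exact procedure: the trial count, feature dimension, and the simulated human responses are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: humans label random-noise feature vectors as
# "looks like the category" (+1) or not (-1).
n_trials, n_features = 500, 64
noise = rng.standard_normal((n_trials, n_features))

# Simulated human responses driven by a hidden bias direction.
hidden_bias = rng.standard_normal(n_features)
labels = np.sign(noise @ hidden_bias)

# Classification-images style estimate of the visual bias:
# mean of positively-labelled noise minus mean of negatively-labelled noise.
template = noise[labels > 0].mean(axis=0) - noise[labels < 0].mean(axis=0)

# The estimated template should align with the hidden bias direction.
corr = float(np.dot(template, hidden_bias)
             / (np.linalg.norm(template) * np.linalg.norm(hidden_bias)))
print(round(corr, 2))
```

With enough trials the recovered template correlates strongly with the hidden bias, which is what makes such estimates usable as priors or extra training signal for a classifier.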
CVPR2015 reading "Global refinement of random forest" (Akisato Kimura)
- A method is presented for refining a pre-trained random forest by optimizing the leaf weights while keeping the tree structures fixed.
- This reformulates the random forest as a linear classification/regression problem where samples are represented by sparse indicator vectors.
- The optimization can be performed efficiently and the refined forest has comparable or better accuracy than the original forest, but with significantly fewer trees/nodes.
- Experiments on classification and regression datasets demonstrate the proposed method outperforms other random forest techniques while accelerating training and testing.
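The reformulation above can be sketched in a few lines with scikit-learn. This is a minimal illustration of the idea, not the paper's exact algorithm: the dataset, forest size, and the choice of an L1-regularised logistic regression as the leaf-weight learner are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Synthetic data split into train / test halves.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, y_tr, X_te, y_te = X[:400], y[:400], X[400:], y[400:]

# Train a forest, then keep its tree structures fixed.
forest = RandomForestClassifier(n_estimators=20, random_state=0).fit(X_tr, y_tr)

# Each sample falls into one leaf per tree, giving a sparse
# leaf-membership indicator vector per sample.
enc = OneHotEncoder(handle_unknown="ignore")
Z_tr = enc.fit_transform(forest.apply(X_tr))
Z_te = enc.transform(forest.apply(X_te))

# Globally re-learn the leaf weights as one sparse linear classifier;
# leaves whose weights shrink to zero are candidates for pruning.
refined = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
refined.fit(Z_tr, y_tr)

print(forest.score(X_te, y_te), refined.score(Z_te, y_te))
```

The key design point is that `forest.apply` turns each sample into leaf indices, so the refinement reduces to an ordinary sparse linear problem over leaf indicators.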
Computational models of human visual attention driven by auditory cues (Akisato Kimura)
This document summarizes a presentation on computational models of human visual attention driven by auditory cues. It discusses how auditory information can modulate visual attention by selecting visual features that are synchronized with detected auditory events. The proposed model uses Bayesian surprise to detect transient events in visual and auditory streams separately, then correlates the two to select synchronized visual features. An evaluation of the model on video clips found it outperformed baseline models at predicting eye movements.
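The surprise-and-correlate idea can be sketched on toy 1-D streams. This is an illustrative simplification, not the model from the presentation: the Gaussian-mean surprise formula, the decay constant, and the synthetic event times are all assumptions.

```python
import numpy as np

def bayesian_surprise(stream, var=1.0, decay=0.9):
    """Surprise as the KL divergence between prior and posterior of a
    Gaussian mean model updated recursively (equal-variance case)."""
    mu, surprises = 0.0, []
    for x in stream:
        new_mu = decay * mu + (1 - decay) * x
        # KL between two Gaussians with equal variance var.
        surprises.append((new_mu - mu) ** 2 / (2 * var))
        mu = new_mu
    return np.array(surprises)

rng = np.random.default_rng(0)
t = 200
audio = rng.standard_normal(t) * 0.1
visual_sync = rng.standard_normal(t) * 0.1    # events synchronised with audio
visual_async = rng.standard_normal(t) * 0.1   # independent events
for e in (50, 120, 180):
    audio[e] += 5.0
    visual_sync[e] += 5.0
visual_async[[30, 90, 150]] += 5.0

# Detect transient events in each stream separately, then correlate the
# audio surprise with each visual stream's surprise.
s_a = bayesian_surprise(audio)
c_sync = np.corrcoef(s_a, bayesian_surprise(visual_sync))[0, 1]
c_async = np.corrcoef(s_a, bayesian_surprise(visual_async))[0, 1]
print(round(float(c_sync), 2), round(float(c_async), 2))
```

The visual stream whose surprise spikes coincide with the audio surprise scores the higher correlation, which is how synchronised visual features would be selected.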
Brief description of the paper "Large-scale visual sentiment ontology and detectors using adjective noun pairs" presented in ACM Multimedia 2013 as a full paper.
Briefly reviews the International Conference on Weblogs and Social Media (ICWSM12) from my perspective.
The latter part is written in Japanese; sorry for that.