* Satoshi Hara and Kohei Hayashi. Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach. AISTATS'18 (to appear).
arXiv version: https://arxiv.org/abs/1606.09066
* GitHub
https://github.com/sato9hara/defragTrees
The first part presents several methods for sampling points from arbitrary distributions (one of them is sketched below). The second part shows an application to population genetics: inferring population size and divergence time from observed sequence data.
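As a hedged illustration of one standard method from the first part, here is a minimal rejection-sampling sketch in Python. The target density (Beta(2, 2)), the Uniform(0, 1) proposal, and the envelope constant m are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(f, n, m=2.0):
    """Draw n points from a density f on [0, 1] using a Uniform(0, 1)
    proposal, assuming the envelope condition f(x) <= m for all x."""
    out = []
    while len(out) < n:
        x = rng.random()                  # propose x ~ Uniform(0, 1)
        if rng.random() * m <= f(x):      # accept with probability f(x) / (m * g(x))
            out.append(x)
    return np.array(out)

# Illustrative target: Beta(2, 2) density f(x) = 6 x (1 - x), max value 1.5 <= m.
samples = rejection_sample(lambda x: 6.0 * x * (1.0 - x), 10_000)
print(samples.mean())                     # should be close to 0.5
```

Accepted points are exact draws from f; the acceptance rate is 1/m, so a tighter envelope makes the sampler more efficient.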
Bayesian Generalization Error and Real Log Canonical Threshold in Non-negativ... (Naoki Hayashi)
I gave this talk at the conference Algebraic Statistics 2020.
As background, I briefly explained singular learning theory, which can be interpreted as an intersection of algebraic statistics and statistical learning theory.
The main part of the presentation introduces our recent studies on parameter-region restriction in singular learning theory. I presented results on the learning coefficient (real log canonical threshold, RLCT) of NMF and LDA, two typical models whose parameter regions are restricted. (The standard asymptotics in which the RLCT appears are recalled below.)
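For background, these are the standard asymptotic results of singular learning theory (Watanabe), not results specific to this talk. Writing $F_n$ for the Bayesian free energy, $S_n$ for the empirical entropy, $G_n$ for the Bayesian generalization error, $n$ for the sample size, $\lambda$ for the RLCT, and $m$ for its multiplicity:

```latex
% Free energy and generalization error asymptotics controlled by the RLCT
F_n = n S_n + \lambda \log n - (m - 1) \log \log n + O_p(1), \qquad
\mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right).
```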
Presented at the March 2018 Neurocomputing (NC) technical meeting. We wanted to numerically test a theoretical inequality for the Bayesian generalization error of stochastic matrix factorization, but Bayesian inference itself is hard for this model: the parameters lie on a simplex, which makes sampling from the posterior distribution difficult. We therefore performed Bayesian inference using Hamiltonian Monte Carlo (HMC), an efficient MCMC method, compared the results with the theoretical values, and verified the effectiveness of HMC for stochastic matrix factorization. (Slides in Japanese; a minimal sketch of simplex-constrained HMC follows.)
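One common way to handle the simplex constraint is to run HMC in unconstrained coordinates. Below is a minimal sketch, not the study's code: HMC with a softmax reparameterization, using a hypothetical Dirichlet posterior in place of the stochastic-matrix-factorization posterior; alpha, the step size, and the leapfrog count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 4.0])   # hypothetical Dirichlet target (illustrative)
A = alpha.sum()

def to_simplex(z):
    # Map z in R^{K-1} to the K-simplex via softmax with the last logit fixed at 0.
    w = np.append(z, 0.0)
    e = np.exp(w - w.max())          # numerically stabilized softmax
    return e / e.sum()

def neg_log_post(z):
    # -log density of Dirichlet(alpha) in unconstrained coordinates, with the
    # softmax log-Jacobian (sum_i log p_i) folded in: -sum_i alpha_i log p_i,
    # up to an additive constant.
    return -np.dot(alpha, np.log(to_simplex(z)))

def grad_neg_log_post(z):
    # Closed-form gradient: d/dz_j = -(alpha_j - A * p_j) for j = 1..K-1.
    p = to_simplex(z)
    return -(alpha[:-1] - A * p[:-1])

def hmc_step(z, eps=0.05, n_leapfrog=20):
    r = rng.standard_normal(z.size)                    # resample momentum
    h0 = neg_log_post(z) + 0.5 * (r @ r)               # initial Hamiltonian
    zn = z.copy()
    rn = r - 0.5 * eps * grad_neg_log_post(zn)         # initial half step
    for _ in range(n_leapfrog):                        # leapfrog integration
        zn = zn + eps * rn
        rn = rn - eps * grad_neg_log_post(zn)
    rn = rn + 0.5 * eps * grad_neg_log_post(zn)        # trim last step to a half step
    h1 = neg_log_post(zn) + 0.5 * (rn @ rn)
    return zn if np.log(rng.random()) < h0 - h1 else z # Metropolis correction

z = np.zeros(alpha.size - 1)
draws = []
for i in range(5000):
    z = hmc_step(z)
    if i >= 1000:                                      # discard burn-in
        draws.append(to_simplex(z))
print(np.mean(draws, axis=0), alpha / A)               # sampled vs exact posterior mean
```

Because the log-Jacobian of the transform is included, the chain targets exactly Dirichlet(alpha) on the simplex; the printed posterior mean should match alpha / A.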
This research was published at IEEE SSCI 2017 in Hawaii.
The goal was to construct a learning theory for non-negative matrix factorization (NMF). We derived a tighter upper bound on the generalization error than in our previous work; we also carried out numerical experiments, which led us to a conjecture on the exact value of the generalization error.
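For completeness, the standard link between the two quantities: since $\mathbb{E}[G_n] = \lambda/n + o(1/n)$, any proven upper bound $\lambda \le \bar{\lambda}$ on the RLCT transfers directly to the generalization error,

```latex
\mathbb{E}[G_n] \le \frac{\bar{\lambda}}{n} + o\!\left(\frac{1}{n}\right),
```

so tightening the RLCT bound tightens the generalization-error bound at the same $1/n$ rate.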
References
[1] Aoyagi, M. & Watanabe, S. Stochastic complexities of reduced rank regression in Bayesian estimation. Neural Netw. 2005;18(7):924–933.
[2] Drton, M. & Plummer, M. A Bayesian information criterion for singular models (with discussion). J R Stat Soc B. 2017;79:323–380.
[3] Hayashi, N. & Watanabe, S. Tighter upper bound of real log canonical threshold of non-negative matrix factorization and its application to Bayesian inference. In: IEEE Symposium Series on Computational Intelligence (IEEE SSCI). 2017. pp. 718–725.
[4] Hayashi, N. & Watanabe, S. Asymptotic Bayesian generalization error in latent Dirichlet allocation. SN Computer Science. 2020;1(69):1–22.
[5] Hayashi, N. Variational approximation error in non-negative matrix factorization. Neural Netw. 2020;126(June):65–75.
[6] Nakada, R. & Imaizumi, M. Adaptive approximation and generalization of deep neural network with intrinsic dimensionality. J Mach Learn Res. 2020;21(174):1–38.
[7] Watanabe, S. Algebraic geometrical methods for hierarchical learning machines. Neural Netw. 2001;13(4):1049–1060.
[8] Watanabe, S. Mathematical Theory of Bayesian Statistics. Florida: CRC Press. 2018.
[9] Yamazaki, K. & Watanabe, S. Singularities in mixture models and upper bounds of stochastic complexity. Neural Netw. 2003;16(7):1029–1038.
[10] Zwiernik, P. An asymptotic behaviour of the marginal likelihood for general Markov models. J Mach Learn Res. 2011;12(Nov):3283–3310.