The following six papers were presented at our seminar:
Progressive Growing of GANs for Improved Quality, Stability, and Variation
Spectral Normalization for Generative Adversarial Networks
cGANs with Projection Discriminator
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
Are GANs Created Equal? A Large-Scale Study
Improved Training of Wasserstein GANs
Slides presented at the cvpaper.challenge Meta Study Group.
cvpaper.challenge is a challenge that reflects the current state of the computer vision field and aims to create its trends. We work on paper summaries, idea generation, discussion, implementation, and paper submission, and share all kinds of knowledge. Goals for 2019: "submit 30+ papers to top conferences" and "conduct 2+ comprehensive surveys of top conferences."
http://xpaperchallenge.org/cv/
For deep learning engineers and server engineers, these slides illustrate how to build a VGG image-recognition API with Flask and Keras in Python, which is simple and easy to get started with. All code shown in the slides can be found in this gist:
https://gist.github.com/asterisk37n/e139272963f1bd659fcd532a35c59978
These slides were used in a lecture given on June 1 in Shibuya.
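As a taste of what the slides cover, here is a minimal Flask skeleton for such an API. This is a sketch, not the gist's code: the `/predict` route name and JSON shape are assumptions, and the Keras VGG16 call is left as a comment so the snippet runs without TensorFlow installed.

```python
from flask import Flask, jsonify, request

# In the real slides the classifier is Keras's VGG16:
#   from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
#   model = VGG16(weights="imagenet")
# Here we stub it so the sketch runs without TensorFlow.
def classify(image_bytes):
    # model.predict(...) and decode_predictions(...) would go here;
    # return (label, probability) pairs.
    return [("Labrador_retriever", 0.87)]

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded image from a multipart form field named "image".
    image_bytes = request.files["image"].read()
    preds = classify(image_bytes)
    return jsonify([{"label": label, "prob": prob} for label, prob in preds])

# app.run(host="0.0.0.0", port=5000)  # uncomment to serve locally
```

A client would POST an image file to `/predict` and get back a JSON list of label/probability pairs.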
Estimating Lead-Lag Effects in High-Frequency Trading Data Using Dynamic Time Warping, Katsuya Ito
This paper investigates Lead-Lag relationships in high-frequency data.
We propose Multinomial Dynamic Time Warping (MDTW), which handles non-synchronous observations, vast amounts of data, and time-varying Lead-Lags.
MDTW estimates the Lead-Lags directly, without enumerating lag candidates. Its computational complexity is linear in the number of observations and does not depend on the number of lag candidates.
Experiments on both artificial data and market data illustrate the effectiveness of our method compared to existing methods.
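For background, the classic dynamic-time-warping distance that MDTW builds on can be sketched in a few lines. Note this is the textbook O(nm) dynamic program, not the paper's MDTW, which achieves cost linear in the number of observations:

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D series.

    O(len(a) * len(b)) dynamic program; the optimal warping path it
    induces is what lead-lag methods read time shifts from."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = [0.0, 1.0, 2.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 2.0, 1.0]  # x shifted one step later
print(dtw(x, y))  # → 1.0, small despite the lag
```

Because warping absorbs the one-step shift, the DTW distance (1.0) is far below the unwarped pointwise L1 distance (4.0).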
Convex Analysis and Duality (based on "Functional Analysis and Optimization" ...), Katsuya Ito
In this presentation, we explain the monograph "Functional Analysis and Optimization" by Kazufumi Ito:
https://kito.wordpress.ncsu.edu/files/2018/04/funa3.pdf
Our goals in this presentation are to:
- understand the basic notions of functional analysis: lower semicontinuity, the subdifferential, and the conjugate functional;
- understand the formulation of the duality problem: the primal (P), perturbed (Py), and dual (P∗) problems;
- understand the primal-dual relationships: sup(P∗) ≤ inf(P), inf(P) = sup(P∗), and sup inf L ≤ inf sup L.
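The last of these relationships is the max-min inequality, which can be derived in one line and underlies weak duality. A minimal sketch, for an arbitrary Lagrangian L:

```latex
% Max--min inequality: for any Lagrangian L(u, p),
% and for every fixed pair (u', p'),
\inf_{u} L(u, p') \;\le\; L(u', p') \;\le\; \sup_{p} L(u', p).
% Taking the supremum over p' on the left and the infimum
% over u' on the right yields
\sup_{p} \inf_{u} L(u, p) \;\le\; \inf_{u} \sup_{p} L(u, p).
```

Weak duality, sup(P∗) ≤ inf(P), is this inequality specialized to the Lagrangian of the perturbed problem (Py).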
Slides giving a quick overview of the three ICLR 2018 Best Papers:
On the convergence of Adam and Beyond
Spherical CNNs
Continuous adaptation via meta-learning in nonstationary and competitive environments
An explanation of Sho Sonoda's PhD thesis,
Integral Representation Theory of Deep Neural Networks,
which formulates deep learning mathematically and interprets it.
In three lines:
- neural networks -> (passing to a continuous formulation) -> the dual ridgelet transform
- the dual ridgelet transform = a transport map
- formulate and interpret neural networks via transport maps.
Table of contents:
- mathematical formulation of deep neural networks
- the ridgelet transform
- transport maps
Bread Company
① Progressive Growing of GANs
● Paper: Progressive Growing of GANs for Improved Quality, Stability, and Variation
https://arxiv.org/abs/1710.10196
Goal: high-quality, stable, and diverse image generation
Method: progressively add higher-resolution layers to both the generator and the discriminator during training
Goodfellow: "probably the highest quality images so far"
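The fade-in step behind progressive growing can be sketched numerically: while a new, higher-resolution layer is being added, its output is blended with the upsampled output of the old, lower-resolution stack, with a weight alpha that ramps from 0 to 1. This is a schematic with stand-in arrays, not the paper's network:

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling: (H, W) -> (2H, 2W)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fade_in(low_res_out, high_res_out, alpha):
    """Blend the upsampled old output with the new layer's output.

    alpha = 0: purely the old low-resolution path;
    alpha = 1: purely the newly added high-resolution layer."""
    return (1.0 - alpha) * upsample2x(low_res_out) + alpha * high_res_out

low = np.ones((4, 4))    # stand-in for the old 4x4 generator output
high = np.zeros((8, 8))  # stand-in for the new 8x8 layer's output
blended = fade_in(low, high, alpha=0.25)
print(blended.mean())  # → 0.75, still mostly the old 4x4 path
```

Ramping alpha smoothly is what lets the network grow without destabilizing the already-trained lower-resolution layers.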
② Spectral Normalization for GANs
● Paper: Spectral Normalization for Generative Adversarial Networks
https://arxiv.org/abs/1802.05957
Goal: stabilize training of the discriminator
Method: normalize each weight matrix by its spectral norm
Goodfellow: "got GANs working on lots of classes, which has been hard"
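The core operation is easy to sketch with power iteration. This standalone NumPy illustration runs many iterations for accuracy, whereas SN-GAN amortizes a single step per training update:

```python
import numpy as np

def spectral_normalize(W, n_iter=200):
    """Divide W by its largest singular value (its spectral norm),
    estimated by power iteration on W W^T."""
    u = np.random.RandomState(0).randn(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma

W = np.random.RandomState(1).randn(64, 32)
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # close to 1.0
```

After normalization the layer is (approximately) 1-Lipschitz, which is what keeps the discriminator's gradients well behaved.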
③ cGANs with Projection Discriminator
● Paper: cGANs with Projection Discriminator
https://arxiv.org/abs/1802.05637
Goal: stable and diverse image generation; also enable continuous movement between categories and higher resolution
Method: modify the cGAN discriminator so that it takes an inner product with the label embedding y at the end
Goodfellow: "from the same lab as #2, both techniques work well together, overall give very good results with 1000 classes"
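The projection head itself is a one-liner. The sketch below uses assumed, illustrative shapes (not the paper's architecture): a learned class-embedding matrix V and the unconditional term reduced to a linear layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Projection-discriminator head:
#   f(x, y) = embed(y) . phi(x) + psi(phi(x))
# where phi(x) are image features, embed(y) a class embedding, and
# psi the unconditional part (a linear layer here, for illustration).
num_classes, feat_dim = 10, 128
V = rng.normal(size=(num_classes, feat_dim))  # class embedding matrix
w = rng.normal(size=feat_dim)                 # weights of the linear psi

def discriminator_logit(phi_x, y):
    """phi_x: feature vector of the image; y: integer class label."""
    return V[y] @ phi_x + w @ phi_x

phi_x = rng.normal(size=feat_dim)  # stand-in for extracted features
print(discriminator_logit(phi_x, y=3))
```

The inner product with V[y] is what injects the label, replacing the earlier practice of concatenating y to the input or hidden features.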
⑤ Are GANs Created Equal?
● Paper: Are GANs Created Equal? A Large-Scale Study
https://arxiv.org/abs/1711.10337
Goal: test the claim that the X-GAN variants do not improve much on the original GAN; question the practice of reporting only the minimum FID; various other empirical checks on GANs
Method: compare GAN variants (MM-GAN, LS-GAN, WGAN, WGAN-GP, BEGAN)
Metrics: FID, Inception Score, F1
Goodfellow: "A big empirical study showing the importance of good rigorous empirical work and how a lot of the GAN variants don't seem to actually offer improvements in practice"
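Since the study leans on FID, here is its formula in the simplified case of diagonal covariances. The real metric uses full covariances of Inception features and a matrix square root; the closed form below is only for illustration:

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with DIAGONAL covariances
    (a simplification of the real FID):
        FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})
    For diagonal S1, S2 the matrix square root reduces to an
    elementwise sqrt of the variance products."""
    return (np.sum((mu1 - mu2) ** 2)
            + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

mu1, var1 = np.zeros(3), np.ones(3)          # "real" feature statistics
mu2, var2 = np.full(3, 2.0), np.full(3, 4.0) # "generated" statistics
print(fid_diagonal(mu1, var1, mu2, var2))  # → 15.0
```

Lower is better; identical statistics give 0, which is why the paper's point about reporting the distribution of FID over runs, rather than just the minimum, matters.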
⑥ Wasserstein GAN-GP
● Paper: Improved Training of Wasserstein GANs
https://arxiv.org/abs/1704.00028
Goal: stabilize Wasserstein GAN training
Method: add a penalty to WGAN that drives the critic's gradient norm toward 1
Goodfellow: "probably the most popular GAN variant today and seems to be pretty good in my opinion. Caveat: the baseline GAN variants should not perform nearly as badly as this paper claims, especially the text one"
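The gradient penalty can be illustrated without autodiff by using a toy linear critic, whose input gradient is known in closed form. A schematic of the penalty term, not a training loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# WGAN-GP adds, for points x_hat interpolated between real and fake data,
#   lambda * (||grad_{x_hat} f(x_hat)||_2 - 1)^2
# to the critic loss. With a toy linear critic f(x) = w . x the input
# gradient is just w, so we can compute the penalty directly.
w = rng.normal(size=8)        # toy critic weights
x_real = rng.normal(size=8)
x_fake = rng.normal(size=8)

eps = rng.uniform()                            # one random mixing weight
x_hat = eps * x_real + (1.0 - eps) * x_fake    # interpolated sample

grad = w  # d/dx (w . x) = w at every x_hat
penalty = (np.linalg.norm(grad) - 1.0) ** 2
print(penalty)
```

In a real implementation the gradient is obtained via automatic differentiation and the penalty (scaled by lambda = 10 in the paper) is averaged over a batch of interpolates.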