An introduction to "Are Transformers Effective for Time Series Forecasting?" (AAAI 2023) and the HuggingFace blog post "Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)".
Estimating Lead-Lag Effects in High-Frequency Trading Data Using Dynamic Time Warping (Katsuya Ito)
This paper investigates the Lead-Lag relationships in high-frequency data.
We propose Multinomial Dynamic Time Warping (MDTW) that deals with non-synchronous observation, vast data, and time-varying Lead-Lag.
MDTW directly estimates the lead-lags without lag candidates. Its computational complexity is linear in the number of observations and does not depend on the number of lag candidates.
Experiments on both artificial data and market data illustrate the effectiveness of our method compared to existing methods.
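The paper's MDTW itself is not reproduced here, but the underlying idea can be illustrated with classic dynamic time warping: align two series, then read a lead-lag off the index offsets along the warping path. A minimal sketch (function names `dtw_path` and `estimate_lead_lag` are my own, not from the paper):

```python
import numpy as np

def dtw_path(x, y):
    """Classic dynamic time warping: O(len(x) * len(y)) time.

    Returns the accumulated-cost matrix and the optimal warping path
    as a list of (i, j) index pairs.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack from the end; ties prefer the diagonal step.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[1:, 1:], path[::-1]

def estimate_lead_lag(x, y):
    """Mean index offset (j - i) along the warping path.

    Positive when y lags x (y's features occur at later indices).
    """
    _, path = dtw_path(x, y)
    return float(np.mean([j - i for i, j in path]))

# Toy example: y is x delayed by 3 samples, so the estimate is positive.
t = np.linspace(0, 4 * np.pi, 100)
step = t[1] - t[0]
x = np.sin(t)
y = np.sin(t - 3 * step)
```

Note this classic formulation costs O(nm) and assumes synchronous sampling; the paper's contribution is precisely to handle non-synchronous, large-scale, time-varying data in linear time.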
Convex Analysis and Duality (based on "Functional Analysis and Optimization" ...) (Katsuya Ito)
In this presentation, we explain the monograph "Functional Analysis and Optimization" by Kazufumi Ito:
https://kito.wordpress.ncsu.edu/files/2018/04/funa3.pdf
Our goal in this presentation is to
- understand the basic notions of functional analysis:
  lower semicontinuity, subdifferentials, conjugate functionals
- understand the formulation of the duality problem:
  the primal (P), perturbed (Py), and dual (P*) problems
- understand the primal-dual relationships:
  sup(P*) ≤ inf(P), inf(P) = sup(P*), sup inf L ≤ inf sup L
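As a sketch, the standard primal-dual relationships for a minimizing primal (P) and a maximizing dual (P*) built from a Lagrangian L read as follows (generic convex-duality statements, not tied to the monograph's exact notation):

```latex
% Weak duality, strong duality, and the minimax inequality,
% for a minimizing primal (P), a maximizing dual (P*),
% and a Lagrangian L(x, y):
\begin{align*}
  \sup(P^{*}) &\le \inf(P) && \text{(weak duality)} \\
  \sup(P^{*}) &= \inf(P)   && \text{(strong duality, under a suitable qualification)} \\
  \sup_{y}\inf_{x} L(x, y) &\le \inf_{x}\sup_{y} L(x, y) && \text{(minimax inequality)}
\end{align*}
```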
A brief overview of three ICLR 2018 Best Papers:
On the convergence of Adam and Beyond
Spherical CNNs
Continuous adaptation via meta-learning in nonstationary and competitive environments
An explanation of Sho Sonoda's doctoral thesis:
Integral Representation Theory of Deep Neural Networks
It gives a mathematical formulation and interpretation of deep learning.
In three lines:
- a neural network, in its continuum limit, becomes a dual ridgelet transform
- the dual ridgelet transform is a transport map
- neural networks are thus formulated and interpreted as transport maps
Contents:
- Mathematical formulation of deep neural networks
- The ridgelet transform
- Transport maps
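For context, one common form of the ridgelet transform and its dual (reconstruction) in integral-representation theory; the notation here is a generic sketch, not necessarily the thesis's exact conventions:

```latex
% Ridgelet transform of f : R^m -> R with respect to psi : R -> R,
% over parameters (a, b) in R^m x R:
\[
  \mathcal{R}_{\psi} f(a, b)
    = \int_{\mathbb{R}^{m}} f(x)\,\overline{\psi(a \cdot x - b)}\,\mathrm{d}x
\]
% Dual ridgelet transform: a superposition of ridge functions,
% i.e. a continuum of hidden units eta(a . x - b):
\[
  \mathcal{R}^{*}_{\eta} T(x)
    = \int_{\mathbb{R}^{m} \times \mathbb{R}} T(a, b)\,\eta(a \cdot x - b)\,\mathrm{d}a\,\mathrm{d}b
\]
% Under an admissibility condition on (psi, eta),
% R*_eta R_psi f = f, so a continuous two-layer network
% can represent f exactly.
```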
The following six papers were presented at a seminar:
Progressive Growing of GANs for Improved Quality, Stability, and Variation
Spectral Normalization for Generative Adversarial Networks
cGANs with Projection Discriminator
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
Are GANs Created Equal? A Large-Scale Study
Improved Training of Wasserstein GANs