The document provides a self-introduction by Takigawa Ichigaku, who specializes in machine learning and data-driven natural science research, with a particular focus on problems involving discrete structures. It outlines his work experience and current affiliations with RIKEN and Hokkaido University. It then previews the topics to be covered in the talk, including machine learning applications in molecular representation and chemical reaction design, as well as challenges in interpreting machine learning models.
Tensor Decomposition and its Applications (Keisuke OTAKI)
This document discusses tensor decompositions and their applications in data mining. It introduces tensors as multi-dimensional arrays and covers 2nd-order tensors (matrices) and 3rd-order tensors. It describes how decompositions such as the Tucker model and the CANDECOMP/PARAFAC (CP) model break a tensor into interpretable core components. It also discusses singular value decomposition (SVD) as a way to factor a matrix and reduce its dimensionality while approximating the original data.
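The dimension-reduction idea behind SVD can be sketched in a few lines of NumPy: keep only the largest singular values to get the best low-rank approximation of a matrix. The matrix below is an illustrative stand-in, not data from the slides.

```python
import numpy as np

# A small real-valued matrix; any matrix works here.
A = np.array([[4.0, 0.0, 1.0],
              [2.0, 3.0, 0.0],
              [0.0, 1.0, 5.0]])

# Thin SVD: A = U @ diag(s) @ Vt, with s sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Truncate to the top-k singular values to obtain a rank-k approximation.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, A_k is the best rank-k approximation
# of A in the Frobenius norm.
err = np.linalg.norm(A - A_k)
```

The same truncation idea underlies the Tucker and CP models: both replace a large array with a small core plus factor matrices, trading exactness for compactness and interpretability.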
This document discusses Smith charts and impedance matching. It begins with an introduction to resonators, Q factor, and resonant bandwidth. It then covers basic impedance matching networks, including L, T, and π networks, and explains how to use Smith charts to represent LC circuits and perform impedance matching. It also contrasts loaded Q with unloaded Q, shows how to match impedances in different cases, defines matching bandwidth, and covers conversions between series and parallel circuits.
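The single L-network case mentioned above can be illustrated with a standard textbook calculation; the load, source, and frequency values below are hypothetical examples, not taken from the slides. For a purely resistive load smaller than the source resistance, the loaded Q is fixed by the resistance ratio, which then determines both reactances.

```python
import math

# Match a 10 ohm load to a 50 ohm source at 100 MHz with one L-section.
f = 100e6          # design frequency, Hz
R_source = 50.0    # source resistance, ohms
R_load = 10.0      # load resistance, ohms (R_load < R_source)

# For a single L-section the loaded Q is forced by the resistance ratio.
Q = math.sqrt(R_source / R_load - 1.0)

X_series = Q * R_load        # series reactance placed next to the load
X_parallel = R_source / Q    # shunt reactance on the source side

# Realize as a low-pass L-network: series inductor, shunt capacitor.
L = X_series / (2 * math.pi * f)         # inductance, henries
C = 1 / (2 * math.pi * f * X_parallel)   # capacitance, farads
```

Because Q is fixed by the resistance ratio, a single L-section offers no control over matching bandwidth; that is exactly why the slides move on to T and π networks, whose extra element lets the designer choose the loaded Q.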
2nd NIPS Paper Reading: Learning to Learn by Gradient Descent by Gradient Descent (Taku Tsuzuki)
Presentation slides from the 2nd NIPS paper-reading meetup, covering "Learning to Learn by Gradient Descent by Gradient Descent". The optimizer is represented as an LSTM and is itself optimized by backpropagation. By optimizing each component of the objective function independently with an LSTM whose parameters are shared across components, the number of optimizer parameters that must be learned is kept small.
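The coordinatewise parameter-sharing idea can be sketched without any deep-learning framework. In the toy below, one small update rule with shared parameters is applied independently to every coordinate of the optimizee, so the optimizer's parameter count does not grow with the optimizee dimension. The paper uses a two-layer LSTM trained by backpropagation; here a hand-set linear rule on the gradient and the previous update stands in for it, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def optimizer_step(grad, state, w):
    """Apply the shared update rule to each coordinate independently.
    w     -- shared optimizer parameters (identical for every coordinate)
    state -- per-coordinate hidden state (here: the previous update)"""
    update = w[0] * grad + w[1] * state
    return update, update  # (update to apply, new state)

# Optimizee: f(theta) = 0.5 * ||theta||^2, so grad = theta.
theta = rng.normal(size=5)
state = np.zeros_like(theta)
w = np.array([-0.3, 0.1])  # hand-set here; learned end-to-end in the paper

for _ in range(50):
    grad = theta
    update, state = optimizer_step(grad, state, w)
    theta = theta + update

loss = 0.5 * np.dot(theta, theta)
```

Only the two entries of `w` would be trained, regardless of whether `theta` has 5 or 5 million coordinates, which is the point of the coordinatewise design.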