This document discusses automated machine learning (AutoML), focusing on accelerating the Nelder-Mead method for hyperparameter optimization through predictive parallel evaluation. Specifically, it proposes modeling the objective function with a Gaussian process and performing predictive evaluations in parallel, reducing the number of actual function evaluations the Nelder-Mead method needs. The results show this approach reduces evaluations by 49-63% compared to baseline methods.
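The idea can be sketched roughly as follows. This is a minimal illustrative reconstruction, not the paper's actual algorithm: the candidate simplex moves that plain Nelder-Mead would try one at a time (reflection, expansion, contraction) are pre-screened with a Gaussian-process surrogate fitted to past evaluations, and only the predicted-best candidate is evaluated for real. All function names and the toy objective are assumptions.

```python
import numpy as np

def objective(x):
    # toy stand-in for an expensive hyperparameter objective
    return float(np.sum((np.asarray(x) - 1.0) ** 2))

X_hist, y_hist = [], []

def real_eval(x):
    # a "real" (expensive) evaluation, recorded for the surrogate
    y = objective(x)
    X_hist.append(np.asarray(x, float))
    y_hist.append(y)
    return y

def gp_mean(cands):
    # GP posterior mean with an RBF kernel (zero prior mean, small jitter)
    X, y = np.array(X_hist), np.array(y_hist)
    k = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    K = k(X, X) + 1e-8 * np.eye(len(X))
    return k(np.atleast_2d(cands), X) @ np.linalg.solve(K, y)

def nelder_mead_gp(x0, iters=40):
    n = len(x0)
    simplex = [np.asarray(x0, float)] + [np.asarray(x0) + 0.5 * e for e in np.eye(n)]
    fvals = [real_eval(v) for v in simplex]
    for _ in range(iters):
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        c = np.mean(simplex[:-1], axis=0)          # centroid of best n vertices
        w = simplex[-1]                            # worst vertex
        # speculative candidates: reflection, expansion, inside contraction
        cands = [c + a * (c - w) for a in (1.0, 2.0, -0.5)]
        best = int(np.argmin(gp_mean(np.array(cands))))  # GP pre-selection
        y = real_eval(cands[best])                 # a single real evaluation
        if y < fvals[-1]:
            simplex[-1], fvals[-1] = cands[best], y
        else:                                      # shrink toward the best vertex
            for i in range(1, n + 1):
                simplex[i] = 0.5 * (simplex[i] + simplex[0])
                fvals[i] = real_eval(simplex[i])
    i = int(np.argmin(fvals))
    return simplex[i], fvals[i]
```

The saving comes from evaluating only the surrogate-preferred candidate per iteration instead of testing each move in sequence; the paper additionally runs the predictive evaluations in parallel.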
An expanded version of slides from a seminar at QIQB (the Quantum Information and Quantum Biology division of Osaka University's Institute for Open and Transdisciplinary Research Initiatives). It describes the current status and outlook of quantum chemistry calculations on quantum computers.
Two approaches are covered: the traditional gate-based quantum phase estimation method and the variational quantum eigensolver. Very recently an imaginary-time evolution method has also been implemented; that is surveyed in a separate slide deck.
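The variational approach can be illustrated with a minimal numpy sketch (a toy single-qubit Hamiltonian standing in for a molecular one after qubit mapping; all names are illustrative): a parameterized state is prepared and a classical outer loop minimizes the energy expectation value.

```python
import numpy as np

# toy Hamiltonian H = Z + 0.5 X (stand-in for a mapped molecular Hamiltonian)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = Z + 0.5 * X

def ansatz(theta):
    # |psi(theta)> = Ry(theta)|0>, a one-parameter trial state
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    # energy expectation value <psi|H|psi>, the quantity VQE minimizes
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# classical outer loop: here a plain scan instead of a real optimizer
thetas = np.linspace(0.0, 2 * np.pi, 1000)
e_min = min(energy(t) for t in thetas)
exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy for comparison
```

On real hardware the expectation value is estimated from measurement statistics rather than computed exactly, but the structure of the classical-quantum loop is the same.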
An introduction to the AAAI 2023 paper "Are Transformers Effective for Time Series Forecasting?" and the HuggingFace blog post "Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)".
[2010]
Large-scale Image Classification: Fast Feature Extraction and SVM Training
[2011]
High-dimensional signature compression for large-scale image classification
Realization of Innovative Light Energy Conversion Materials utilizing the Sup... (RCCSRENKEI)
The source document is very short and provides little context: it contains only the single word "Real". From this limited information, an informative summary cannot be constructed.
Current status of the project "Toward a unified view of the universe: from la... (RCCSRENKEI)
This document summarizes the current status of a large, multi-institutional project in Japan aimed at developing a unified understanding of structure formation in the universe through multi-level simulations and observations. The project involves over 90 researchers across 21 institutions. It is divided into four sub-projects focusing on: large-scale structures and galaxy formation (Sub A); molecular clouds and planetary formation (Sub B); black holes, supernovae, and radiation transport (Sub C); and the solar system, Venus, and gas giant planets (Sub D). Several key simulations have been performed at unprecedented resolution, including galaxy formation at the star-by-star level, globular cluster dynamics, and a 12.8-billion-point simulation of solar convection.
Fugaku, the Successes and the Lessons Learned (RCCSRENKEI)
The document summarizes the successes and lessons learned from Fugaku, Japan's flagship supercomputer. Key points include:
- Fugaku achieved the top performance on all HPC benchmarks in 2020 and 2021, showing high performance across applications, not just traditional HPC workloads.
- While many applications achieved their target performance, some did not due to issues like insufficient parallelism, I/O scalability problems, and compiler vectorization failures.
- Lessons include the need for improved software stacks, application analysis, and adapting to modern applications beyond classic HPC.
- Looking ahead, sustained exascale performance will require data-centric architectures and corresponding system software and algorithms as transistor scaling slows.
Molecular dynamics (MD) is a very useful tool to understand various phenomena in atomistic detail. In MD, we can overcome the size- and time-scale problems by efficient parallelization. In this lecture, I’ll explain various parallelization methods of MD with some examples of GENESIS MD software optimization on Fugaku.
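As an illustration of the spatial (domain) decomposition idea behind parallel MD, here is a minimal 1-D sketch, not GENESIS code and with all names illustrative: each hypothetical "rank" owns a slab of the domain plus halo atoms within the cutoff of its boundaries, and the decomposed force loop reproduces the serial all-pairs result.

```python
import numpy as np

def pair_force(xi, xj):
    # Lennard-Jones force on atom i from atom j (1-D toy, sigma = eps = 1)
    d = xi - xj
    r = abs(d)
    return 24.0 * (2.0 * r ** -13 - r ** -7) * np.sign(d)

def forces_all_pairs(x, rc):
    # serial reference: every atom interacts with every other within cutoff rc
    n = len(x)
    F = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j and abs(x[i] - x[j]) < rc:
                F[i] += pair_force(x[i], x[j])
    return F

def forces_domain_decomp(x, rc, n_ranks=4):
    # each "rank" owns a slab of the domain and additionally sees halo atoms
    # within rc of its boundaries, mimicking the halo exchange of parallel MD;
    # here the ranks are just iterations of a serial loop
    x = np.asarray(x, float)
    F = np.zeros(len(x))
    lo, hi = x.min(), x.max() + 1e-9
    width = (hi - lo) / n_ranks
    for rank in range(n_ranks):
        a, b = lo + rank * width, lo + (rank + 1) * width
        owned = np.where((x >= a) & (x < b))[0]          # atoms this rank owns
        halo = np.where((x >= a - rc) & (x < b + rc))[0]  # owned + halo atoms
        for i in owned:
            for j in halo:
                if i != j and abs(x[i] - x[j]) < rc:
                    F[i] += pair_force(x[i], x[j])
    return F
```

Each slab's inner loop touches only its own and halo atoms, so the per-rank work is independent and could run on separate MPI ranks, which is the essence of the size-scale parallelization the lecture discusses.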
This document discusses optimizations for deep learning frameworks on Intel CPUs and Fugaku processors. It introduces oneDNN, an Intel performance library for deep neural networks. JIT assembly using Xbyak is proposed to generate optimized code depending on parameters at runtime. Xbyak has been extended to AArch64 as Xbyak_aarch64 to support Fugaku. AVX-512 SIMD instructions are briefly explained.
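The core idea of JIT kernel generation is that shapes and loop bounds become compile-time constants once the runtime parameters are known. Xbyak does this by emitting machine code directly; the following Python sketch is only an architecture-neutral analogy (not oneDNN's or Xbyak's API, and all names are illustrative): it generates kernel source with the sizes baked in and compiles it at runtime.

```python
def make_specialized_matvec(n_rows, n_cols):
    """Generate matvec source with the loop bounds baked in as constants,
    then compile it at runtime -- a stand-in for JIT assembly, where the
    specialized sizes enable unrolling and instruction selection."""
    src = f"""
def matvec(A, x, y):
    for i in range({n_rows}):
        s = 0.0
        for j in range({n_cols}):
            s += A[i][j] * x[j]
        y[i] = s
"""
    ns = {}
    exec(src, ns)  # runtime "compilation" of the specialized kernel
    return ns["matvec"]

# the generated kernel is an ordinary callable specialized to 2x3 inputs
mv = make_specialized_matvec(2, 3)
```

In Xbyak the same step emits AVX-512 (or, via Xbyak_aarch64, SVE) instructions chosen for the concrete parameters, which is why oneDNN regenerates kernels per shape rather than shipping one generic loop.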
This document discusses methods for combining molecular simulations and experimental measurement data. It introduces Bayesian modeling, which treats simulations and measurements within a general probabilistic framework. This allows incorporating measurement data to improve simulation models. Specific techniques covered include Bayesian modeling of equilibrium ensembles to refine protein structures, and maximum entropy methods to directly incorporate measurement data into molecular dynamics simulations. It also discusses machine learning approaches for dynamic processes.
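As a concrete illustration of the maximum-entropy idea, here is a minimal sketch (all names illustrative, not taken from any specific library): the simulated ensemble is reweighted with weights proportional to exp(-lam * O_i), and lam is found by bisection so that the reweighted average of an observable matches the experimental value.

```python
import numpy as np

def maxent_reweight(O_sim, O_exp, lam_lo=-50.0, lam_hi=50.0, tol=1e-10):
    """Find weights w_i proportional to exp(-lam * O_i) such that the
    reweighted ensemble average of observable O equals the experimental
    value O_exp (maximum-entropy correction of a simulated ensemble)."""
    O = np.asarray(O_sim, float)

    def avg(lam):
        # shift by the mean for numerical stability; weights are normalized
        w = np.exp(-lam * (O - O.mean()))
        w /= w.sum()
        return w, float(w @ O)

    for _ in range(200):  # bisection: avg(lam) is monotone decreasing in lam
        lam = 0.5 * (lam_lo + lam_hi)
        w, a = avg(lam)
        if abs(a - O_exp) < tol:
            break
        if a > O_exp:
            lam_lo = lam  # need larger lam to pull the average down
        else:
            lam_hi = lam
    return w, lam
```

In the MD setting the same multiplier enters the simulation as a bias force on the observable, so the ensemble is corrected on the fly rather than reweighted after the fact.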