University of Tsukuba | Center for Computational Sciences
https://www.ccs.tsukuba.ac.jp/
Division of Astrophysics
Chief: OHSUGA Ken, Professor, Ph.D.

We apply fundamental physics to investigate a wide variety of astrophysical phenomena: the birth and evolution of the first stars and galaxies in the universe and the characteristics of the light they emit; the formation and evolution of galaxies, galaxy clusters, and other large-scale structures; the formation and evolution of black holes and their jets and radiation; the formation of planets; and the origin of life in the universe. We are also researching how technology developed in this field may find applications in medicine.
Black-hole physics
In the accretion disk, where matter accumulates owing to the strong gravity of the black hole, powerful radiation and jets are generated through the release of gravitational energy. We are investigating high-energy astrophysical phenomena around black holes and the radiation they produce through general-relativistic radiation-magnetohydrodynamic simulations.
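As a rough, textbook-level estimate (not a result of these simulations), the power liberated by accretion scales with the rest-mass energy of the infalling gas,

\[ L_{\mathrm{acc}} \simeq \eta \, \dot{M} c^{2}, \qquad \eta \sim 0.1, \]

where \(\dot{M}\) is the mass accretion rate and \(\eta\) is the radiative efficiency, of order ten percent for accretion onto a black hole.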
Galaxy formation
In the early universe, massive galaxies and supermassive black holes may have formed through frequent merging of galaxies and rapid gas accretion in primordial galaxy clusters. We are studying physical phenomena occurring in primordial galaxy clusters through cosmological radiation-hydrodynamic simulations.
Fig. 1 Left: black-hole accretion disks and jets in general-relativistic radiation-magnetohydrodynamic simulations. Right: black-hole shadow reproduced by general-relativistic radiative-transfer calculations.
Fig. 2 Cloud dispersal by radiative feedback in star-cluster formation.
Fig. 3 Interaction between interstellar gas and jets.
Fig. 4 A galaxy collision that occurred in the Andromeda Galaxy approximately 1 billion years ago.
Fig. 5 Upper left: large-scale matter distribution in the universe. Upper right: structure of the gas near the central galaxy in a proto-galaxy cluster. Bottom, left to right: galactic gas, heavy elements, and stellar surface density.
Fig. 6 Left: large-scale structure with neutrinos. Right: large-scale structure without neutrinos.
Fig. 7 Radiative transfer in live tissue irradiated by pulsed light.
Supermassive black holes and galaxy evolution
Every galaxy harbors a supermassive black hole at its center. The black hole grows by swallowing stars and interstellar matter, and at the same time it launches powerful jets into the galaxy that blow away interstellar gas and regulate both star formation and black-hole growth. We are investigating these complex cycles in galaxy evolution through relativistic magnetohydrodynamic simulations.
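One standard reference scale for this regulation (a general relation, not a result of this work) is the Eddington luminosity, at which radiation pressure on the accreting gas balances gravity:

\[ L_{\mathrm{Edd}} = \frac{4 \pi G M m_{\mathrm{p}} c}{\sigma_{\mathrm{T}}} \approx 1.3 \times 10^{38} \left( \frac{M}{M_{\odot}} \right) \ \mathrm{erg\,s^{-1}}, \]

where \(M\) is the black-hole mass, \(m_{\mathrm{p}}\) the proton mass, and \(\sigma_{\mathrm{T}}\) the Thomson scattering cross-section.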
Galaxy evolution and dark matter
Dark matter is known to play an important role in the evolution of galaxies, but its nature remains shrouded in mystery, and many contradictions within existing theories persist. We are investigating the nature of dark-matter haloes by studying galaxy collisions.
Neutrinos and the formation of large-scale structures in the universe
We have conducted the world's first high-precision Vlasov simulations of the large-scale structure of the universe that include neutrinos and that outperform conventional N-body simulations. Used in conjunction with future large-scale galaxy surveys, these simulations can constrain the neutrino mass.
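For reference, a Vlasov simulation evolves the phase-space distribution function directly instead of sampling it with particles; in its simplest non-relativistic form it solves the collisionless Boltzmann (Vlasov) equation coupled to the Poisson equation for gravity,

\[ \frac{\partial f}{\partial t} + \mathbf{v} \cdot \nabla_{\mathbf{x}} f - \nabla_{\mathbf{x}} \phi \cdot \nabla_{\mathbf{v}} f = 0, \qquad \nabla^{2} \phi = 4 \pi G \rho, \]

where \(f(\mathbf{x}, \mathbf{v}, t)\) is the distribution function and \(\rho\) is the mass density obtained by integrating \(f\) over velocity.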
Applications of astrophysical techniques in medicine
Diffuse optical tomography (DOT) using near-infrared light in the 700-1000 nm wavelength range has been attracting attention as a new medical diagnostic method because it is non-invasive and does not expose the patient to ionizing radiation. We are studying light propagation in living tissue by applying to DOT the high-precision radiative-transfer techniques developed in astrophysics.
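The common ground between the astrophysical and medical problems is the radiative transfer equation; one standard time-dependent form for the specific intensity \(I(\mathbf{r}, \mathbf{n}, t)\) is

\[ \frac{1}{c} \frac{\partial I}{\partial t} + \mathbf{n} \cdot \nabla I = -(\mu_{a} + \mu_{s}) I + \mu_{s} \int_{4\pi} p(\mathbf{n}, \mathbf{n}') \, I(\mathbf{n}') \, d\Omega' + S, \]

where \(\mu_{a}\) and \(\mu_{s}\) are the absorption and scattering coefficients, \(p\) is the scattering phase function, and \(S\) is a source term.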
Star cluster formation
Most stars in the universe are born as members of star clusters, and the physical processes of star-cluster formation determine the properties of those stars. We are studying star-cluster formation and cloud dispersal by radiative feedback using radiation-hydrodynamics simulations.
Division of High Performance Computing Systems
High-performance, massively parallel numerical algorithms
Chief: BOKU Taisuke, Professor, Ph.D., Director of CCS

The High-Performance Computing Systems Division conducts research and development on hardware and software for high-performance computing (HPC), meeting the demand for ultrafast, high-capacity computing that drives cutting-edge computational science. In collaboration with the Center's teams in the domain sciences, we aim to provide HPC systems ideally suited to real-world problems.
Our research targets a variety of fields, including high-performance computer architecture, parallel programming languages, large-scale parallel numerical algorithms and libraries, computational acceleration using Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs), large-scale distributed storage systems, and grid/cloud environments.
Fig. 1 Performance of parallel 3-D FFTs on Oakforest-PACS (2048 nodes); N = 256 × 512 × 512 × (number of MPI processes).
Fig. 2 Solution time for saddle-point problems.
We are developing high-performance, massively parallel numerical algorithms. One of these is a Fast Fourier Transform (FFT) library: FFTE, an open-source, high-performance parallel FFT library developed by our division, has an auto-tuning mechanism and is suitable for a wide range of systems, from PC clusters to massively parallel machines. It supports real, complex, and mixed-radix transforms as well as parallel transforms.
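FFTE's own interface is not reproduced here; as an illustration of the kind of transform such a library computes, below is a minimal, unoptimized radix-2 Cooley-Tukey FFT in C++ (illustrative code only; FFTE additionally provides mixed-radix and real transforms, auto-tuning, and MPI parallelism).

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <iostream>
#include <vector>

// Minimal radix-2 Cooley-Tukey FFT (recursive, with temporary buffers).
// Assumes the input length is a power of two; production libraries such
// as FFTE also handle mixed radices and run in parallel.
void fft(std::vector<std::complex<double>>& a) {
    const std::size_t n = a.size();
    if (n <= 1) return;

    // Split into even- and odd-indexed subsequences.
    std::vector<std::complex<double>> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    fft(even);
    fft(odd);

    // Combine with twiddle factors exp(-2*pi*i*k/n).
    const double pi = std::acos(-1.0);
    for (std::size_t k = 0; k < n / 2; ++k) {
        const double angle = -2.0 * pi * static_cast<double>(k) / static_cast<double>(n);
        const std::complex<double> w(std::cos(angle), std::sin(angle));
        a[k]         = even[k] + w * odd[k];
        a[k + n / 2] = even[k] - w * odd[k];
    }
}

int main() {
    // Transform a simple 8-point signal and print its spectrum.
    std::vector<std::complex<double>> x = {1, 1, 1, 1, 0, 0, 0, 0};
    fft(x);
    for (const auto& c : x) std::cout << c << '\n';
}
```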
We are also developing large-scale linear-algebra algorithms. We constructed a method that exploits the matrix structure of saddle-point problems and solves them faster than existing methods.
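For context, a saddle-point system has the generic block form below; eliminating the upper-left block through the Schur complement is one standard way such structure is exploited, shown here as background rather than as the specific algorithm we developed:

\[
\begin{pmatrix} A & B \\ B^{\mathsf{T}} & O \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad
S = B^{\mathsf{T}} A^{-1} B, \quad
S y = B^{\mathsf{T}} A^{-1} f - g, \quad
A x = f - B y .
\]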
GPU computing
GPUs were originally designed as accelerators with a large number of arithmetic units for image processing, but they can also perform general-purpose calculations. GPUs have long been used to accelerate applications and are now used for ever larger calculations, often with longer execution times. Shared systems such as supercomputers impose execution-time limits, but jobs can run beyond these limits by using checkpointing, a technique that saves the application state to a file in the middle of a run so that the application can be resumed later.
Checkpointing is well established for applications that use only the central processing unit (CPU), so we extended it to the GPU. The GPU has its own memory, separate from the CPU's, and GPU applications control the GPU from the CPU through dedicated application programming interface (API) functions. We monitor all calls to these API functions, collect the information needed to resume execution, and store it in CPU memory. Additionally, when preparing to save a checkpoint, the data in GPU memory are copied to the CPU side so that they can be written to the checkpoint file together with the CPU state. Because checkpointing must not lengthen the execution time, the overhead of monitoring the API has to be minimal: it is important to monitor as few API functions as possible and to save only the data that are actually needed.
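As an illustration of the interception idea, the sketch below wraps two CUDA runtime calls in an LD_PRELOAD-style shared library. The CUDA functions are the real runtime API, but the bookkeeping and the checkpoint_gpu_memory helper are hypothetical simplifications; a real checkpointer must also track kernel launches, streams, and many other calls.

```cpp
// intercept.cpp: build as a shared library and load with LD_PRELOAD so that
// calls to cudaMalloc/cudaFree are observed before reaching the real CUDA
// runtime. Illustrative sketch only, not our full checkpointing system.
#define _GNU_SOURCE 1
#include <cuda_runtime.h>
#include <dlfcn.h>
#include <cstdio>
#include <map>
#include <vector>

namespace {
// Live device allocations we know about: pointer -> size in bytes.
std::map<void*, size_t> g_allocations;
}

extern "C" cudaError_t cudaMalloc(void** devPtr, size_t size) {
    using fn_t = cudaError_t (*)(void**, size_t);
    static auto real = reinterpret_cast<fn_t>(dlsym(RTLD_NEXT, "cudaMalloc"));
    cudaError_t err = real(devPtr, size);
    if (err == cudaSuccess) g_allocations[*devPtr] = size;  // record for the checkpoint
    return err;
}

extern "C" cudaError_t cudaFree(void* devPtr) {
    using fn_t = cudaError_t (*)(void*);
    static auto real = reinterpret_cast<fn_t>(dlsym(RTLD_NEXT, "cudaFree"));
    g_allocations.erase(devPtr);                             // allocation no longer live
    return real(devPtr);
}

// Hypothetical helper called at checkpoint time: copy every live device
// buffer to host memory and append it to the checkpoint file, so the GPU
// state is saved together with the CPU state.
extern "C" void checkpoint_gpu_memory(const char* path) {
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return;
    std::vector<char> host;
    for (const auto& [ptr, size] : g_allocations) {
        host.resize(size);
        cudaMemcpy(host.data(), ptr, size, cudaMemcpyDeviceToHost);
        std::fwrite(&size, sizeof(size), 1, f);
        std::fwrite(host.data(), 1, size, f);
    }
    std::fclose(f);
}
```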
Project Office for Exascale Computing System Research and Development
Head: BOKU Taisuke, Professor, Ph.D., Director of CCS
Developing advanced fundamental technologies for ExaFLOPS and beyond
A parallel supercomputer's peak computing performance is the product of node (processor) performance and the number of nodes. Thus far, performance has been improved mainly by increasing the number of nodes, but because of problems such as power consumption and high failure rates, simply adding nodes is approaching its limit. The world's highest-performing supercomputers are reaching the exascale, but to reach this level and beyond it is necessary to establish fault-tolerant technology for hundreds of thousands to millions of nodes and to raise the computing performance of individual nodes to the 100 TFLOPS level. For the latter, computation-acceleration mechanisms are promising because they enable strong scaling of the simulation time per step.
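To make the arithmetic concrete (an illustrative calculation, not a description of any particular machine): with peak performance \(P_{\mathrm{peak}} = P_{\mathrm{node}} \times N_{\mathrm{nodes}}\), reaching 1 EFLOPS with 100 TFLOPS nodes requires

\[ N_{\mathrm{nodes}} = \frac{P_{\mathrm{peak}}}{P_{\mathrm{node}}} = \frac{10^{18}\ \mathrm{FLOPS}}{10^{14}\ \mathrm{FLOPS}} = 10^{4}\ \text{nodes}, \]

whereas the same peak with 1 TFLOPS nodes would require a million nodes, which is where power consumption and failure rates become prohibitive.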
In the Next Generation Computing Systems Laboratory, we are conducting research on fundamental technologies for the next generation of HPC systems based on the concept of multi-hybrid accelerated supercomputing (MHAS). FPGAs play a central role in this research: against the backdrop of dramatic improvements in their computation and communication performance, they are applied to integrating computation and communication. Since 2017, we have been developing and expanding PPX (Pre-PACS-X), a mini-cluster for demonstration experiments, based on the idea of using FPGAs as complementary devices that provide the ideal acceleration for applications where the conventional combination of CPU and GPU is insufficient. Cygnus, the 10th-generation supercomputer of the PACS series, was developed as a practical application of this work and began operating in the 2019 academic year. The main technologies being developed are introduced below.
Developing applications that jointly use FPGAs and the GPU
In collaboration with the Division of High-Performance Computing Systems and the Division of Astrophysics, we developed the ARGOT code, which includes the radiation-transport problem in simulations of early-universe object formation, so that it runs fast on a tightly coupled GPU and FPGA platform. The entire ARGOT code can now be run as a single-node GPU+FPGA co-computation that is up to 17 times faster than a GPU-only computation. Additionally, we developed a DMA engine that enables high-speed communication between the GPU and FPGAs without involving the CPU. Furthermore, through joint research with the Division of Global Environmental Science and the Division of Quantum Condensed Matter Physics, we are working on GPU/FPGA acceleration of application codes being developed independently at JCAHPC.
OpenCL interface for FPGA-to-FPGA high-speed optical networks
We developed CIRCUS (Communication Integrated Reconfigurable CompUting System), a communication framework that makes FPGA-to-FPGA networks with physical bandwidths of 100 Gbps easy to use at the user level. On the Cygnus supercomputer it achieved high-performance communication of 90 Gbps, 90% of the theoretical peak of 100 Gbps. Additionally, we applied this framework to the FPGA-offload portion of the ARGOT code and confirmed that parallel FPGA processing runs smoothly.
Parallelized ARGOT code on GPUs & FPGAs
The figure below shows an overview of the multi-node parallelization of the GPU-FPGA ARGOT code. Although each node of Cygnus is equipped with multiple GPUs and FPGAs, the current version of the code assigns a single GPU and a single FPGA to each MPI process on a node. The ART method runs on the FPGA, while the rest of the code, including the ARGOT method, runs on the GPU. Two communication channels are used between nodes: GPUs and CPUs communicate via MPI, and inter-FPGA communication is handled by CIRCUS, creating a large computation and communication pipeline over multiple FPGAs. The original GPU version of the ARGOT code is written in CUDA + MPI; we extended it with OpenCL programming for the FPGA using the CIRCUS framework.
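The per-rank structure described above can be sketched as follows. run_argot_on_gpu, run_art_on_fpga, and exchange_via_circus are hypothetical placeholders standing in for the actual CUDA and OpenCL+CIRCUS implementations, and the MPI halo exchange is reduced to a single boundary value for brevity.

```cpp
#include <mpi.h>
#include <vector>

// Placeholder stand-ins for the real CUDA (ARGOT) and OpenCL+CIRCUS (ART)
// implementations; here they do nothing so the sketch compiles and runs.
void run_argot_on_gpu(std::vector<double>&) {}
void run_art_on_fpga(std::vector<double>&) {}
void exchange_via_circus(std::vector<double>&) {}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nprocs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // One GPU and one FPGA are assigned to each MPI process (rank).
    std::vector<double> field(1 << 16, 0.0);  // local portion of the radiation field
    const int left  = (rank - 1 + nprocs) % nprocs;
    const int right = (rank + 1) % nprocs;

    for (int step = 0; step < 10; ++step) {
        // CPU/GPU side: exchange boundary data between neighbouring ranks via MPI,
        // then run the ARGOT method (and the rest of the code) on the GPU.
        double send = field.front(), recv = 0.0;
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, left, 0,
                     &recv, 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        field.back() = recv;
        run_argot_on_gpu(field);

        // FPGA side: the ART method runs on the FPGA; inter-FPGA communication
        // is handled directly by CIRCUS, bypassing the CPU and MPI.
        run_art_on_fpga(field);
        exchange_via_circus(field);
    }

    MPI_Finalize();
    return 0;
}
```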
Unified GPU–FPGA programming environment
Fig. Left: GPU+FPGA co-computation performance in an early-universe object-formation simulation. Right: Overview of the parallelized ARGOT code cooperating across GPUs and FPGAs.