You Only Look One-level Feature: A Commentary Doubling as Miscellaneous Talk on Object Detection (Yusuke Uchida)
Presentation slides from the 7th All-Japan Computer Vision Study Group, "CVPR2021 Reading Session" (Part 1).
https://kantocv.connpass.com/event/216701/
Covers a commentary on You Only Look One-level Feature, general discussion of the YOLO family, and a broad range of related object-detection methods.
[DL Reading Group] Neural Radiance Flow for 4D View Synthesis and Video Processing (NeRF... (Deep Learning JP)
Neural Radiance Flow (NeRFlow) is a method that extends Neural Radiance Fields (NeRF) to model dynamic scenes from video data. NeRFlow simultaneously learns two fields: a radiance field that reconstructs images as in NeRF, and a flow field that models how points in space move over time, supervised with optical flow. This allows it to generate novel views at new time points. The model is trained end-to-end by minimizing a color-reconstruction loss from volume rendering and an optical-flow reconstruction loss. However, the method requires training a separate model for each scene and does not generalize to unseen scenes.
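The training objective described above can be sketched as a weighted sum of the two reconstruction terms. The following is a minimal illustration only, assuming simple mean-squared-error losses and a hypothetical weight `lam`; it is not the paper's actual loss formulation:

```python
import numpy as np

def nerflow_loss(rendered_rgb, gt_rgb, predicted_flow, gt_flow, lam=0.1):
    """Hypothetical simplification of the NeRFlow objective:
    a photometric loss on volume-rendered colors plus an
    optical-flow reconstruction loss, combined with weight lam."""
    color_loss = np.mean((rendered_rgb - gt_rgb) ** 2)   # volume-rendering color term
    flow_loss = np.mean((predicted_flow - gt_flow) ** 2)  # flow-field supervision term
    return color_loss + lam * flow_loss
```

With perfect reconstruction both terms vanish and the loss is zero; any color or flow error increases it proportionally.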
A System for Practicing Formations in Dance Performance Supported by Self-Pro... (Shuhei Tsuchida)
A collapsed formation in a group dance greatly reduces the quality of the performance, even when the group's dancing is synchronized with the music. Learning a dance formation as a group is therefore as important as learning its choreography. However, if a member cannot attend practice, it is difficult for the remaining members to get a proper sense of the formation. We propose a practice-support system that uses a self-propelled screen so that formations can be rehearsed smoothly even without a dance partner. We developed a prototype of the system and investigated whether the sense of presence provided by two practice methods was close to the sense we really obtain when dancing with a human. The results verified that dancing with a projected video felt closest to dancing with a real dancer, while the trajectories produced when dancing with a self-propelled robot were closest to those produced when dancing with a real dancer. Combining these two methods makes it possible to practice in conditions close to real ones. Furthermore, we investigated whether the self-propelled screen combined the advantages of both methods, and found that it retained only the advantages of dancing with a projected video.
A Dance Performance Environment in which Performers Dance with Multiple Robot... (Shuhei Tsuchida)
In recent years, as robotics technology has progressed, various mobile robots have been developed to dance with humans. Until now, however, there has been no system for interactively creating a performance using multiple mobile robots, so such performances remain difficult. In this study, we construct a mechanism by which a performer can interactively create a performance while considering the correspondence between his/her motion and the mobile robots' movement and light. Specifically, we developed a system that enables performers to freely create performances with multiple robotic balls that can move omnidirectionally and are equipped with full-color LEDs. Performers can design both the movements of the robotic balls and the colors of the LEDs. To evaluate the effectiveness of the system, we had four performers use it to create and demonstrate performances. Moreover, we confirmed that the system performed reliably in a real environment.
Mimebot: Sphere-shaped Mobile Robot Imitating Rotational Movement (MoMM2016 p... (Shuhei Tsuchida)
When designing a performance involving people and mobile robots, we must consider the required functions and shape of the robot. However, it can be difficult to account for all of the requirements. In this paper, we discuss a mobile robot in the shape of a ball that is used in theatrical performances. Such a spherical robot should be agile and be able to roll like a ball. However, it is difficult to create a robot with all of these characteristics. Instead, we propose a mobile robot that can give the audience the optical illusion of the unique movements of a sphere by mounting a spherical LED display on a high-agility wheeled robot. The results of an experiment using a prototype indicate that this sort of robot can broaden the range of possible performances by giving the optical illusion of being a rolling sphere.
Automatic System for Editing Dance Videos Recorded Using Multiple Cameras (Shuhei Tsuchida)
As social media has matured, uploads of video content have increased. Videos of physical performances such as dance, recorded from multiple viewpoints, are difficult to integrate into a high-quality video without knowledge of video-editing principles. In this study, we present a system that automatically edits dance-performance videos taken from multiple viewpoints into a more attractive and sophisticated dance video. Our system can crop each camera's frame appropriately by using the performer's behavior and skeleton information. It determines camera switches and cut lengths following a probabilistic model of general cinematography guidelines and of knowledge extracted from expert experience. Our system automatically edited a dance video of four performers taken from multiple viewpoints, and ten video-production experts evaluated the generated video. In a comparison with another automatic editing system, our system tended to perform better.
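The camera-switch and cut-length step described above could be sketched as a simple sampler. This is a hypothetical simplification, not the system's actual probabilistic model: the function name, the Gaussian cut-length distribution around a mean, and the uniform choice among the other cameras are all assumptions standing in for the paper's learned guidelines:

```python
import random

def sample_edit(num_cameras, total_frames, mean_cut=60, seed=0):
    """Sample a cut list [(camera_index, cut_length), ...] covering
    total_frames. Cut lengths are drawn from a Gaussian around
    mean_cut (clamped to at least 1 frame); the next camera is chosen
    uniformly among cameras other than the current one, reflecting the
    guideline of never cutting back to the same shot.
    Assumes num_cameras >= 2."""
    rng = random.Random(seed)
    cuts, t = [], 0
    cam = rng.randrange(num_cameras)
    while t < total_frames:
        length = min(max(1, int(rng.gauss(mean_cut, mean_cut / 3))),
                     total_frames - t)
        cuts.append((cam, length))
        t += length
        cam = rng.choice([c for c in range(num_cameras) if c != cam])
    return cuts
```

A real system would replace the uniform camera choice with per-camera scores (e.g. derived from performer behavior and skeleton information) and fit the cut-length distribution to expert-edited footage.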
AIST Dance Video Database: Multi-Genre, Multi-Dancer, and Multi-Camera Databa... (Shuhei Tsuchida)
Database
https://aistdancedb.ongaaccel.jp/
AIST Dance Video Database (AIST Dance DB) is a shared database containing original street dance videos with copyright-cleared dance music. This is the first large-scale shared database focusing on street dances to promote academic research regarding Dance Information Processing. The AIST Dance DB will foster a variety of new tasks, such as:
Dance-motion genre classification
Dancer identification
Dance-technique estimation