When designing a performance involving people and mobile robots, we must consider the required functions and shape of the robot. However, it can be difficult to account for all of the requirements. In this paper, we discuss a mobile robot in the shape of a ball that is used in theatrical performances. Such a spherical robot should be agile and be able to roll like a ball. However, it is difficult to create a robot with all of these characteristics. Instead, we propose a mobile robot that can give the audience the optical illusion of the unique movements of a sphere by mounting a spherical LED display on a high-agility wheeled robot. The results of an experiment using a prototype indicate that this sort of robot can broaden the range of possible performances by giving the optical illusion of being a rolling sphere.
A Dance Performance Environment in which Performers Dance with Multiple Robot... (Shuhei Tsuchida)
In recent years, as robotics technology has progressed, various mobile robots have been developed to dance with humans. However, until now there has been no system for interactively creating a performance using multiple mobile robots, so such performances remain difficult to produce. In this study, we construct a mechanism by which a performer can interactively create a performance while considering the correspondence between his/her motion and the mobile robots' movement and light. Specifically, we developed a system that enables performers to freely create performances with multiple robotic balls that can move omnidirectionally and have full-color LEDs. Performers can design both the movements of the robotic balls and the colors of the LEDs. To evaluate the effectiveness of the system, we had four performers use it to create and demonstrate performances. Moreover, we confirmed that the system performed reliably in a real environment.
This document discusses a student project to simulate human gait during stair climbing and batting using video analysis. The project involved identifying joint positions from marker coordinates in video frames and incorporating these into a Python simulation. Key phases of stair climbing and batting motions were modeled. Video processing involved importing footage, selecting frames, and finding marker positions to track major joints. The results provided insight into gait simulation using real-time data and exposure to video analysis software for tracking joint motion over multiple frames.
This document discusses an evolutionary design method for biped walking robots. Traditionally, the robot structure and control system are designed separately, which requires many design trials. The proposed method evolves both morphology and control simultaneously. In the first step, it uses a simple 11-link model to develop basic walking patterns. In the second step, a more detailed six-module model that accounts for the characteristics of real servo modules is used to obtain an optimized design. Genetic algorithms evolve the controller neurons and joint trajectories, and dynamic simulations evaluate each candidate robot for 5 seconds to identify designs that can walk stably. The method is found to co-evolve detailed robot structures and walking patterns suitable for real robots.
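The co-evolution idea can be illustrated with a minimal genetic algorithm whose genome mixes structural and controller genes. Everything below is a hedged sketch: the genome layout, the elitist selection scheme, and above all the `fitness` function (a toy stand-in for the paper's 5-second dynamic walking simulation) are illustrative assumptions, not the authors' implementation.

```python
import random

random.seed(0)

GENOME_LEN = 8        # e.g. 4 link-length genes + 4 joint-trajectory genes (illustrative split)
POP_SIZE = 30
GENERATIONS = 40

def fitness(genome):
    # Stand-in for the paper's dynamic walking simulation:
    # reward genomes close to an arbitrary "stable design" target.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(g, rate=0.1):
    # Gaussian perturbation, clamped to [0, 1]
    return [min(1.0, max(0.0, x + random.gauss(0, 0.05))) if random.random() < rate else x
            for x in g]

pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
init_best = max(pop, key=fitness)

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP_SIZE // 2]             # elitist truncation selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP_SIZE - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases from generation to generation; in the real method, each fitness evaluation would instead run the 5-second walking simulation.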
This document describes research on a hexapod robot called COMET-IV that uses a laser range finder (LRF) to assist with force-based walking. It discusses the robot's configuration including its dimensions, sensors, and control system. An algorithm is proposed that uses the LRF to build a 3D grid map of obstacles in the environment and dynamically adjusts the robot's stable walking range based on force feedback. Experiments show the vision-assisted approach provides more stable walking behavior compared to no vision input. Future work is planned to further integrate sensor data and control leg movements for varied terrain.
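A minimal sketch of the mapping step described above, assuming a planar (2D) slice of the grid, a fixed 25 cm cell size, and a known robot pose; COMET-IV's actual 3D grid and LRF parameters are not reproduced here.

```python
import math

GRID = 20          # 20x20 cells
CELL = 0.25        # 25 cm per cell (assumed resolution)

def lrf_to_grid(scans, pose=(1.0, 1.0, 0.0)):
    """Mark occupied cells from (angle, range) LRF returns.
    scans: list of (angle_rad, range_m); pose: robot (x, y, heading)."""
    grid = [[0] * GRID for _ in range(GRID)]
    px, py, heading = pose
    for angle, rng in scans:
        # Project each return into world coordinates, then bin into a cell.
        x = px + rng * math.cos(heading + angle)
        y = py + rng * math.sin(heading + angle)
        i, j = int(y / CELL), int(x / CELL)
        if 0 <= i < GRID and 0 <= j < GRID:
            grid[i][j] = 1            # obstacle cell
    return grid

# An obstacle 0.5 m directly ahead of the robot:
grid = lrf_to_grid([(0.0, 0.5)])
```

A real implementation would also mark the cells along each beam as free space and add a height dimension; this sketch only records the obstacle endpoints that the walking-range adjustment would consult.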
This document describes a senior design project to develop a library of motion functions for a humanoid robot to play soccer. The project involves reviewing literature on biped walking motion, the robot hardware manual, and previous teams' specs. The design options considered are using different serial interface libraries. The implemented solution and testing results are discussed. The library allows the robot to walk forward and backward, turn, and kick a ball.
The document is a master's thesis from February 10, 2009 on a biologically inspired learning method for biped rhythmic walking using static freezing and freeing. It proposes a learning method where degrees of freedom in a robot are temporarily frozen during early motion learning and then freed as learning progresses. This process is aimed at reducing difficulties in learning and converging on a solution. The method is examined through a numerical computer simulation of a biped walking robot learning to walk rhythmically. Figures 1 and 2 show snapshots of the robot walking with frozen and freed degrees of freedom, respectively. The research achievement section lists a conference paper by the author and others on learning rhythmic walking in a biped robot based on demonstrated motions.
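The freeze-then-free schedule can be sketched as a per-joint mask that holds some degrees of freedom at a fixed value until a chosen training step. The joint layout, schedule values, and `frozen_value` below are hypothetical illustrations; the thesis's actual learning rule is not reproduced.

```python
def frozen_mask(step, joints, free_schedule):
    """Per-joint 'frozen' flags for a given training step.
    free_schedule[i] is the step at which joint i is freed (assumed layout)."""
    return [step < free_schedule[i] for i in range(joints)]

def apply_freeze(action, mask, frozen_value=0.0):
    """Frozen joints are held at a fixed value; free joints keep the learned action."""
    return [frozen_value if f else a for a, f in zip(action, mask)]

# 4 joints: hips freed immediately, knees at step 100, ankles at step 200 (illustrative)
schedule = [0, 0, 100, 200]
mask0 = frozen_mask(0, 4, schedule)          # early learning: knees and ankles frozen
held = apply_freeze([0.3, -0.2, 0.5, 0.1], mask0)
```

Early in learning the controller only explores the unfrozen joints, shrinking the search space; as the step counter passes each schedule entry, the corresponding joint is released.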
This document outlines the daily agenda and reflections for a 19-day robotics course using Lego Mindstorms NXT kits. Each day covers topics like defining robots and their parts, building and programming a tri-bot, introducing sensors and programming concepts, and culminating in a final project showcase for parents. Students reflect on what they are learning and how it applies to real-world uses of robots. The course aims to teach core robotics and programming skills through hands-on building and coding challenges.
This document outlines the daily agenda and reflections for a 19-day robotics workshop using Lego Mindstorms NXT kits. Each day covers different robotics concepts like definitions of robots, sensors, processors, actuators and programming. Students build a tri-bot robot and learn programming concepts like touch sensors, sound blocks, loops and ultrasonic sensors. The agenda culminates in students designing final projects to demonstrate their learning, and a showcase for parents on the last day. Key topics covered include types of movement, writing directions, debugging programs and working as a team.
This document outlines the daily agenda and reflections for a 19-day robotics workshop using Lego Mindstorms NXT kits. Each day covers different robotics concepts like defining robots, sensors, processors, actuators and programming basics. Students build a tri-bot robot and learn programming skills. Later activities include using sensors to navigate mazes, following lines and grabbing objects. The final days are spent planning and building a final project robot to demonstrate skills learned. Reflections focus on learning, challenges and goals. A robotics showcase is held on the last day for parents to see the final projects.
This document provides an overview of topics related to bioelectronic systems and biomedical robotics. It lists 10 promising technologies assisting the future of medicine, including health sensors, artificial intelligence, the end of human experiments, augmented reality, and rehabilitation robots. It then discusses what robotics is, defines a robot, and covers various robot classifications. The document outlines the main problems in robotics, such as forward and inverse kinematics, velocity kinematics, path planning, vision, dynamics, position control, and force control. It provides references for general robotics, biomedical robotics, textbooks, project guides, conferences, and readings. Finally, it shares the syllabus and coursework details for an introductory biomedical robotics lecture course.
Published on Feb. 7, 2018
This slide was presented at Augmented Human 2018.
http://www.sigah.org/AH2018/
Telewheelchair: the Remote Controllable Electric Wheelchair System combined Human and Machine Intelligence
https://dl.acm.org/citation.cfm?id=3174914
【Project page】
http://digitalnature.slis.tsukuba.ac.jp/2017/03/telewheelchair/
【Project movie】
https://www.youtube.com/watch?v=e9bcp0elNFs
【Presenter】
Satoshi Hashizume (橋爪智)
University of Tsukuba,
Digital Nature Group (Yoichi Ochiai)
【Abstract】
Wheelchairs are an essential means of transport for elderly people and the physically challenged. However, wheelchair users often need to be accompanied by caregivers. As society ages and the number of care recipients increases, the burden on caregivers is expected to grow. To reduce this burden, we present Telewheelchair, an electric wheelchair equipped with a remote-control function and a computational operation assistance function. The caregiver can control the Telewheelchair remotely by means of a head-mounted display (HMD). In addition, the proposed system is equipped with a human detection system that stops the wheelchair automatically to avoid collisions. We conducted a user study comparing four system configurations and measured the time taken to complete tasks. Telewheelchair will enhance geriatric mobility and improve society by combining human intelligence and machine intelligence.
This document outlines a student project to build a pick and place robot that can also detect shapes. The robot will have the abilities to follow a line, pick up and place objects, and detect shapes using IR sensors. It will use a microcontroller and 7 DC motors to move its gripper, wheels, and supporting arms to automate assembly and packaging tasks currently done manually. The project aims to increase production speeds and efficiency for applications in LEGO manufacturing, cement block making, and other assembly industries.
This master's report by Pranav R. Shah presents an approach to motion planning that accounts for sensing and motion uncertainty. It describes a method to generate and evaluate trajectories for a 7-DOF robot arm performing the task of picking up an object and inserting it into a hole. Candidate trajectories are generated with RRT and evaluated by propagating the uncertainty in the object pose with an extended Kalman filter (EKF); the trajectory with the minimum final uncertainty is selected as optimal. Simulation results show the approach can handle 10-30 times more uncertainty than traditional methods and still complete the task successfully.
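The selection step described above can be sketched as follows: propagate a covariance matrix along each candidate trajectory with EKF-style prediction updates and keep the trajectory whose final covariance has the smallest trace. The 2x2 state, the per-step Jacobians, and the noise matrix `Q` are illustrative assumptions, and measurement updates are omitted for brevity.

```python
import numpy as np

def propagate_covariance(P0, Fs, Q):
    """EKF-style prediction along a trajectory: P <- F P F^T + Q per step.
    Fs is the sequence of state-transition Jacobians along the trajectory."""
    P = P0.copy()
    for F in Fs:
        P = F @ P @ F.T + Q
    return P

def pick_min_uncertainty(candidates, P0, Q):
    """Index of the candidate trajectory with minimum final covariance trace."""
    traces = [np.trace(propagate_covariance(P0, Fs, Q)) for Fs in candidates]
    return int(np.argmin(traces))

P0 = 0.01 * np.eye(2)                      # initial pose uncertainty (assumed)
Q = 0.001 * np.eye(2)                      # per-step process noise (assumed)
# Two candidate trajectories as sequences of illustrative Jacobians:
stable   = [np.eye(2)] * 5                 # uncertainty grows only by Q
unstable = [1.5 * np.eye(2)] * 5           # uncertainty amplified each step
best = pick_min_uncertainty([stable, unstable], P0, Q)
```

In the report's setting the candidates would come from RRT and the Jacobians from the arm and sensing models; the ranking-by-final-trace step is the part sketched here.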
The document describes the design and fabrication of an intelligent chalk wipe system. The objectives were to design a low-cost, user-friendly blackboard cleaner machine that reduces the time and effort required for cleaning. The methodology involved using a motor to drive the left and right movement of a cross screw threaded rod to move a wiping system vertically along the blackboard. A sensor detects the right end position and signals the motor to return the wiper. The design aims to automate blackboard cleaning and reduce exposure to chalk dust for teachers and students.
H3O 2014 Technical Report, Faculty of Engineering at Helwan University (Hosam Younis)
This document summarizes the mechanical, electrical, and design aspects of an ROV (remotely operated vehicle) created by a team of 5 students. It describes the frame design using aluminum angles for structure and pipes for low drag. 8 thrusters provide motion in various directions. Buoyancy and stability were achieved through calculations. Other systems described include the pneumatic gripper, isolated camera mechanisms, and electrical diagrams. The team's future goals are improving control systems and expanding knowledge. They overcame technical challenges like isolation testing and non-technical challenges by increasing publicity.
This document summarizes a presentation by Ninjaneers on an autonomous hexapod robot capable of object recognition. The objectives were to build a hexapod robot that overcomes the limitations of other mobile robots through autonomous movement, object recognition using a camera and an ultrasonic sensor, and a functional gripper to pick up and move objects. The design and implementation process, the programming approach using SimpleCV, the tests conducted, and the budget are also summarized.
This document describes the process of designing and building a line following robot. The author begins with selecting a two-wheeled configuration with independent drive motors and a caster wheel. A functional block diagram is created to outline the robot's hardware and software components. Various line sensing options are considered, with the QRD1114 optical sensor selected due to its small size and low cost. Placement of multiple sensors in an inverted V configuration is discussed as providing more accurate line position information to enable faster speeds around turns. The overall goal is to create an autonomous robot that can navigate complex black line courses on a white background.
This document describes the process of designing and building a line following robot. The author begins with an overview of the project and outlines the mission to develop an autonomous robot that can follow a black line on a white surface. They discuss different propulsion systems and sensor configurations that were considered. The author settles on a two-wheeled design with independent drives and a free-spinning caster wheel. A functional block diagram is presented and key questions are identified regarding the input sensors, processing, output, and data storage. Different sensor options are evaluated and the author selects the low-cost QRD1114 optical sensor. Placement of multiple sensors in a V-shape configuration is discussed as a way to improve tracking performance.
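One common way to turn a multi-sensor array like this into a line-position estimate is a weighted average of the sensor readings, fed to a proportional steering term. The sensor count, spacing, gain, and sign convention below are assumptions for illustration, not the author's design.

```python
def line_position(readings, spacing=1.0):
    """Weighted-average line position from reflectance readings.
    readings: one value per sensor, higher = more line under that sensor.
    Returns the offset from the array centre in sensor-spacing units,
    or None if no sensor sees the line."""
    total = sum(readings)
    if total == 0:
        return None                      # line lost
    centre = (len(readings) - 1) / 2.0
    pos = sum(i * r for i, r in enumerate(readings)) / total
    return (pos - centre) * spacing

def steering(readings, kp=0.5):
    """Proportional steering command; positive = turn right (assumed convention)."""
    pos = line_position(readings)
    return 0.0 if pos is None else kp * pos

print(line_position([0, 0, 10, 0, 0]))   # line centred -> 0.0
print(line_position([0, 0, 0, 10, 0]))   # line one sensor to the right -> 1.0
```

With an inverted-V layout the geometric spacing differs per sensor, but the same weighted-average idea applies; the finer position estimate is what allows higher speeds through turns.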
The document describes a group project to design a light sensing robotic vehicle using a PIC18F4520 microcontroller board, stepper motors, sensors, and switches. The task is for the robot to search for a light source within a bounded area, stop within 15cm of the light, and re-negotiate its path when obstacles are encountered. The group divided responsibilities and spent four weeks implementing the hardware, writing software, and integrating everything onto the robotic platform. By the end of the project, the light sensing robot was able to follow light, avoid boundaries, react to obstacles, and stop at the desired distance from the light source, as demonstrated in videos of its operation.
This document outlines the design of a tour guide robot for the Chambers Technology Center building. It includes sections on the system design, hardware and software research, project development, justifications for design choices, test results, conclusions, applications, lessons learned, and future improvements. The robot uses sensors and microcontrollers to navigate autonomously around obstacles while providing verbal descriptions of points of interest on its tour route. Hardware includes ultrasonic sensors for obstacle avoidance, a compass sensor for navigation, and a Raspberry Pi for voice recognition and speech. Software includes algorithms for navigation and the Voicecommand program. The team developed the system over the semester and tested its performance.
Effect of Self-animated Avatars in Virtual Environments (mukundraj2)
This document outlines two experiments that investigate the effect of self-animated avatars in virtual environments. The first pilot study found that the presence of an avatar improved task performance for some users, depending on individual factors like gaming experience. A follow-up experiment aimed to account for individual differences and examine how immersion and task difficulty impact avatar effects. It used an object orientation matching task with variations in avatar presence, immersion level, and task difficulty. Results showed significance in some conditions, answering questions about how avatars influence user performance and experience in virtual worlds. Future work could explore other environments, tasks, and feedback methods.
The document outlines an activity plan for exploring various sensors and the piezoelectric effect. It includes 5 main activities: 1) evading a motion detector, 2) exploring smart sensors, 3) making a microphone, 4) exploring the piezoelectric effect through building a PVDF polymer, and 5) measuring piezoelectric and pyroelectric responses. It also describes a design project to build a coin counter using the piezoelectric properties of PVDF film.
This document presents a computational model and simulation of place cells using a continuous attractor neural network (CANN). The simulation implements a virtual robot and four environments. Various conditions are applied to the simulation to observe the activation patterns produced by the CANN. The results are compared to biological studies on rat place cells. The model demonstrates place cell behavior consistent with biological studies but requires further development to provide full robot navigation capabilities.
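The core CANN mechanism can be sketched in one dimension: neurons on a ring with local Gaussian excitation and global inhibition sustain a bump of activity at the location of a transient input, much as a place-cell bump tracks position. The network size, weight parameters, and update rule below are illustrative assumptions, not the model described in the document.

```python
import math

N = 40   # neurons on a ring

def gaussian_weights(n, sigma=3.0, inhibition=0.15):
    """Recurrent weights: local Gaussian excitation minus global inhibition."""
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = min(abs(i - j), n - abs(i - j))   # ring (wrap-around) distance
            w[i][j] = math.exp(-d * d / (2 * sigma * sigma)) - inhibition
    return w

def step(act, w):
    """One synchronous update: recurrent input, rectify, normalise."""
    new = [max(0.0, sum(w[i][j] * act[j] for j in range(len(act))))
           for i in range(len(act))]
    total = sum(new) or 1.0
    return [x / total for x in new]

w = gaussian_weights(N)
act = [0.0] * N
act[10] = 1.0                     # transient input at "place" 10
for _ in range(20):               # input removed; bump sustained by recurrence
    act = step(act, w)

peak = act.index(max(act))
```

After the input is removed, the recurrent excitation keeps a bump centred where the input was, which is the attractor property the place-cell simulation relies on.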
This project involved developing a 1/3 scale robotic transport skid that can follow a leader and navigate autonomously when line of sight is lost. The skid met over 50% of needs statements and over 60% of priority 1 needs. It replicates the movement of a full-scale weapon transport skid and can track and follow a leader wearing a green marker. When line of sight is lost, it uses ultrasonic sensors to navigate and avoid obstacles while searching to reacquire the leader. Testing showed it can successfully follow a leader and navigate corridors to find its target when blinded. Potential improvements include enhancing obstacle detection and developing a non-color based target tracking method.
This resume is for Andika Pramanta Yudha. He graduated with a master's degree in electrical engineering with a 3.96 GPA. His areas of expertise include robotics, control systems, embedded systems, programming languages like C++, Java and MATLAB. Notable projects include developing a delta robot, quadcopter, exoskeleton and humanoid robots for competitions. He has received several awards for his robotics projects and research. The resume provides details on the technical aspects of his projects and the tools and methods used.
This resume is for Andika Pramanta Yudha. He graduated with a master's degree in electrical engineering with a 3.96 GPA. His areas of expertise include robotics, control systems, embedded systems, programming languages like C++, Java and MATLAB. Notable projects include developing a delta robot, quadcopter, exoskeleton and humanoid robots for competitions. He has received several awards for his robotics projects and research. The resume provides details on the technical aspects of his projects and the tools and methods used.
Mimebot: Sphere-shaped Mobile Robot Imitating Rotational Movement (MoMM2016 presentation, Shuhei Tsuchida)
3. 1. Background
2. Research Purpose
3. Proposed Method
4. Preliminary Study
5. Improvement of System and Evaluation
6. Testing the ability of the proposed system
7. Summary
5. 5
A performance involving people and mobile robots
Reference: Dance with Drones, https://www.youtube.com/watch?v=HQLORg5COiU
Problem
・Noisy
・Risk of crashing
・Difficult to control
Background
7. 7
How to provide a stable system
For a general system:
System → User
Function (dependability):
・Failsafe
・Multiplexing (redundancy)
Background
8. 8
System Performer /
Stage director
Function Stage effects
Audience
How to provide a stable system
For a stage effect system
Background
9. 9
System Performer /
Stage director
Function Stage effects
Audience
How to provide a stable system
For a stage effect system
Background
Dependability
10. 10
System Performer /
Stage director
Function Stage effects
Audience
How to provide a stable system
For a stage effect system
Background
Dependability → Appearance of dependability
Stabilize the system while keeping the same stage effect
16. 1. Background
2. Research Purpose
3. Proposed Method
4. Preliminary Study
5. Improvement of System and Evaluation
6. Testing the ability of the proposed system
7. Summary
17. Problem of a rolling robot 17
Expression of
rotational movement
Agility
Accuracy of
posture estimation
18. Problem of a rolling robot 18
Expression of
rotational movement
Agility
Accuracy of
posture estimation
Cross section
Actual appearance
19. Problem of a rolling robot 19
Expression of
rotational movement
Agility
Accuracy of
posture estimation
Cannot increase acceleration
Cannot make a sudden turn
because of centrifugal force
20. Problem of a rolling robot 20
Expression of
rotational movement
Agility
Accuracy of
posture estimation Cannot attach a marker for
motion capture to
the outer surface
21. Problem of a rolling robot 21
Expression of
rotational movement
Agility
Accuracy of
posture estimation
High-agility wheeled mobile robot
22. Problem of a rolling robot 22
Expression of
rotational movement
Agility
Accuracy of
posture estimation
High-agility wheeled mobile robot
Spherical LED display visually
reproduces rotational movement
23. 1. Background
2. Research Purpose
3. Proposed Method
4. Preliminary Study
5. Improvement of System and Evaluation
6. Testing the ability of the proposed system
7. Summary
24. Preliminary Investigation on Rotational Movement
Expression of a Spherical LED Display 24
Purpose
Build a mobile robot equipped
with a spherical LED display
and evaluate its visual effect.
27. 27
Device
Spherical
LED display
Mobile robot
Diameter Φ350 mm
Weight 3500 g
Speed 0.42 m/s
Preliminary Investigation on Rotational Movement
Expression of a Sphereical LED Display
miniUSB
Multi plug
Breadboard Cover
31. 31
・The larger the lit surface area and the greater the
variation of the light pattern, the stronger the
illusion effect tended to be.
・Nine of the eleven participants noticed that
the velocity of the sphere and the rotational speed of
the lights did not match, and commented on it.
Result
Preliminary Investigation on Rotational Movement
Expression of a Spherical LED Display
32. 1. Background
2. Research Purpose
3. Proposed Method
4. Preliminary Study
5. Improvement of System and Evaluation
6. Testing the ability of the proposed system
7. Summary
33. Improvement of System 33
Improvement points
・Synchronize the rotation of the lights
of the spherical LED display
with the distance moved by the robot.
・In consideration of an actual stage
performance, the driving part should
be as low-profile and as small as possible.
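The synchronization in the first improvement point reduces to the rolling-without-slipping relation θ = d / r: the displayed light pattern must rotate by exactly the angle a real ball of the same radius would rotate while travelling the measured distance. A minimal sketch of this mapping (function names are ours, and using odometry as the distance source is an assumption, not a detail stated in the slides):

```python
import math

def pattern_rotation(distance_m: float, sphere_radius_m: float) -> float:
    """Angle (radians) the LED pattern must rotate so a sphere of the
    given radius appears to roll `distance_m` without slipping:
    theta = d / r."""
    return distance_m / sphere_radius_m

# Example: the new device is 200 mm in diameter, i.e. a 0.1 m radius.
theta = pattern_rotation(1.0, 0.1)        # 10 rad for 1 m of travel
revolutions = theta / (2 * math.pi)       # about 1.6 full turns
```

Driving the display from this angle, rather than from elapsed time, keeps the apparent rotation consistent even when the robot accelerates or stops.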
37. 37
Motor cover Battery
mbed
Xbee
Spherical LED
display
Mobile robot
Device
Disassembled
Improvement of System
Table: Spec
           Previous    New device
Diameter   350 mm      200 mm
Height     110 mm      60 mm
Weight     3500 g      600 g
CPU        H8/36064    STM32F405
38. 38
Acceleration comparison test of spherical mobile robots
Arrival times for a distance of 1.0 m:
・Sphero: 1.39 sec
・Proposed robot: 1.00 sec
Improvement of System
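If each robot is assumed to start from rest and accelerate uniformly over the 1.0 m course (an assumption of ours; the slides report only arrival times), the times above imply average accelerations of roughly 1.0 m/s² for Sphero and 2.0 m/s² for the proposed robot, via a = 2d/t²:

```python
def mean_acceleration(distance_m: float, time_s: float) -> float:
    """Uniform acceleration from rest: d = (1/2) a t^2, so a = 2 d / t^2."""
    return 2.0 * distance_m / time_s ** 2

a_sphero = mean_acceleration(1.0, 1.39)    # ~1.04 m/s^2
a_proposed = mean_acceleration(1.0, 1.00)  # 2.00 m/s^2
```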
39. Experiment 41
Experiment items
1. Whether participants who did not see the movements of
the spherical mobile robot experience the optical illusion of a
rolling sphere.
→ Possibility of the alternative mechanism
2. The relationship between the velocity of the robot and the
perception of the optical illusion.
→ Allowable range of the alternative mechanism
3. Influence of the deviation between the amount of rotation
and the moving distance on the illusion.
→ Allowable range of the alternative mechanism
Participants
12 males (average age 22.7)
41. 43
Experiment procedure
Step 1 → Step 2 → Step 3 → Step 4
Experiment
A participant watches a
performance involving
the mobile robot equipped
with a spherical LED display
and has no knowledge about
how it operates.
Figure: stage layout (dimensions 6 m, 4.4 m, 2 m, 0.5 m, 0.23 m; Performer and Participant positions)
44. 46
Experiment procedure
Step 1 → Step 2 → Step 3 → Step 4
Experiment
・A spherical LED display presents the rotation (0.6 m/s).
・After the presentation, an experimenter orally asks the
participant two questions:
Q1: Did the sphere appear to be rolling?
Q2: What mechanism do you think makes the sphere move?
48. 50
Experiment procedure
Step 1 → Step 2 → Step 3 → Step 4
Experiment
Participant
・Participants confirm
the actual LED ball
rolling from the same
position as in Step 1.
49. 51
Experiment procedure
Step 1 → Step 2 → Step 3 → Step 4
Experiment
Participant
・Participants confirm
the actual LED ball
rolling from the same
position as in Step 1.
Participant
Actual LED ball
Spherical LED display
50. 52
・The LED display showed the following
7 visual effect patterns.
Velocity:
0.3, 0.6, 0.9, 1.2 [m/s]
Amount of rotation of the lights with respect to the moving
distance:
0.5, 1.5, 2.0 (velocity 0.6 m/s)
Experiment procedure
Step 1 → Step 2 → Step 3 → Step 4
Experiment
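The seven patterns combine a velocity sweep with an off-ratio sweep. They can be enumerated as (velocity, rotation-to-distance ratio) pairs, assuming (our reading of the setup) that the velocity sweep uses the nominal 1.0 ratio:

```python
# Velocity sweep at the nominal rotation-to-distance ratio of 1.0
velocities = [0.3, 0.6, 0.9, 1.2]   # m/s
# Ratio sweep at a fixed velocity of 0.6 m/s
ratios = [0.5, 1.5, 2.0]            # amount of rotation / moving distance

conditions = [(v, 1.0) for v in velocities] + [(0.6, r) for r in ratios]
assert len(conditions) == 7          # the 7 visual effect patterns
```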
52. 54
Q1: Did the sphere appear to be rolling?
Experiment
Step 1 → Step 2 → Step 3 → Step 4
Yes: 11 of 12 participants
Results and Consideration
53. 55
Experiment
Step 1 → Step 2 → Step 3 → Step 4
Results and Consideration
Q2: What mechanism do you think makes the sphere move?
"I thought that the sphere was just rolling."
"I thought that the LED ball was rolling by running on electric rails."
"I thought there was a driving part inside the LED ball and
that the LED ball was rolling because its center of gravity
was moving."
54. 56
Experiment
Step 1 → Step 2 → Step 3 → Step 4
Results and Consideration
Q2: What mechanism do you think makes the sphere move?
"I thought that the sphere was just rolling."
"I thought that the LED ball was rolling by running on electric rails."
"I thought there was a driving part inside the LED ball and
that the LED ball was rolling because its center of gravity
was moving."
These answers confirmed that the
participants perceived the sphere to be
physically rolling.
55. 57
Experiment
Step 1 → Step 2 → Step 3 → Step 4
Results and Consideration
Participants didn’t notice
・We explained the mechanism and confirmed whether
the participants had noticed it.
12 of 12 participants had not noticed.
56. 58
Experiment
Experiment items
1. Whether or not participants who did not see the movements
of the spherical mobile robot
experience the optical illusion of a rolling sphere.
2. The relationship between the velocity of the robot and the
perception of the optical illusion.
3. Influence of the deviation between the amount of rotation
and the moving distance on the illusion.
57. 59
Experiment
Experiment items
1. Whether or not participants who did not see the movements
of the spherical mobile robot
experience the optical illusion of a rolling sphere.
→ Participants feel the ball is actually rolling.
2. The relationship between the velocity of the robot and the
perception of the optical illusion.
3. Influence of the deviation between the amount of rotation
and the moving distance on the illusion.
58. 60
・Participants watch an actual rolling LED ball
(to establish evaluation criteria).
Experiment
Step 1 → Step 2 → Step 3 → Step 4
Results and Consideration
60. 62
Q: Did the sphere appear to be rolling?
・The relationship between the velocity of the robot
and the perception of the optical illusion.
Figure: Average evaluation scores of the 12 participants.
When the speed was extremely fast or slow,
the illusion effect tended to weaken.
Experiment
Results and Consideration
61. 63
Experiment
Experiment items
1. Whether or not participants who did not see the movements
of the spherical mobile robot
experience the optical illusion of a rolling sphere.
→ Participants feel the ball is actually rolling.
2. The relationship between the velocity of the robot and the
perception of the optical illusion.
→ When the speed was extremely fast or slow,
the illusion effect tended to weaken.
3. Influence of the deviation between the amount of rotation
and the moving distance on the illusion.
62. 64
・Influence of the deviation between the amount of
rotation and the moving distance on the illusion.
* p < .05
Experiment
Results and Consideration
Q: Did the sphere appear to be rolling?
Figure: Average value of the evaluation of 12 participants.
The amount of rotation [m] / The moving distance [m]
63. 65
Experiment
Experiment items
1. Whether or not participants who did not see the movements
of the spherical mobile robot
experience the optical illusion of a rolling sphere.
→ Participants feel the ball is actually rolling.
2. The relationship between the velocity of the robot and the
perception of the optical illusion.
→ When the speed was extremely fast or slow,
the illusion effect tended to weaken.
3. Influence of the deviation between the amount of rotation
and the moving distance on the illusion.
→ The illusion effect diminishes as the amount of rotation
increases relative to the moving distance.
64. 66
Experiment
Experiment summary
• Eleven of the twelve participants experienced the illusion
that the sphere was rolling when they watched the performance,
which was surprising.
• The improved hardware and software made the illusion
possible even when the LED ball moved slowly or
quickly.
• We found that attention needs to be paid to the relationship
between the amount of rotation and the moving distance when
setting up a performance.
65. 1. Background
2. Research Purpose
3. Proposed Method
4. Preliminary Study
5. Improvement of System and Evaluation
6. Testing the ability of the proposed system
7. Summary
68. 70
Experiment
Comments by Performer
• Until being told how the mechanism worked, I was convinced that
the sphere was physically rotating on its own.
• It looked as if the sphere was attached to the wall, and rolling on it.
• It seemed that the camera angle was changing, which was very
interesting.
• It looked as if the robot was surprised.
• It looked as if the video’s playback speed was changing.
• In one scene the robot was gentlemanlike.
Comments by Audience
• The spherical robot on a moving stand demonstrated higher power
output and mobility than expected.
• It was able to move very naturally in scenes such as when it
pushed against a wall, without appearing burdened.
69. 1. Background
2. Research Purpose
3. Proposed Method
4. Preliminary Study
5. Improvement of System and Evaluation
6. Testing the ability of the proposed system
7. Summary
70. Summary 72
・We propose the concept of using pseudo-physical
movements in performances with robots.
・We built a robot that visually reproduces
the movements of a rolling sphere and is capable of
faster movement and easier position estimation
compared with previous spherical robots.
・We created a performance in which the robot
interacted with a professional performer.