The slides presented at ITSC 2019. In this work we did the following:
- Simulate the driver's attention on a 3D map
- Analyze obstacles that pose a potential collision hazard
- Show the boundary that divides expected and unexpected obstacles during driving
Development of Nighttime Visibility Assessment System for road using a Low Li... (inventionjournals)
Although the numbers of traffic accidents and fatalities in Korea have decreased steadily, traffic accidents during nighttime have not. It is therefore necessary to conduct comprehensive studies that investigate, analyze, and assess the visibility environment of drivers in order to ensure road safety at night. The purpose of this study is to develop technology for acquiring and analyzing the nighttime road driving environment from the driver's viewpoint. To this end, the study proposes a nighttime visibility assessment system that can quantify suitability: it defines the driver's visibility, selects an effectiveness scale, and develops an assessment model that reflects the driver's level of recognition. The system consists of two parts: an investigation device using a low-light camera equipped with an investigation program, and a web-based assessment program built on a document database. In the future, the system will be verified under various drivers' visual environments, and a pilot field application is planned to improve the accuracy of nighttime road visibility assessment.
The document proposes a master's thesis to develop methods for measuring a driver's situational awareness during the transition from highly automated driving to manual control. It involves using sensors like eye trackers and cameras to recognize driver activities and detect their level of attention. Features will be extracted from eye and head movements to calculate a measure of situational awareness. A driving simulator will be used along with sensors to classify activities and evaluate how quickly drivers can resume control. The goal is to help vehicles determine if drivers are ready to manually drive during transitions from automated to manual modes.
Spot speed studies involve measuring the instantaneous speeds of vehicles at a point on the road. There are two main methods - measuring the time taken to travel a short distance or using a radar speed meter. Spot speeds are useful for traffic planning, road design, setting speed limits, and accident analysis. The radar method is efficient as it can instantly and automatically measure and record speeds accurately. Time-mean speed is the average of all instantaneous speeds measured, while space-mean speed represents the average speed of all vehicles traveling along a road section. Spot speed studies provide important input for various traffic engineering problems.
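The distinction between the two averages in the summary above can be made concrete: time-mean speed is the arithmetic mean of the spot speeds, while space-mean speed is their harmonic mean. A minimal sketch with hypothetical spot-speed data:

```python
# Time-mean vs. space-mean speed from spot speeds measured at one point.
# The sample speeds below are illustrative, not from the document.

def time_mean_speed(speeds):
    """Arithmetic mean of the instantaneous spot speeds."""
    return sum(speeds) / len(speeds)

def space_mean_speed(speeds):
    """Harmonic mean of the spot speeds, which corresponds to the
    average speed of vehicles traversing a road section."""
    return len(speeds) / sum(1.0 / v for v in speeds)

speeds_kmh = [40.0, 50.0, 60.0]          # example spot speeds in km/h
tms = time_mean_speed(speeds_kmh)        # 50.0
sms = space_mean_speed(speeds_kmh)       # ~48.65
# Space-mean speed never exceeds time-mean speed when speeds vary.
```

The gap between the two means grows with the spread of the observed speeds, which is why the choice of average matters in traffic engineering calculations.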
A major challenge for the next decade is to design virtual and augmented reality systems (VR at large) for real-world use cases such as healthcare, entertainment, e-education, and high-risk missions. This requires VR systems to operate at scale, in a personalized manner, remaining bandwidth-tolerant whilst meeting quality and latency criteria. One key challenge to reach this goal is to fully understand and anticipate user behaviours in these mixed reality settings.
This can be accomplished only by a fundamental revolution of network and VR systems, which must put the interactive user at the heart of the system rather than at the end of the chain. With this goal in mind, in this talk we describe our current research on user-centric systems. First, we describe our viewport-based streaming strategies for 360-degree video. Then, we present in more detail our research on users' behaviour analysis when they interact with 360-degree content. Specifically, we describe a set of metrics that allows us to identify key behaviours among users and quantify the level of similarity of these behaviours, and we present our clique-based clustering methodology together with information-theoretic and trajectory-based in-depth analyses. Finally, we conclude with an overview of the extension of this work to navigation within volumetric video sequences.
This document describes using fuzzy logic for robot navigation. Ultrasonic sensors are mounted on a robot to detect obstacles to the right, front, and left. Fuzzy logic is used to coordinate multiple reactive behaviors like obstacle avoidance, following edges, and moving toward a target. Simulation results show the strategy allows efficient navigation in complex environments. The robot can avoid obstacles, decelerate at turns, escape U-shapes, and reach targets using integrated ultrasonic sensors and fuzzy behavior control.
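The behaviour coordination described above can be illustrated with a toy fuzzy controller; the membership shape, rule weights, and steering outputs below are illustrative assumptions, not the document's actual rule base:

```python
# Hedged sketch of fuzzy blending of reactive behaviours for obstacle
# avoidance from three ultrasonic range readings (left, front, right).

def near(d, lo=0.2, hi=1.0):
    """Membership of distance d (metres) in the fuzzy set NEAR:
    1 when very close, 0 when far, linear in between."""
    if d <= lo:
        return 1.0
    if d >= hi:
        return 0.0
    return (hi - d) / (hi - lo)

def fuzzy_steer(left, front, right):
    """Blend 'steer away from the nearer side' rules by weighted
    average (a simple centroid-style defuzzification).
    Output in [-1, 1]: negative = steer left, positive = steer right."""
    rules = [
        (near(right), -1.0),  # obstacle near on the right -> steer left
        (near(left), +1.0),   # obstacle near on the left  -> steer right
        (near(front), +0.5),  # obstacle ahead -> bias to one side
    ]
    total = sum(w for w, _ in rules)
    if total == 0.0:
        return 0.0            # free space: go straight
    return sum(w * out for w, out in rules) / total
```

Because every active rule contributes in proportion to its membership degree, the robot turns smoothly as obstacles approach instead of switching abruptly between behaviours.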
Human Movement Recognition Using Internal Sensors of a Smartphone-based HMD (... (sugiuralab)
The document proposes recognizing human movements using the internal sensors of a smartphone in a head-mounted display (HMD) without external controllers. It collected sensor data from participants performing 16 movements and used machine learning to recognize the movements with 92.03% accuracy on average. However, there was a long time lag between movement detection and recognition completion. Shortening the sensor recording time decreased accuracy but could enable faster recognition.
Eye Gesture Analysis for Prevention of Road Accidents (ijsrd.com)
Around the globe, road accidents are a daily cause of death. Research has been conducted intensively in an attempt to reduce accidents and improve driver assistance systems. The core idea of this paper is a process designed to enhance the intelligent driver assistance system and to provide a safety system that assesses the driver's perspective from within the vehicle. The system uses a dynamic CCD camera in the vehicle that observes the driver's face. The driver's eye pattern is compared against a set of existing templates of the driver gazing at various focal points inside the vehicle. The windscreen is further divided into segments, and comparing the driver's eye-gaze pattern with the existing stencil determines the driver's viewpoint on the windscreen. For instance, if the driver is detected to be drowsy, with eyelids closed for more than a few seconds, he is alerted automatically.
This document summarizes research on evaluating overtaking sight distance using drivers' psycho-emotional responses rather than physical road parameters alone. The researchers measured galvanic skin response and heart rate in over 120 real overtaking maneuvers to determine when drivers felt stress. They found that standard sight distances are insufficient for over half of the observed maneuvers, and that drivers often overtake without sufficient information or while exceeding the speed limit. Measured sight distances exceeded design values by 20-40%, and more research is needed to understand drivers' emotional responses during overtaking.
Automated Laser Scanning System For Reverse Engineering And Inspection (Jennifer Daniel)
This document summarizes an automated laser scanning system developed for reverse engineering and inspection of parts with freeform surfaces. The system generates optimal scan plans considering parameters like view angle, depth of field, and occlusion. It uses a laser scanner mounted on a motorized rotary table to automatically scan parts according to the generated scan plans. The point data is then automatically registered and evaluated by comparing to CAD models. The system aims to automate the scanning process for more efficient inspection and reverse engineering of complex parts.
The document presents a seminar on driver drowsiness detection. It discusses the increasing problem of accidents due to drowsy driving and outlines the objectives of building a system to detect driver drowsiness in real-time through monitoring eye blinks and alerting the driver. The proposed methodology uses a behavioral approach including eye detection, blink counting, and analysis to determine drowsiness levels and provide alerts or vehicle control interventions if needed.
This document discusses human factors that affect road safety. It begins by outlining the objectives of understanding road traffic safety, human factors, and causes of accidents. It then defines human factors and how they influence driver behavior and crash causes, such as attention, perception, and reaction time. The document also examines the driving task model and how road design can support driver expectations and abilities through consistent information presentation and accommodation of human limitations. In conclusion, it stresses the importance of road safety education to positively guide road users.
This document describes an experimental evaluation of different user interfaces for visual indoor navigation. The study compared augmented reality (AR) to virtual reality (VR) and found that VR was faster and seemed more accurate to users. It also tested a feature indicator and found it increased the number of identifiable features in images. Finally, it evaluated object highlighting and found a soft border version was less distracting than a framed version. The novel user interfaces improved localization accuracy and were more effective and popular than traditional AR interfaces.
This document summarizes a survey of traffic data visualization techniques. It describes how traffic data is collected from sensors and organized. Common visualization methods are discussed, such as using lines and regions to show trajectories and aggregated spatial data, and space-time cubes to visualize trajectories over time. The document also outlines techniques for visualizing multiple attributes, analyzing patterns and clustering trajectories, monitoring traffic situations, and exploring data. Future work opportunities include developing systems for big data, situation awareness, and the integration of different visualization views.
In advance accident alert system & Driver Drowsiness Detection (IRJET Journal)
This document presents a driver drowsiness detection system that uses video clips and facial tracking algorithms to monitor drivers for signs of fatigue such as eye closure, yawning, and head tilt. It detects 68 facial landmarks to analyze the eye aspect ratio, mouth aspect ratio, and head position over time. If eyes are closed for more than 5 seconds, frequent yawning occurs, or the head is not straight, the system will alert the driver via sounds and send SMS alerts with GPS location to concerned contacts for safety. The system aims to help prevent accidents caused by drowsy driving in a low-cost, portable, and accurate manner.
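The eye-closure check in the summary above is commonly built on the eye aspect ratio (EAR) over six of the 68 facial landmarks per eye; the landmark layout follows the usual 68-point convention, and the toy coordinates below are illustrative, not the paper's data:

```python
# Sketch of the eye-aspect-ratio (EAR) computation used in
# landmark-based drowsiness detection.
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6 around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward 0
    as the eyelid closes, so a sustained low EAR signals closure."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
ear_open = eye_aspect_ratio(open_eye)      # higher: eye open
ear_closed = eye_aspect_ratio(closed_eye)  # much lower: eye shut
```

In practice the system would track EAR per video frame and raise the alert only after it stays below a calibrated threshold for the configured duration (the 5-second window mentioned above).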
Presentation on Spot Speed Study Analysis for the course CE 454 (nazifa tabassum)
This presentation describes the process of Spot Speed Study Analysis, how it can be performed and how the findings from such studies can help to improve road design in urban areas.
This document provides a summary of a project report on coded target detection and 3D coordinate point generation using photogrammetry. The objective of the project was to generate 3D coordinate points from a set of 2D images of an object embedded with retroreflective dots. Several images of the object were taken from different angles and processed to detect the dots, match corresponding dots across images, perform camera calibration, and calculate the exterior orientation to ultimately generate the 3D coordinate points through bundle adjustment. The algorithm involved steps like grayscale conversion, dot detection, pattern matching, internal camera calibration, external orientation, and generating the 3D coordinates. A graphical user interface was developed in MATLAB to implement the algorithm and output the 3D points.
Deep Learning Algorithm Using Virtual Environment Data For Self-Driving Car (sushilkumar1236)
The document presents a deep learning algorithm for a self-driving car that uses computer vision techniques. It discusses using cameras, sensors, and machine learning models to process image data for tasks like lane detection, road sign identification, obstacle detection and avoidance. The design uses a convolutional neural network trained on thousands of images to classify objects. Experimental results showed this approach can reliably perform key computer vision tasks necessary for autonomous driving.
Automated Motion Detection from space in sea surveillance (Liza Charalambous)
This document summarizes research on automated motion detection of vessels from space using satellite imagery. The researchers used ALOS PRISM satellite triplets to detect vessel movement in ports in Cyprus. Through image segmentation, pattern extraction and description, and proximity searching between images, the method detected vessel movement, speed, and direction. It achieved over 90% detection rates but struggled with small vessels. Combining results from multiple image sets improved reliability. The researchers conclude motion detection from satellites can provide critical maritime security information when combined with other data sources.
Abstract Simulation Scenario Generation for Autonomous Vehicle Verification (M. Ilhan Akbas)
- Simulation's necessity in AV verification
- Our approach to simulation within an AV verification framework
- Our approach for the verification of AV decision making
- Definition and creation of scenarios for simulation
A Method for Predicting Vehicles Motion Based on Road Scene Reconstruction an... (ITIIIndustries)
The suggested method helps predict vehicle movement in order to give the driver more time to react and avoid collisions. The algorithm dynamically models the road scene around the vehicle based on data from the onboard camera. All moving objects are monitored and represented by the dynamic model on a 2D map. After analyzing every object's movement, the algorithm predicts its likely behavior.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document provides a literature review of lane detection techniques for real-time road lane detection systems. It discusses how lane detection is an important aspect of intelligent transportation systems and driver assistance systems. The review covers various existing approaches to lane detection including image processing methods, edge detection, the Hough transform, and lane departure recognition. It identifies some limitations in existing methods, such as poor performance under difficult environmental conditions or on curved roads. The document proposes developing a new lane detection method to address these limitations and improve accuracy for real-time applications.
Authoring a personal GPT for your research and practice: How we created the Q... (Leonel Morgado)
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done by teams. Team members must ground their activities on a common understanding of the major concepts underlying the thematic analysis and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given its distributed and uncertain nature, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial provides a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis of immersive learning accounts in a survey of the academic literature: the QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop ideas for their own qualitative coding ChatGPT. Participants who have a paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and a slide deck that participants can use to continue developing their custom GPT. A ChatGPT Plus subscription is not required to participate in this workshop, only for trying out personal GPTs during it.
Similar to: Safety Criteria Analysis for Negotiating Blind Corners in Personal Mobility Vehicles Based on Driver's Attention Simulation on 3D Map
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...Travis Hills MN
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
The cost of acquiring information by natural selectionCarl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Immersive Learning That Works: Research Grounding and Paths Forward
Safety Criteria Analysis for Negotiating Blind Corners in Personal Mobility Vehicles Based on Driver’s Attention Simulation on 3D Map
1. Safety Criteria Analysis for Negotiating Blind Corners in Personal Mobility Vehicles Based on Driver’s Attention Simulation on 3D Map
Nagoya University, Japan
Naoki Akai, Takatsugu Hirayama, Luis Yoichi Morales, and Hiroshi Murase
2. Background
⚫ Can we say that autonomous navigation systems are safe?
• There is a trade-off between safety and speed [1]
• Overly safe navigation compromises speed and smoothness
• How do human drivers determine this trade-off?
[1] Y. Yoshihara, L.Y. Morales, N. Akai et al. Autonomous predictive driving for blind intersections. In Proc. of the IEEE/RSJ IROS, pp. 3452-3459, 2017.
3. Motivation
⚫ Find a reasonable trade-off from human driving data
• Overly safe behavior is of course not suitable for autonomous navigation
• However, safety must be guaranteed
• Find a point of compromise for autonomous navigation
Focus on wheelchair-type
personal mobility vehicles (PMVs)
4. Approach
⚫ Show the limitations of human drivers with numerical values
• Humans have limitations, e.g., they cannot observe obstacles located in
occluded areas
• Nevertheless, they can smoothly negotiate blind corners
• Assume that navigation under such limitations can be similar to human driving
⚫ Simulate the driver’s attention on a 3D map using robotic technologies
• Analyze potential colliding hazard obstacles that drivers cannot
avoid if they rush out from occluded areas
5. Platform (Robotic wheelchair type PMV)
⚫ The PMV is able to estimate the driver’s eye-gaze direction in a 3D map
• The PMV first localizes itself on the 3D map [2]
• It then estimates the eye-gaze direction using a motion
capture system that tracks the eye-gaze measurement glasses
• Occluded areas for the driver can thus be accurately estimated
[2] N. Akai et al. Mobile robot localization considering class of sensor observations. In Proc. of the IEEE/RSJ IROS, pp. 3159-3166, 2018.
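The eye-gaze transform described above can be illustrated with a minimal 2D sketch. This is not the authors' implementation; the function name, the 2D simplification, and the (x, y, yaw) pose convention are assumptions for illustration only:

```python
import numpy as np

def gaze_direction_in_map(pmv_pose, gaze_dir_pmv):
    """Rotate a gaze direction measured in the PMV frame into the map frame.

    Simplified 2D sketch of the slide's pipeline: localization gives the
    PMV pose (x, y, yaw) on the map, and the motion-capture system gives
    the gaze direction relative to the PMV.
    """
    _, _, yaw = pmv_pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s],
                  [s,  c]])              # 2D rotation by the PMV yaw
    return R @ np.asarray(gaze_dir_pmv)  # gaze direction in the map frame
```

In the real system the composition is a full 3D transform chain (map → PMV → glasses), but the principle of composing the localized pose with the measured gaze direction is the same.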
6. Potential colliding hazard obstacles (PCHOs) simulation
⚫ PCHOs are obstacles that drivers cannot avoid if they suddenly rush
out from the occluded areas
• The minimum linear velocity and collision angle between the PMV
and the obstacle are recorded for each PCHO
• Simulating obstacles that definitely collide with the PMV yields
unrealistic velocities and collision angles, i.e., parameters that
are unexpected for the drivers
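The recorded PCHO parameters can be sketched under simple kinematic assumptions. This is not the paper's exact formulation: the sketch assumes both the PMV and the obstacle move in straight lines at constant speed, and that the obstacle must reach a given collision point at the same instant as the PMV; all names are illustrative:

```python
import numpy as np

def pcho_parameters(pmv_pos, pmv_vel, obstacle_pos, collision_point):
    """Minimum obstacle speed and collision angle (degrees) for a potential
    colliding hazard obstacle (PCHO) rushing out of an occluded area.
    """
    pmv_pos = np.asarray(pmv_pos, float)
    obstacle_pos = np.asarray(obstacle_pos, float)
    collision_point = np.asarray(collision_point, float)

    pmv_speed = np.linalg.norm(pmv_vel)
    # Time until the PMV reaches the collision point
    t = np.linalg.norm(collision_point - pmv_pos) / pmv_speed
    # Slowest constant speed at which the obstacle still collides
    obs_dir = collision_point - obstacle_pos
    min_speed = np.linalg.norm(obs_dir) / t
    # Collision angle between the PMV heading and the obstacle heading
    cosang = np.dot(pmv_vel, obs_dir) / (pmv_speed * np.linalg.norm(obs_dir))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return min_speed, angle
```

A PCHO hidden far inside an occluded area would need an unrealistically high `min_speed` to collide, which is exactly the kind of "unexpected" parameter the slide describes.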
7. Example of driver’s attention and PCHOs’ simulations
⚫ The PCHOs (cubes) are observed when passing blind corners
• The color of the cubes represents the level of linear velocity
Movie: https://www.youtube.com/watch?v=71jKnTve2-k
8. Experimental conditions
⚫ The driving of four participants was analyzed in an indoor environment
• One skilled driver (SD) and three non-skilled drivers (NSDs)
• Each participant drove three clockwise (CW) and three counterclockwise (CCW) trials
[3] T. Hatada. Psychological and physiological analysis of stereoscopic vision. Journal of Robotics and Mechatronics, 4(1):13–19, 1992.
[4] T. Miura. Visual search in intersections: An underlying mechanism. IATSS Research, 16:42–49, 1992.
⚫ The perception ability for obstacles was
defined with reference to [3, 4]
• 90 horizontal and 60 vertical degrees
• Assume that the drivers are able to
observe all obstacles that exist within
this field of view
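The field-of-view assumption above can be sketched as a simple geometric test. The function name, the head-frame axis convention (x forward, y left, z up), and the symmetric split of the field about the gaze direction are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def in_perception_field(target_dir, h_fov_deg=90.0, v_fov_deg=60.0):
    """Return True if a target direction (vector in the driver's gaze
    frame: x forward, y left, z up) falls inside the assumed perception
    field of 90 deg horizontal x 60 deg vertical, after [3, 4]."""
    x, y, z = target_dir
    az = np.degrees(np.arctan2(y, x))             # horizontal offset from gaze
    el = np.degrees(np.arctan2(z, np.hypot(x, y)))  # vertical offset from gaze
    return bool(abs(az) <= h_fov_deg / 2 and abs(el) <= v_fov_deg / 2)
```

With this check, every map obstacle whose direction from the driver passes the test (and is not occluded) is treated as observed, matching the slide's assumption.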
9. Result by the SD (CW)
⚫ Example of a result by the SD at a blind corner
• Left: PMV trajectory with velocity and eye-gaze directions
• Right: PCHOs (the size of the circles represents the level of velocity)
10. Results by the NSDs (CW)
⚫ Results similar to those of the SD were confirmed in the same corner
[Figure: eye-gaze directions and PCHOs for NSD1, NSD2, and NSD3]
11. Comparison of eye-gaze behaviors
⚫ The eye-gaze angles about the yaw and pitch axes differ between drivers
• The SD carefully watched the left and right sides (top left)
• Nevertheless, similar PCHOs were observed in all trials
12. Other results (CCW)
⚫ Similar results among all participants were also confirmed
• PCHOs were also found in all trials
[Figure: eye-gaze directions and PCHOs for the SD and NSD1, NSD2, NSD3]
13. Summary of the PCHOs’ parameters
⚫ A boundary can be seen in the collision angle-linear velocity plot
• This boundary could divide expected and unexpected obstacles
[Plot: collision angle vs. linear velocity, with expected and unexpected regions]
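A linear boundary of this kind can be sketched with a nearest-mean classifier in the angle-velocity plane. This is purely illustrative: the paper identifies the boundary from the plotted PCHO parameters, not with this method, and all names and the sample values are assumptions:

```python
import numpy as np

def mean_midpoint_boundary(expected, unexpected):
    """Fit a linear boundary w . p + b = 0 halfway between the class means
    of 'expected' and 'unexpected' points (collision angle, velocity)."""
    mu_e = np.mean(expected, axis=0)
    mu_u = np.mean(unexpected, axis=0)
    w = mu_u - mu_e                    # normal vector points toward 'unexpected'
    b = -w @ (mu_e + mu_u) / 2.0       # offset so the boundary passes the midpoint
    return w, b

def is_unexpected(point, w, b):
    """Classify a (collision angle, velocity) pair against the boundary."""
    return w @ np.asarray(point, float) + b > 0
```

Any obstacle whose simulated parameters land on the "unexpected" side would then fall outside what a human driver implicitly accounts for, which is the role the slide assigns to the boundary as a numerical criterion.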
14. Conclusion
⚫ Motivation and approach
• We wanted numerical values for evaluating safe driving behaviors
• We developed a platform that estimates the driver’s attention
on a 3D map and simulates PCHOs for the drivers
⚫ Results
• The simulated PCHOs were confirmed to be unrealistic, since
they had significantly large velocities and collision angles
• A boundary was shown to exist in the collision angle-linear
velocity plot of the simulated PCHOs
• We concluded that this boundary divides expected and unexpected
obstacles during driving, and thus it could serve as a numerical criterion