This slide was presented at CAADRIA2012 (The 17th International Conference on Computer Aided Architectural Design Research in Asia).
Abstract. This research presents the development of a sensor-oriented mobile AR system that achieves geometric consistency using the GPS, gyroscope and video camera mounted in a smartphone, for urban landscape assessment. A low-cost AR system with high flexibility is realized. Consistency between the viewing angles of the video camera and the CG virtual camera, and geometric consistency between the video image and 3DCG, are verified. In conclusion, the proposed system was evaluated as feasible and effective.
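As a rough illustration of the sensor-to-camera mapping the abstract describes, a GPS fix and gyroscope angles can be combined into a single CG virtual camera pose. This is an assumed sketch, not the paper's code; all function names and sample coordinates below are invented.

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius in metres

def gps_to_local_xy(lat, lon, origin_lat, origin_lon):
    """Approximate small-area east/north offsets (metres) from GPS degrees."""
    x = math.radians(lon - origin_lon) * EARTH_RADIUS * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * EARTH_RADIUS
    return x, y

def virtual_camera_pose(lat, lon, height, yaw_deg, pitch_deg, origin):
    """Combine the GPS position and gyroscope angles into one camera pose."""
    x, y = gps_to_local_xy(lat, lon, *origin)
    return {"position": (x, y, height), "yaw": yaw_deg, "pitch": pitch_deg}

# Hypothetical reading: phone held 1.6 m high, facing yaw 120°, tilted down 5°.
pose = virtual_camera_pose(34.8220, 135.5245, 1.6, 120.0, -5.0, (34.8219, 135.5240))
```

With both cameras driven from the same pose, the CG overlay and the video frame share one viewpoint, which is the geometric consistency the system verifies.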
GOAR: GIS Oriented Mobile Augmented Reality for Urban Landscape Assessment - Tomohiro Fukuda
This slide was presented at CMC2012 (2012 4th International Conference on Communications, Mobility, and Computing).
Abstract. This research presents the development of a mobile AR system that achieves geometric consistency using the GIS, gyroscope and video camera mounted in a smartphone, for urban landscape assessment. A low-cost AR system with high flexibility is developed. Geometric consistency between the video image and 3DCG is verified. In conclusion, the proposed system was evaluated as feasible and effective.
Availability of Mobile Augmented Reality System for Urban Landscape Simulation - Tomohiro Fukuda
This slide was presented at CDVE2012 (The 9th International Conference on Cooperative Design, Visualization, and Engineering).
Abstract. This research examines the availability of a landscape simulation method based on mobile AR (Augmented Reality), comparing it with photomontage and VR (Virtual Reality), the main existing methods. After a pilot experiment with 28 subjects in Kobe city, a questionnaire about the three landscape simulation methods was administered. In the questionnaire results, the mobile AR method was rated well for reproducibility of a landscape, operability, and cost. An evaluation of better than or equivalent to the existing methods was obtained. The suitability of mobile augmented reality for landscape simulation was found to be high.
DISTRIBUTED AND SYNCHRONISED VR MEETING USING CLOUD COMPUTING: Availability a... - Tomohiro Fukuda
This slide was presented at CAADRIA2012 (The 17th International Conference on Computer Aided Architectural Design Research in Asia).
Abstract. People's activities are becoming more mobile, and cloud computing technologies are advancing, in the modern age of information and globalisation. This study examines the availability of discussing spatial design while sharing a 3-dimensional virtual space with stakeholders in a distributed and synchronised environment. First, a townscape design support system based on a cloud-computing VR system is constructed. Next, an experiment in distributed and synchronised discussion of townscape design is executed with subjects who are specialists in the townscape design field. After the experiment, both a qualitative subjective evaluation and a quantitative evaluation were carried out. The conclusions are as follows: 1. Users who frequently use VR and videoconferencing consider the difference from face-to-face discussion to be small. 2. A Moiré pattern may occur in a gradation picture. 3. The availability of distributed and synchronised discussions with cloud-computing VR is high.
A STUDY OF VARIATION OF NORMAL OF POLYGONS CREATED BY POINT CLOUD DATA FOR A... - Tomohiro Fukuda
This slide was presented at CAADRIA2011 (The 16th International Conference on Computer Aided Architectural Design Research in Asia).
Abstract: Acquiring current 3D spatial data of cities, buildings, and rooms rapidly and in detail has become indispensable. When the point cloud data of an object or space scanned by a 3D laser scanner is converted into polygons, the result is an accumulation of small polygons. When the object or space is a closed flat plane, it is necessary to merge the small polygons into one polygon to reduce the volume of data. In that case, the normal vectors of the small polygons should theoretically all have the same angle. However, in practice, these angles are not the same. Therefore, the purpose of this study is to clarify, based on actual data, the variation in angle within a group of small polygons that should become one polygon. As a result of experimentation, the small polygons converted from the point cloud data scanned with the 3D laser scanner are not merged, even if the group of small polygons is a closed flat plane lying in the same plane. When the standard deviation of the extracted number of polygons is assumed to be less than 100, the variation of the angle of the normal vector is roughly 7 degrees.
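The normal-angle variation being measured can be sketched as follows. This is an assumed illustration, not the study's code, and the example normals are made up; it just shows how the angular spread of a nominally coplanar polygon group (roughly 7 degrees in the study) would be computed.

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def angle_deg(a, b):
    """Angle in degrees between two 3-vectors."""
    dot = sum(x * y for x, y in zip(normalize(a), normalize(b)))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# Made-up normals of small polygons from a nominally flat scanned surface.
normals = [(0.0, 0.0, 1.0), (0.05, 0.0, 1.0), (0.0, -0.04, 1.0), (-0.03, 0.02, 1.0)]

# Variation = largest deviation of any polygon's normal from the mean normal.
mean = tuple(sum(c) / len(normals) for c in zip(*normals))
spread = max(angle_deg(n, mean) for n in normals)  # degrees
```

If all small polygons truly lay in one plane, `spread` would be zero; scanner noise makes it positive, which is why the polygons cannot simply be merged.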
CAAD FUTURES 2015: Development of High-definition Virtual Reality for Histo... - Tomohiro Fukuda
This slide is our research presentation at the 16th CAAD Futures 2015 Conference, at MASP, São Paulo, Brazil.
Keywords: Cultural heritage, digital reconstruction, Virtual Reality, visualization, 3D modeling, presentation.
Abstract: This study presents fundamental data for constructing a high-definition VR application, under the theme of three-dimensional visualization to restore past architecture and cities. It is difficult to render wide-area architectural and urban objects in real time. Thus, in this study, techniques for improving the level of detail (LOD) and the representation of natural objects were studied. A digital reconstruction project of Azuchi Castle and its old castle town was targeted as a case study. Finally, a VR application with seven million polygons, 1.87 billion pixels of texture, and 1920 × 1080 screen resolution was successfully developed that could run on a PC. For the developed VR application, both qualitative evaluation by experts and quantitative evaluation by end users were performed.
Integration of a Structure from Motion into Virtual and Augmented Reality for... - Tomohiro Fukuda
Proceedings (Full paper reviewed)
Tomohiro Fukuda, Hideki Nada, Haruo Adachi, Shunta Shimizu, Chikako Takei, Yusuke Sato, Nobuyoshi Yabuki, and Ali Motamedi: 2017, Integration of a Structure from Motion into Virtual and Augmented Reality for Architectural and Urban Simulation: Demonstrated in Real Architectural and Urban Projects, Future Trajectories of Computation in Design: 17th International Conference CAAD Futures 2017, p.596, 2017.7
Book (Book contribution)
Tomohiro Fukuda, Hideki Nada, Haruo Adachi, Shunta Shimizu, Chikako Takei, Yusuke Sato, Nobuyoshi Yabuki, and Ali Motamedi: 2017, Integration of a Structure from Motion into Virtual and Augmented Reality for Architectural and Urban Simulation: Demonstrated in Real Architectural and Urban Projects, Computer-Aided Architectural Design - Future Trajectories, pp.60-77, Springer (Communications in Computer and Information Science 724), ISSN 1865-0929, ISBN 978-981-10-5196-8, 2017.7
Computational visual simulations are extremely useful and powerful tools for decision-making. The use of virtual and augmented reality (VR/AR) has become common due to real-time, interactive visual simulation tools in architectural and urban design studies and presentations. In this study, a demonstration is performed to integrate structure from motion (SfM) into VR and AR. A 3D modeling method based on SfM is explored under real-time rendering as a solution to the modeling cost of large-scale VR. The study examines the application of SfM camera parameters to achieve appropriate registration and tracking accuracy in marker-less AR, to visualize full-scale design projects on a planned construction site. The proposed approach is applied to several real architectural and urban design projects, and the results indicate its feasibility and effectiveness.
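The registration step the study relies on can be illustrated with a pinhole projection: the intrinsics and pose recovered by SfM drive a virtual camera that projects the full-scale design model into the video frame. This is a hedged sketch with an invented helper, not the authors' implementation.

```python
def project_point(point, rotation, translation, fx, fy, cx, cy):
    """Project a world point with an SfM-recovered camera (pinhole model).

    rotation: row-major 3x3 matrix R; translation: 3-vector t;
    fx, fy, cx, cy: intrinsics recovered by SfM.
    """
    # World -> camera coordinates: X_cam = R @ X + t.
    xc = [sum(rotation[r][c] * point[c] for c in range(3)) + translation[r]
          for r in range(3)]
    # Camera -> pixel coordinates via the intrinsics.
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return u, v

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# A design-model point 10 m straight ahead of a 1920x1080 camera.
u, v = project_point([0.0, 0.0, 10.0], identity, [0.0, 0.0, 0.0],
                     1000.0, 1000.0, 960.0, 540.0)
```

When the same parameters are applied to the CG camera, the rendered model and the video frame line up, which is what makes marker-less registration on the construction site possible.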
CAADRIA2014: A Synchronous Distributed Design Study Meeting Process with Anno... - Tomohiro Fukuda
This research investigated the impact of synchronous distributed non-immersive cloud-VR (cloud-computing Virtual Reality) meetings using an annotation function, focusing on an architectural design process. An experiment in collaborative design work at the early stage of a housing renovation project was executed by three designers. The synchronous distributed meetings using cloud-VR and a freehand sketching function were completed in two days. The annotation function was used effectively when a designer wished to show the space composition, the volume shape of the planned building, and so on. The proposed design environment, which shares a 3D virtual space with viewpoints, plans, sketches and other information synchronously and remotely, was feasible and effective.
This report describes an approach using a “Master-Slave” network communication mechanism built with the Java Bindings for OpenGL API (Jogl), based on a real implementation, to display a 3D object across multiple screens.
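The master-slave idea can be sketched as follows. The report's implementation is in Java with Jogl; this Python sketch, with invented function names and an assumed 60-degree per-screen field of view, only illustrates the concept: the master broadcasts its camera state each frame, and each slave rebuilds the camera rotated to cover its own screen's slice of the scene.

```python
import json

def master_message(camera, frame_id):
    """Serialize the master's camera state for broadcast to the slaves."""
    return json.dumps({"frame": frame_id, "camera": camera})

def slave_apply(message, screen_index, screen_count, fov_h=60.0):
    """Rebuild the camera on a slave, yawed to cover this screen's slice."""
    state = json.loads(message)
    cam = dict(state["camera"])
    # Each screen looks fov_h degrees further around the master's heading,
    # centred so the middle screen matches the master exactly.
    cam["yaw"] = cam["yaw"] + (screen_index - (screen_count - 1) / 2) * fov_h
    return cam

msg = master_message({"yaw": 90.0, "pitch": 0.0, "pos": [0, 0, 0]}, 1)
left = slave_apply(msg, 0, 3)   # leftmost of three screens
right = slave_apply(msg, 2, 3)  # rightmost of three screens
```

Because every slave derives its view from the same broadcast state, the screens stay synchronized and tile one continuous 3D scene.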
Markerless motion capture for 3D human model animation using depth camera - TELKOMNIKA JOURNAL
3D animation is created using keyframe-based systems in 3D animation software such as Blender and Maya. Because of the long production time and the high expertise required, motion capture devices are used as an alternative, and the Microsoft Kinect v2 sensor is one of them. This research analyses the capabilities of the Kinect sensor in producing 3D human model animations, comparing motion capture against a keyframe-based animation system with reference to a live motion performance. The quality, time requirement and cost of both animation results were compared. The experimental results show that the motion capture system with the Kinect sensor consumed less time (only 2.6%) and cost (30%) in the long run (10 minutes of animation) compared to the keyframe-based system, but it produced lower-quality animation. This was due to the lack of body-detection accuracy when there is obstruction. Moreover, the sensor's constant assumption that the performer's body faces forward made it unreliable for a wide variety of movements. Furthermore, the standard test defined in this research covers most body parts' movements and can be used to evaluate other motion capture systems.
Hardware Implementation of Genetic Algorithm Based Digital Colour Image Water... - IDES Editor
The objective of this work is to develop a hardware-based watermarking system that identifies the device with which a photograph was taken. The watermark chip can be fitted in any electronic component that acquires images, which are then watermarked in real time during capture, along with a separate key. Watermarking is the process of embedding a watermark, in which a watermark is inserted into a host image; extraction pulls the watermark back out of the image. The ultimate objective of the research presented in this paper is to develop low-power, high-performance, real-time, reliable and secure watermarking systems, which can be achieved through hardware implementations. This paper discusses the development of a Very Large Scale Integration (VLSI) architecture for a high-performance watermarking chip that can perform invisible colour image watermarking using a genetic algorithm. The prototyped VLSI implementation of watermarking is analyzed in two ways, viz. (i) digital watermarking
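The embedding-plus-genetic-algorithm combination can be modelled in software as below. This is an assumed toy sketch, not the paper's VLSI design: it embeds watermark bits into pixel least-significant bits and uses a one-generation GA-style selection whose fitness rewards the embedding positions that distort the image least.

```python
import random

def embed(pixels, bits, positions):
    """Embed watermark bits invisibly by overwriting pixel LSBs."""
    out = list(pixels)
    for bit, pos in zip(bits, positions):
        out[pos] = (out[pos] & ~1) | bit
    return out

def extract(pixels, positions):
    """Pull the watermark bits back out of the marked image."""
    return [pixels[pos] & 1 for pos in positions]

def fitness(pixels, bits, positions):
    """GA fitness: negative total distortion introduced by embedding."""
    marked = embed(pixels, bits, positions)
    return -sum(abs(a - b) for a, b in zip(pixels, marked))

random.seed(0)
pixels = [random.randrange(256) for _ in range(64)]  # toy 8x8 grey image
bits = [1, 0, 1, 1]                                  # device-identifying key bits

# One GA "generation": among random candidate position sets, keep the fittest.
candidates = [random.sample(range(64), 4) for _ in range(20)]
best = max(candidates, key=lambda p: fitness(pixels, bits, p))
recovered = extract(embed(pixels, bits, best), best)
```

A real GA would iterate selection, crossover and mutation over many generations, and the chip pipelines this in hardware, but the embed/extract round trip is the same.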
Invited talk on AR/SLAM and IoT in the ILAS Seminar: Introduction to IoT and Security, Kyoto University, 2020.
(https://www.z.k.kyoto-u.ac.jp/freshman-guide/ilas-seminars/)
Speaker: Tomoyuki Mukasa
Interactive Full-Body Motion Capture Using Infrared Sensor Network - ijcga
Traditional motion capture (mocap) has been well studied in visual science over the last few decades. However, the field is mostly about capturing precise animation to be used in specific applications after intensive post-processing, such as studying biomechanics or rigging models in movies. These data sets are normally captured in complex laboratory environments with sophisticated equipment, making motion capture a field that is mostly exclusive to professional animators. In addition, obtrusive sensors must be attached to actors and calibrated within the capturing system, resulting in limited and unnatural motion. In recent years, the rise of computer vision and interactive entertainment has opened the gate for a different type of motion capture, which focuses on optical markerless or mechanical sensorless capture. Furthermore, a wide array of low-cost devices has been released that are easy to use for less mission-critical applications. This paper describes a new technique that uses multiple infrared devices, processing data from multiple infrared sensors to enhance the flexibility and accuracy of markerless mocap with commodity devices such as Kinect. The method involves analyzing each individual sensor's data, then decomposing and rebuilding it into a uniform skeleton across all sensors. We then assign criteria to define the confidence level of the captured signal from each sensor. Each sensor operates in its own process and communicates through MPI. Our method emphasizes minimal calculation overhead for better real-time performance while maintaining good scalability.
Digital 3D imaging can benefit from advances in VLSI technology to accelerate its deployment in many fields, such as visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images can be generated of visible surfaces that are rather featureless to the human eye or a video camera. Intelligent digitizers will be capable of measuring color and 3D shape accurately and simultaneously.
This is a modified version of a presentation given to high school students about understanding their digital reputations and identities online. It includes practical tips and guides from Erik Qualman's book, What Happens On Campus Stays On YouTube, a book to which I was a contributing author. Available on Amazon: http://www.amazon.com/gp/product/0991183525/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=0991183525&linkCode=as2&tag=paulgordonbro-20&linkId=VEIE5AKM4DCK7MW2
Despite the myth of "digital natives," most of my students have very little experience using technology as anything more than a consumer device. It doesn't have to be this way. By using the design thinking cycle, teachers can foster creative thinking in every content area.
A New Business Model of Custom Software Development For Agile Software Develo... - Tsuyoshi Ushio
Successful business model of custom software for agile development.
22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering
November 16-21, 2014, Hong Kong
This presentation on sleep hacking provides an overview of some of the variables that affect sleep. Understanding these variables provides insight into how to optimize your sleep so you can achieve better sleep. I tried to include some less obvious sleep hacks as a precursor to my class: Sleep Hacking - How to Dominate Your Sleep in Less than A Week
My recent presentation from the East Midlands Learning Technology Winter 2015 meeting discussing and highlighting the power of Digital Assessment for teachers, students and schools.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-mangen
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Michael Mangen, Product Manager for Camera and Computer Vision at Qualcomm, presents the "High-resolution 3D Reconstruction on a Mobile Processor" tutorial at the May 2016 Embedded Vision Summit.
Computer vision has come a long way. Use cases that were previously not possible in mass-market devices are now more accessible thanks to advances in depth sensors and mobile processors. In this presentation, Mangen provides an overview of how we are able to implement high-resolution 3D reconstruction – a capability typically requiring cloud/server processing – on a mobile processor. This is an exciting example of how new sensor technology and advanced mobile processors are bringing computer vision capabilities to broader markets.
Image Fusion of Video Images and Geo-localization for UAV ApplicationsIDES Editor
We present in this paper an accurate method for determining the location of a ground-based target when viewed from an Unmanned Aerial Vehicle (UAV). By determining the pixel coordinates on the video frame and by using a range finder, the target's geo-location is determined in the North-East-Down (NED) frame. The contribution of this method is that the target can be localized to within 9 m when viewed from an altitude of 2500 m, and down to 1 m from an altitude of 100 m. This method offers a highly versatile tracking and geolocalisation technique that has a number of advantages over previously suggested methods. Some of the key factors that differentiate our method from its predecessors are:
1) Day and night time operation
2) All weather operation
3) Highly accurate positioning of the target in terms of latitude-longitude (GPS) and altitude
4) Automatic gimbaled operation of the camera once the target is locked
5) Tracking is possible even when the target stops moving
6) Independent of target (moving or stationary)
7) No terrain database is required
8) Instantaneous target geolocalisation is possible
A Fast Single-Pixel Laser Imager for VR/AR Headset TrackingPing Hsu
In this work we demonstrate a highly flexible laser imaging system for 3D sensing applications such as in tracking of VR/AR headsets, hands and gestures. The system uses a MEMS mirror scan module to transmit low power laser pulses over programmable areas within a field of view and uses a single photodiode to measure the reflected light...
Pose estimation algorithm for mobile augmented reality based on inertial sen...IJECEIAES
Augmented reality (AR) applications have become increasingly ubiquitous as they integrate virtual information such as images, 3D objects, and video into the real world, further enhancing the real environment. Many researchers have investigated the augmentation of 3D objects on the digital screen. However, certain loopholes exist in existing systems when estimating an object's pose, making them inaccurate for mobile augmented reality (MAR) applications. Objects augmented in current systems show much jitter due to frame illumination changes, which affects the accuracy of vision-based pose estimation. This paper proposes to estimate the pose of an object by blending vision-based techniques with a micro-electromechanical system (MEMS) sensor (gyroscope) to minimize the jitter problem in MAR. The algorithm used for feature detection and description is oriented FAST rotated BRIEF (ORB), whereas random sample consensus (RANSAC) is used to evaluate the homography for pose estimation. Furthermore, gyroscope sensor data is incorporated with the vision-based pose estimation. We evaluated the performance of augmenting the 3D object using the vision-based technique alone and with the sensor data incorporated, using video data. After extensive experiments, the proposed method proved superior to existing vision-based pose estimation algorithms.
Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...CSCJournals
Augmented reality has been a topic of intense research for several years for many applications. It consists of inserting a virtual object into a real scene. The virtual object must be accurately positioned in a desired place. Some measurements (calibration) are thus required, and a set of correspondences between points on the calibration target and the camera images must be found. In this paper, we present a tracking technique based on both detection of chessboard corners and a least-squares method; the objective is to estimate the perspective transformation matrix for the current view of the camera. This technique does not require any information or computation of the camera parameters; it can be used in real time without any initialization, and the user can change the camera focal length without any fear of losing alignment between the real and virtual object.
A Hardware Model to Measure Motion Estimation with Bit Plane Matching AlgorithmTELKOMNIKA JOURNAL
The multistep approach involving a combination of techniques is referred to as motion estimation. The proposed approach is an adaptive control system to measure the motion from the starting point to the limit of search. The motion patterns are used to analyze and avoid stationary regions of the image. The proposed algorithm is robust and efficient, and the calculations justify its advantages. The motivation of the work is to maximize the encoding speed and visual quality with the help of a motion vector algorithm. In this work a hardware model is developed in which frames of pictures are captured and sent via serial port to the system. The MATLAB simulation tool is used to detect the motion among the picture frames. Once any motion is detected, that signal is sent to the hardware, which gives the appropriate sign accordingly. This system is developed on two platforms (hardware as well as software) to estimate and measure the motion vectors.
3D perception is crucial for understanding the real world. It offers many benefits and new capabilities over 2D across diverse applications, from XR and autonomous driving to IOT, camera, and mobile. 3D perception with machine learning is creating the new state of the art (SOTA) in areas, such as depth estimation, object detection, and neural scene representation. Making these SOTA neural networks feasible for real-world deployment on mobile devices constrained by power, thermal, and performance has been a challenge. Qualcomm AI Research has developed not only novel AI techniques for 3D perception but also full-stack AI optimizations to enable real-world deployments and energy-efficient solutions. This presentation explores the latest research that is enabling efficient 3D perception while maintaining neural network model accuracy. You’ll learn about:
- The advantages of 3D perception over 2D and the need for 3D perception across applications
- Advancements in 3D perception research by Qualcomm AI Research
- Our future 3D perception research directions
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/06/tools-for-creating-next-gen-computer-vision-apps-on-snapdragon-a-presentation-from-qualcomm/
Judd Heape, Vice President of Product Management for Camera, Computer Vision and Video Technology at Qualcomm, presents the “Tools for Creating Next-Gen Computer Vision Apps on Snapdragon” tutorial at the May 2022 Embedded Vision Summit.
The Snapdragon Mobile Platform powers the world’s best smartphones, XR headsets, PCs, wearables, cars and IoT products. Thanks to Snapdragon, these products feature powerful computer vision technologies that you can tap into to build next-gen apps. Inside Snapdragon is a hardware engine dedicated to computer vision–the Engine for Visual Analytics (EVA). EVA hardware acceleration gives developers access to high-performance, low-power computer vision functions to enhance apps that rely on advanced camera or video processing.
The EVA includes a motion processing unit, a feature descriptor unit, a depth estimation unit, a geometric correction unit and an object detection unit. These blocks power high-level functions such as electronic image stabilization, multi-frame HDR, face detection and real-time bokeh. In this presentation, Heape does a deep-dive into EVA’s Software Developer Kit (SDK) and available APIs, such as Optical Flow and Depth from Stereo, and explores how these features can be integrated into your apps.
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...csandit
In today's technological life, everyone is quite familiar with the importance of security measures in our lives. In this regard, many attempts have been made by researchers, and one of them is flying robot technology. One well-known usage of a flying robot is its capability in security and care measurements, which makes this device extremely practical, not only for its unmanned movement, but also for its unique manoeuvres during flight over arbitrary areas. In this research, the automatic landing of a flying robot is discussed. The system is based on frequent interrupts that are sent from the main microcontroller to the camera module in order to take images; these images are distinguished by an image-processing system based on edge detection. After analysing the image, the system can tell whether or not to land on the ground. This method shows good precision in experiments.
Similar to SOAR: SENSOR ORIENTED MOBILE AUGMENTED REALITY FOR URBAN LANDSCAPE ASSESSMENT
DEVELOPMENT OF USE FLOW OF 3DCAD/VR SOFTWARE FOR CITIZENS WHO ARE NON-SPECIAL...Tomohiro Fukuda
This slide is presented in CAADRIA2010 (The 15th International Conference on Computer Aided Architectural Design Research in Asia).
The purpose of this study is development of a tool by which citizens who are non-specialists can design a regional revitalization project. Therefore, a 3DCAD/VR (3-Dimensional Computer Aided Design/Virtual Reality) combination system was developed by using SketchUP Pro, GIMP, and UC-win/Road. This system has the advantages of low cost and easy operation. The utility of the system was verified as a result of applying the developed prototype system in the Super Science High School program for high school students created by the Ministry of Education, Culture, Sports, Science and Technology, Japan. It has been used for two years, since 2007. In addition, the characteristics of the VR made by the non-specialists were considered.
Citizen Participatory Design Method Using VR and BlogTomohiro Fukuda
This slide is presented in CAADRIA2008 (The 13th International Conference on Computer Aided Architectural Design Research in Asia).
This research concerned the establishment of a citizen participatory design
method using VR and CGM. For this, problems in the citizen participatory design are
addressed, and the continuous study method using VR and a blog is shown. Then, evaluation
is conducted by considering an actual design project as a case study. Furthermore, VR
functions needed through the case study are developed. Using this method, a small patio
on which parasols were permanently and lawfully set up on a road lot was completed.
Development and Evaluation of a Representation Method of 3DCG Pre-Rendering A...Tomohiro Fukuda
This ppt is presented in CAADRIA2008 (The 13th International Conference on Computer Aided Architectural Design Research in Asia).
As a method of dissemination of environmental symbiosis design towards
environmental problem solutions, 3DCG pre-rendering animation (3DCGPRA) which has
a high quality of representation and has a powerful appeal, is expected to be particularly
effective. After arranging components required in an environmental symbiosis design, the
representation targets which needed to be developed were clarified. In addition,
representation methods of shade and shadow, grass, human activity, and symbiosis methods
etc. were developed. In a real project, a 7’ 3DCGPRA was created applying these new
methods, and its validity was evaluated.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies must adapt and embrace new ideas to keep up with the competition. However, fostering a culture of innovation takes hard work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Search and Society: Reimagining Information Access for Radical Futures
SOAR: SENSOR ORIENTED MOBILE AUGMENTED REALITY FOR URBAN LANDSCAPE ASSESSMENT
1. CAADRIA2012, Chennai, India
SOAR
SENSOR ORIENTED MOBILE
AUGMENTED REALITY FOR URBAN
LANDSCAPE ASSESSMENT
TOMOHIRO FUKUDA, TIAN ZHANG, AYAKO SHIMIZU,
MASAHARU TAGUCHI, LEI SUN and NOBUYOSHI YABUKI
Division of Sustainable Energy and Environmental Engineering,
Graduate School of Engineering,
Osaka University, Japan
2. Contents
1. Introduction
2. System Development
1. Development Environment of a System
2. System Flow
3. Verification of System
1. Consistency with the viewing angle of a video camera
and CG virtual camera
2. Accuracy of geometric consistency with a video image
and 3DCG
4. Conclusion
3. Contents
1. Introduction
2. System Development
1. Development Environment of a System
2. System Flow
3. Verification of System
1. Consistency with the viewing angle of a video camera
and CG virtual camera
2. Accuracy of geometric consistency with a video image
and 3DCG
4. Conclusion
4. 1.1 Motivation -1 1. Introduction
In recent years, the need for landscape simulation has been growing. Review meetings on a future landscape are now carried out on the planned construction site in addition to being carried out in a conference room.
It is difficult for stakeholders to concretely imagine an object that is three-dimensional and does not yet exist. Landscape visualization methods using Computer Graphics (CG) and Virtual Reality (VR) have been developed for this purpose.
However, these methods require much time and expense to make a 3D model. Moreover, since consistency with real space is not achieved when using VR on a planned construction site, a reviewer cannot get an immersive experience.
(Figures: A landscape study on site; VR capture of Kobe city)
5. 1.1 Motivation -2 1. Introduction
In this research, the authors focus on Augmented Reality (AR), which can superimpose 3DCG on an actual landscape acquired with a video camera. When AR is used, a landscape assessment object is shown within its present surroundings. Thereby, a drastic reduction of the time and expense involved in 3DCG modeling of the present surroundings can be expected.
Smartphones are widely available on the market.
(Figures: Sekai Camera Web; Smartphone Market in Japan)
http://sekaicamera.com/
7. 1.2 Previous Study 1. Introduction
2. Use of an artificial marker. Since an artificial marker must always be visible to the AR camera, the movable span of a user is limited. Moreover, to realize high precision, it is necessary to use a large artificial marker.
Yabuki, N., et al.: 2011, An invisible height evaluation system for building height regulation to preserve good landscapes using augmented reality, Automation in Construction, Volume 20, Issue 3, 228-235.
(Figure: artificial marker)
8. 1.3 Aim 1. Introduction
In this research, SOAR (Sensor Oriented Mobile AR), a system which realizes geometric consistency using the GPS, gyroscope and video camera mounted in a smartphone, is developed. A low-cost AR system with high flexibility is thereby realized.
9. Contents
1. Introduction
2. System Development
1. Development Environment of a System
2. System Flow
3. Verification of System
1. Consistency with the viewing angle of a video camera
and CG virtual camera
2. Accuracy of geometric consistency with a video image
and 3DCG
4. Conclusion
10. 2. System Development
2.1 Development Environment of a System
Standard Spec Smartphone: GALAPAGOS 003SH (Softbank Mobile Corp.)
Development Language: OpenGL-ES (Ver. 2.0), Java (Ver. 1.6)
Development Environment: Eclipse Galileo (Ver. 3.5)
Location Estimation Technology: A-GPS (Assisted Global Positioning System)
Video Camera

Spec of 003SH
- OS: Android™ 2.2
- CPU: Qualcomm® MSM8255 Snapdragon® 1 GHz
- Memory: ROM 1 GB, RAM 512 MB
- Weight: ≒140 g
- Size: ≒W62 × H121 × D12 mm
- Display Size: 3.8 inch
- Resolution: 480 × 800 pixel
11. 2.2 System Flow -1 2. System Development
System flow: Calibration of a video camera → Definition of landscape assessment 3DCG model → Activation of AR system → Selection of 3DCG model → Activation of GPS / gyroscope / video camera → Position information acquisition / Angle information acquisition / Capture of live video image → Definition of position and angle information on CG virtual camera → Superposition of live video image and 3DCG model → Display of AR image → Save of AR image
Note: while a CG model realizes ideal rendering by the perspective drawing method, the rendering of a video camera produces distortion. The video camera is therefore calibrated (distortion calibration) using Android NDK-OpenCV.
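The distortion calibration step can be illustrated with a toy version of the radial lens-distortion model that calibration libraries such as OpenCV use. This sketch keeps only a single coefficient k1 on normalized image coordinates; the actual model behind an Android NDK-OpenCV calibration also includes higher-order radial and tangential terms, so this is an illustration of the idea, not the SOAR implementation:

```python
def distort(x, y, k1):
    """Apply one-coefficient radial distortion to normalized image coords."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale


def undistort(xd, yd, k1, iterations=20):
    """Invert the radial model by fixed-point iteration; radial distortion
    has no closed-form inverse in general, so iterative refinement is the
    usual approach in undistortion routines."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2
        x, y = xd / scale, yd / scale
    return x, y
```

Calibration estimates k1 (and further coefficients) once from known patterns; after that, each live video frame can be remapped so that straight edges in the scene stay straight under the superimposed 3DCG.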
12. 2.2 System Flow -2 2. System Development
(Same system flow diagram, annotated with the data used at each step.)
3DCG Model: Geometry, Texture, Unit
3DCG model allocation file: 3DCG model name, File name, Position data (longitude, latitude, orthometric height), Degree of rotation angle, and Zone number of the rectangular plane
3DCG model arrangement information file: Number of the 3DCG model allocation information files, each name
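The two text files enumerated on this slide can be sketched as simple records. The field names below are mine, derived only from the fields the slide lists, and the example values are illustrative:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ModelAllocation:
    """One entry of the 3DCG model allocation file, per the slide."""
    model_name: str
    file_name: str
    longitude: float
    latitude: float
    orthometric_height: float  # meters above the geoid
    rotation_deg: float        # degree of rotation angle
    zone_number: int           # zone of the plane rectangular coordinate system


@dataclass
class ArrangementInfo:
    """3DCG model arrangement information file: how many allocation
    files there are, and their names."""
    count: int
    names: List[str]


# Illustrative entry reusing the building coordinates from the verification
# experiment; file and model names are hypothetical.
entry = ModelAllocation("GSE_East", "gse_east.model", 135.520751389,
                        34.823026944, 60.15, 0.0, 6)
info = ArrangementInfo(count=1, names=["allocation_01"])
```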
13. 2.2 System Flow -3 2. System Development
(Same system flow diagram.)
(Figure: GUI of the Developed System)
14. 2.2 System Flow -4 2. System Development
(Same system flow diagram.)
(Figure: Coordinate System of Developed AR system, showing the yaw, pitch and roll axes)
15. 2.2 System Flow -4 2. System Development
(Same system flow diagram.)
Position data is converted into the coordinates (x, y) of a rectangular plane. Orthometric height is created by subtracting the geoid height from the ellipsoidal height.
The angle value of yaw points to magnetic north. In an AR system, in order to use a true-north value, the magnetic declination is acquired and corrected.
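The two corrections described on this slide are one-liners, and the geodetic-to-plane step can be approximated for a small area. The real system converts into Japan's plane rectangular coordinate system, selected by zone number; the flat-earth formula below only illustrates the idea and is not that projection:

```python
import math

EARTH_RADIUS = 6_378_137.0  # WGS84 equatorial radius, in meters


def orthometric_height(ellipsoidal_height, geoid_height):
    """As the slide states: orthometric = ellipsoidal - geoid."""
    return ellipsoidal_height - geoid_height


def true_north_yaw(yaw_magnetic_deg, declination_deg):
    """Correct a magnetic-north yaw reading to true north."""
    return (yaw_magnetic_deg + declination_deg) % 360.0


def to_local_xy(lat, lon, origin_lat, origin_lon):
    """Small-area flat-earth approximation of the geodetic-to-plane step."""
    x = math.radians(lon - origin_lon) * EARTH_RADIUS * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * EARTH_RADIUS
    return x, y
```

Plugging the viewpoint (34.82145699, 135.519612) and building (34.823026944, 135.520751389) coordinates from the verification experiment into to_local_xy yields a horizontal distance of about 203 m, consistent with the viewing distance stated in the deck.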
16. 2.2 System Flow -5 2. System Development
(Same system flow diagram.)
18. Contents
1. Introduction
2. System Development
   1. Development Environment of a System
   2. System Flow
3. Verification of System
   1. Consistency of the viewing angle of a video camera and a CG virtual camera
   2. Accuracy of geometric consistency between a video image and 3DCG
4. Conclusion
19
19. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Known Building Target
▶ GSE Common East Building at Osaka University Suita Campus
▶ W 29.6 m, D 29.0 m, H 67.0 m
▶ Latitude, Longitude, Orthometric height: 34.823026944, 135.520751389, 60.15
[Figures: photo, dimensioned drawing (29.6 m × 28.95 m in plan, 64.8 m high), and outlined 3D model of the building]
24
20. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Known Viewpoint Place
▶ No.14-563 reference point. The distance from the reference point to the center of the building was 203 m.
▶ The AR system was installed on a tripod at a height of 1.5 m.
▶ Latitude, Longitude, Orthometric height: 34.82145699, 135.519612, 53.1
[Figure: building target with measuring points A-D of residual error, seen from the viewpoint (No.14-563 reference point) 203 m away]
25
21. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Parameter Settings of Eight Experiments
(S: Static Value = known value, D: Dynamic Value = value acquired from a device)
Latitude / Longitude / Altitude: position information of the CG virtual camera; yaw / pitch / roll: angle information of the CG virtual camera.

Experiment | Latitude | Longitude | Altitude | yaw | pitch | roll
No.1       | S        | S         | S        | S   | S     | S
No.2       | D        | D         | D        | D   | D     | D
No.3       | D        | S         | S        | S   | S     | S
No.4       | S        | D         | S        | S   | S     | S
No.5       | S        | S         | D        | S   | S     | S
No.6       | S        | S         | S        | D   | S     | S
No.7       | S        | S         | S        | S   | D     | S
No.8       | S        | S         | S        | S   | S     | D

26
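The eight settings form a one-factor-at-a-time design: all static, all dynamic, then exactly one dynamic parameter per experiment. A small sketch that regenerates the table rows (the parameter names follow the table; the function name is mine):

```python
# Regenerate the eight experiment settings as a one-factor-at-a-time design.
PARAMS = ("latitude", "longitude", "altitude", "yaw", "pitch", "roll")

def experiment_settings():
    rows = [dict.fromkeys(PARAMS, "S"),   # No.1: all known static values
            dict.fromkeys(PARAMS, "D")]   # No.2: all device-acquired values
    for p in PARAMS:                      # No.3-No.8: one dynamic parameter each
        row = dict.fromkeys(PARAMS, "S")
        row[p] = "D"
        rows.append(row)
    return rows

settings = experiment_settings()
# settings[5] corresponds to No.6: only yaw is dynamic
```

This design isolates the contribution of each sensor channel (GPS coordinates, altitude, and each gyroscope axis) to the total residual error.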
22. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Calculation Procedure of Residual Error
1. Pixel Error: Each difference between the live image and the CG model at the four measuring points, measured in pixels in the horizontal and vertical directions (Δx, Δy).
[Figure: residual error (Δx, Δy) between the live video image and the CG model]
2. Distance Error: From the acquired values (Δx, Δy), each difference in the horizontal and vertical directions was computed in meters by formula 1 and formula 2 (ΔX, ΔY).
ΔX = W · Δx / x   (1)
ΔY = H · Δy / y   (2)
W: actual width of the object (m)
H: actual height of the object (m)
x: width of the 3DCG model on the AR image (px)
y: height of the 3DCG model on the AR image (px)
27
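Formulas 1 and 2 simply scale a pixel residual to meters by the object's real size over its rendered size. A minimal sketch; the 247 px rendered width below is an assumption chosen to match the ~0.12 m/pixel scale noted on the result slides.

```python
# Sketch of the pixel-to-distance conversion (formulas 1 and 2).

def distance_error_m(pixel_error_px, real_size_m, image_size_px):
    """Delta_X = W * Delta_x / x (horizontal) or Delta_Y = H * Delta_y / y."""
    return real_size_m * pixel_error_px / image_size_px

scale = 29.6 / 247.0                       # ~0.12 m per pixel at 203 m
dX = distance_error_m(1.5, 29.6, 247.0)    # ~0.18 m for a 1.5-pixel error
```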
23. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results: No.1 AR image (0.12 m/pixel)
Parameter setting No.1: Latitude S, Longitude S, Altitude S, yaw S, pitch S, roll S (position and angle information of the CG virtual camera)
[Charts: pixel error and distance error (max / mean / min) for experiments No.1-No.8, with No.1 highlighted]
24. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results: No.2 AR image (0.12 m/pixel)
Parameter setting No.2: Latitude D, Longitude D, Altitude D, yaw D, pitch D, roll D (position and angle information of the CG virtual camera)
[Charts: pixel error and distance error (max / mean / min) for experiments No.1-No.8, with No.2 highlighted]
25. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results: No.3 AR image (0.12 m/pixel)
Parameter setting No.3: Latitude D, Longitude S, Altitude S, yaw S, pitch S, roll S (position and angle information of the CG virtual camera)
[Charts: pixel error and distance error (max / mean / min) for experiments No.1-No.8, with No.3 highlighted]
26. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results: No.4 AR image (0.12 m/pixel)
Parameter setting No.4: Latitude S, Longitude D, Altitude S, yaw S, pitch S, roll S (position and angle information of the CG virtual camera)
[Charts: pixel error and distance error (max / mean / min) for experiments No.1-No.8, with No.4 highlighted]
27. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results: No.5 AR image (0.12 m/pixel)
Parameter setting No.5: Latitude S, Longitude S, Altitude D, yaw S, pitch S, roll S (position and angle information of the CG virtual camera)
[Charts: pixel error and distance error (max / mean / min) for experiments No.1-No.8, with No.5 highlighted]
28. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results: No.6 AR image (0.12 m/pixel)
Parameter setting No.6: Latitude S, Longitude S, Altitude S, yaw D, pitch S, roll S (position and angle information of the CG virtual camera)
[Charts: pixel error and distance error (max / mean / min) for experiments No.1-No.8, with No.6 highlighted]
29. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results: No.7 AR image (0.12 m/pixel)
Parameter setting No.7: Latitude S, Longitude S, Altitude S, yaw S, pitch D, roll S (position and angle information of the CG virtual camera)
[Charts: pixel error and distance error (max / mean / min) for experiments No.1-No.8, with No.7 highlighted]
30. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results: No.8 AR image (0.12 m/pixel)
Parameter setting No.8: Latitude S, Longitude S, Altitude S, yaw S, pitch S, roll D (position and angle information of the CG virtual camera)
[Charts: pixel error and distance error (max / mean / min) for experiments No.1-No.8, with No.8 highlighted]
31. 3. Verification of System
3.2 Accuracy of geometric consistency between a video image and 3DCG
Results in No.1 (all known static values used)
▶ Pixel error: less than 1.5 pixels
▶ Mean distance error: less than 0.15 m
▶ The accuracy of the AR system was found to be high.
▶ An object 200 m away can be evaluated when known static values are used.
Results in No.2 (general SOAR)
▶ Pixel error: less than 20 pixels (horizontal), 55 pixels (vertical)
▶ Mean distance error: less than 2.3 m (horizontal), 6.3 m (vertical)
Results across No.1-No.8
▶ Although No.2 uses all dynamic inputs, the X residual error of No.2 is smaller than that of No.6. This is because the X residual error of No.6 (positive angle) and the X residual error of No.4 (negative angle) offset each other in No.2.
[Chart: distance error (max / mean / min) for experiments No.1-No.8, with No.2 highlighted]
36
32. Contents
1. Introduction
2. System Development
   1. Development Environment of a System
   2. System Flow
3. Verification of System
   1. Consistency of the viewing angle of a video camera and a CG virtual camera
   2. Accuracy of geometric consistency between a video image and 3DCG
4. Conclusion
37
33. 4. Conclusion
4.1 Conclusion
When known values were used for the position and angle information of the CG virtual camera and the distance was 200 m, the pixel error was less than 1.5 pixels and the mean distance error was less than 0.15 m. This is within the tolerance level of landscape assessment.
When dynamic values acquired with the GPS and gyroscope were used, the pixel error was less than 20 horizontal pixels and 55 vertical pixels, and the mean distance error was less than 2.3 m horizontally and 6.3 m vertically.
The developed SOAR achieves geometric consistency using the GPS and gyroscope with which the smartphone is equipped.
38
34. 4. Conclusion
4.2 Future Work
Future work should attempt to reduce the residual error included in the dynamic values acquired with the GPS and gyroscope.
It is also necessary to verify the accuracy of the residual error for objects more than 200 m away, and to verify usability.
39
35. Thank you for your attention!
E-mail: fukuda@see.eng.osaka-u.ac.jp
Twitter: fukudatweet
Facebook: Tomohiro Fukuda
LinkedIn: Tomohiro Fukuda