This research investigated the impact of synchronous distributed non-immersive cloud-VR (cloud-computing-based virtual reality) meetings with an annotation function on an architectural design process. An experiment on collaborative design work at the early stage of a housing renovation project was carried out by three designers. The synchronously distributed meetings using cloud-VR and a freehand sketching function were completed in two days. The annotation function was used effectively when a designer wished to show, for example, the space composition and volume shape of the planned building. The proposed design environment, which shares a 3D virtual space with viewpoints, plans, sketches and other information synchronously and remotely, proved feasible and effective.
Integration of a Structure from Motion into Virtual and Augmented Reality for... - Tomohiro Fukuda
Proceedings (Full paper reviewed)
Tomohiro Fukuda, Hideki Nada, Haruo Adachi, Shunta Shimizu, Chikako Takei, Yusuke Sato, Nobuyoshi Yabuki, and Ali Motamedi: 2017, Integration of a Structure from Motion into Virtual and Augmented Reality for Architectural and Urban Simulation: Demonstrated in Real Architectural and Urban Projects, Future Trajectories of Computation in Design: 17th International Conference CAAD Futures 2017, p.596, 2017.7
Book (Book contribution)
Tomohiro Fukuda, Hideki Nada, Haruo Adachi, Shunta Shimizu, Chikako Takei, Yusuke Sato, Nobuyoshi Yabuki, and Ali Motamedi: 2017, Integration of a Structure from Motion into Virtual and Augmented Reality for Architectural and Urban Simulation: Demonstrated in Real Architectural and Urban Projects, Computer-Aided Architectural Design - Future Trajectories, pp.60-77, Springer (Communications in Computer and Information Science 724), ISSN 1865-0929, ISBN 978-981-10-5196-8, 2017.7
Computational visual simulations are extremely useful and powerful tools for decision-making. The use of virtual and augmented reality (VR/AR) has become a common phenomenon due to real-time and interactive visual simulation tools in architectural and urban design studies and presentations. In this study, a demonstration is performed to integrate structure from motion (SfM) into VR and AR. A 3D modeling method is explored by SfM under real-time rendering as a solution for the modeling cost in large-scale VR. The study examines the application of camera parameters of SfM to realize an appropriate registration and tracking accuracy in marker-less AR to visualize full-scale design projects on a planned construction site. The proposed approach is applied to plural real architectural and urban design projects, and results indicate the feasibility and effectiveness of the proposed approach.
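As a rough illustration of how SfM-recovered camera parameters support AR registration, a point in the site's world frame can be projected into the video frame with the standard pinhole model. The intrinsics and pose below are hypothetical values for illustration, not from the paper:

```python
import numpy as np

# Hypothetical intrinsics/extrinsics of the kind SfM recovers.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # camera rotation (world -> camera)
t = np.array([0.0, 0.0, 5.0])      # camera translation

def project(X):
    """Project a 3D world point into pixel coordinates via x ~ K (R X + t)."""
    x_cam = R @ X + t
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]

uv = project(np.array([0.0, 0.0, 0.0]))
print(uv)  # a point on the optical axis lands at the principal point (640, 360)
```

With accurate K, R, and t from SfM, virtual geometry projected this way stays registered to the video without markers.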
CAAD FUTURES 2015: Development of High-definition Virtual Reality for Histo... - Tomohiro Fukuda
This document describes the development of a high-definition virtual reality application reconstructing Azuchi Castle and its old town in 1581. Researchers created 3D models with over 7 million polygons and texture maps with over 1.8 billion pixels. Level-of-detail techniques and procedural modeling were used to render the large-scale environment in real-time. Qualitative feedback from VR experts noted realistic details but inconsistencies in shading and shadows. A survey of 286 end-users found 88% rated the VR experience as good or very good and 89% found it interesting or very interesting. The project aims to use this VR system for tourism, education, and civic pride in the local area.
DISTRIBUTED AND SYNCHRONISED VR MEETING USING CLOUD COMPUTING: Availability a... - Tomohiro Fukuda
This slide is presented in CAADRIA2012 (The 17th International Conference on Computer Aided Architectural Design Research in Asia).
Abstract. The mobility of people's activities and cloud computing technologies are advancing in the modern age of information and globalisation. This study describes the availability of discussing spatial design while sharing a 3-dimensional virtual space with stakeholders in a distributed and synchronised environment. First, a townscape design support system based on a cloud-computing-type VR system is constructed. Next, an experiment on distributed and synchronised discussion of townscape design is executed with subjects who are specialists in the townscape design field. After the experiment, both a qualitative subjective evaluation and a quantitative evaluation were carried out. The conclusions are as follows: 1. Users who use VR frequently and who use videoconferencing consider the difference from face-to-face discussion to be small. 2. A Moiré pattern may occur in a gradation picture. 3. The availability of distributed and synchronised discussions with cloud-computing-type VR is high.
A STUDY OF VARIATION OF NORMAL OF POLYGONS CREATED BY POINT CLOUD DATA FOR A... - Tomohiro Fukuda
This slide is presented in CAADRIA2011 (The 16th International Conference on Computer Aided Architectural Design Research in Asia).
Abstract. Acquiring current 3D space data of cities, buildings, and rooms rapidly and in detail has become indispensable. When the point cloud data of an object or space scanned by a 3D laser scanner is converted into polygons, the result is an accumulation of small polygons. When an object or space is a closed flat plane, it is necessary to merge the small polygons into one polygon to reduce the volume of data. In that case, each normal vector of the small polygons theoretically has the same angle; in practice, however, these angles differ. Therefore, the purpose of this study is to clarify, from actual data, the variation in angle within a group of small polygons that should become one polygon. The experiments showed that the small polygons created from point cloud data scanned with the 3D laser scanner do not all share the same normal, even when the group of small polygons should form a single closed flat plane. When the standard deviation of the extracted number of polygons is assumed to be less than 100, the variation of the angle of the normal vector is roughly 7 degrees.
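The kind of measurement this abstract describes can be sketched as follows: build small triangles from points that should be coplanar, compute their normals, and look at the angular spread. The noise level and geometry below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# 300 points that should lie on one flat plane (z = 0), with assumed
# scanner noise along z, grouped into 100 small triangles.
pts = rng.uniform(0.0, 1.0, (300, 3))
pts[:, 2] = rng.normal(0.0, 0.002, 300)

tris = pts.reshape(100, 3, 3)
n = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
n /= np.linalg.norm(n, axis=1, keepdims=True)
n = np.where(n[:, 2:3] < 0.0, -n, n)      # orient all normals upward

mean_n = n.mean(axis=0)
mean_n /= np.linalg.norm(mean_n)
angles = np.degrees(np.arccos(np.clip(n @ mean_n, -1.0, 1.0)))
print(f"normal-angle spread: std = {angles.std():.2f} deg")
```

Even with tiny depth noise, sliver triangles can tilt sharply, which is why the per-polygon normals of a "flat" surface are never identical in practice.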
Visual Environment by Semantic Segmentation Using Deep Learning: A Prototype ... - Tomohiro Fukuda
This document describes a proposed method for estimating sky view factor (SVF) using semantic segmentation with deep learning networks. Specifically:
- It develops a system using SegNet and U-Net deep learning models to perform pixel-wise semantic segmentation of sky and non-sky areas from images to calculate SVF ratios.
- The system was trained on 300 manually segmented images and tested on 100 fisheye photographs, achieving 98% accuracy in estimating SVF under different sky conditions.
- Future work is needed to apply the system to live video streams rather than static images. The method provides an efficient, high-precision way to estimate important urban environmental metrics like SVF.
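Once segmentation yields a sky/non-sky mask, the SVF ratio reduces to counting pixels. A minimal sketch, assuming a simple unweighted sky fraction inside the fisheye circle rather than the paper's exact formula:

```python
import numpy as np

# Hypothetical 200x200 segmentation mask: 1 = sky, 0 = non-sky.
h = w = 200
mask = np.zeros((h, w), dtype=np.uint8)
mask[:100, :] = 1                      # pretend the upper half is sky

# Only pixels inside the fisheye image circle count.
yy, xx = np.mgrid[:h, :w]
inside = (xx - w / 2) ** 2 + (yy - h / 2) ** 2 <= (w / 2) ** 2

svf = mask[inside].mean()
print(f"SVF = {svf:.2f}")
```

A zenith-angle weighting (pixels near the horizon contribute less solid angle) would refine this, but the pixel count is the core of the pipeline.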
Availability of Mobile Augmented Reality System for Urban Landscape Simulation - Tomohiro Fukuda
This slide is presented in CDVE2012 (The 9th International Conference on Cooperative Design, Visualization, and Engineering).
Abstract. This research presents the availability of a landscape simulation method for a mobile AR (Augmented Reality), comparing it with photo montage and VR (Virtual Reality) which are the main existing methods. After a pilot experiment with 28 subjects in Kobe city, a questionnaire about three landscape simulation methods was implemented. In the results of the questionnaire, the mobile AR method was well evaluated for reproducibility of a landscape, operability, and cost. An evaluation rated as better than equivalent was obtained in comparison with the existing methods. The suitability of mobile augmented reality for landscape simulation was found to be high.
SOAR: SENSOR ORIENTED MOBILE AUGMENTED REALITY FOR URBAN LANDSCAPE ASSESSMENT - Tomohiro Fukuda
This slide is presented in CAADRIA2012 (The 17th International Conference on Computer Aided Architectural Design Research in Asia).
Abstract. This research presents the development of a sensor oriented mobile AR system which realizes geometric consistency using GPS, a gyroscope and a video camera which are mounted in a smartphone for urban landscape assessment. A low cost AR system with high flexibility is realized. Consistency of the viewing angle of a video camera and a CG virtual camera, and geometric consistency between a video image and 3DCG are verified. In conclusion, the proposed system was evaluated as feasible and effective.
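The viewing-angle consistency verified here comes down to matching the CG virtual camera's field of view to the physical camera's. A minimal sketch with assumed sensor and lens values (not the paper's device):

```python
import math

# Hypothetical smartphone camera parameters.
sensor_h_mm = 4.29      # assumed sensor height
focal_mm = 4.25         # assumed lens focal length

# Vertical field of view of the pinhole model: 2 * atan(h / 2f).
vfov_deg = math.degrees(2 * math.atan(sensor_h_mm / (2 * focal_mm)))
print(f"vertical FOV = {vfov_deg:.1f} deg")   # set the CG camera to this value
```

If the CG camera's FOV differs from the video camera's, overlaid geometry appears the wrong size, so this single number is central to geometric consistency.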
GOAR: GIS Oriented Mobile Augmented Reality for Urban Landscape Assessment - Tomohiro Fukuda
This slide is presented in CMC2012 (2012 4th International Conference on Communications, Mobility, and Computing).
Abstract. This research presents the development of a mobile AR system which realizes geometric consistency using GIS, a gyroscope and a video camera which are mounted in a smartphone for urban landscape assessment. A low-cost AR system with high flexibility is developed. Geometric consistency between a video image and 3DCG is verified. In conclusion, the proposed system was evaluated as feasible and effective.
Invited talk on AR/SLAM and IoT in ILAS Seminar: Introduction to IoT and Security, Kyoto University, 2020.
(https://www.z.k.kyoto-u.ac.jp/freshman-guide/ilas-seminars/ )
◆Speaker: Tomoyuki Mukasa
This document summarizes HCChang's research interests and experience in dense visual simultaneous localization and mapping (SLAM). It begins with an overview of monoSLAM, PTAM, FAB-MAP and DTAM as examples of visual SLAM techniques. It then provides more detail on KinectFusion, the seminal dense visual SLAM method, and extensions like InfiniTAM, ElasticFusion and DynamicFusion. The document outlines HCChang's background and current work using time-of-flight cameras at EZImage to improve depth sensing. It proposes future work on dense visual SLAM including deploying to Nvidia's TX1 and TK1 platforms, adding loop closures and path optimization, and reconstruct
Markerless motion capture for 3D human model animation using depth camera - TELKOMNIKA JOURNAL
3D animation is created using a keyframe-based system in 3D animation software such as Blender and Maya. Due to the long time interval and the high expertise required for 3D animation, motion capture devices are used as an alternative, and the Microsoft Kinect v2 sensor is one of them. This research analyses the capabilities of the Kinect sensor in producing 3D human model animations using motion capture and a keyframe-based animation system, in reference to a live motion performance. The quality, time interval, and cost of both animation results were compared. The experimental results show that the motion capture system with the Kinect sensor consumed less time (only 2.6%) and cost (30%) in the long run (10 minutes of animation) compared to the keyframe-based system, but it produced lower-quality animation. This was due to the lack of body detection accuracy when there is obstruction. Moreover, the sensor's constant assumption that the performer's body faces forward made it unreliable for a wide variety of movements. Furthermore, the standard test defined in this research covers most body parts' movements and can be used to evaluate other motion capture systems.
This document provides an overview and summary of a presentation on Simultaneous Localization and Mapping (SLAM). It introduces the speaker, Dong-Won Shin, and his background and research in SLAM. The contents of the presentation are then outlined, including an introduction to SLAM, traditional SLAM approaches like Extended Kalman Filter SLAM and FastSLAM, efforts towards large-scale mapping like graph-based SLAM and loop closure detection, modern state-of-the-art systems like ORB SLAM, KinectFusion and Lidar SLAM, and applications of SLAM. Key algorithms in visual odometry, backend optimization, and loop closure detection are also summarized.
2008 brokerage 03 scalable 3 d models [compatibility mode] - imec.archive
1) There is a trend towards capturing and modeling massive 3D environments and dynamic 4D scenes for applications like virtual worlds, games, and navigation systems.
2) Acquiring and processing large amounts of 3D data poses challenges for technologies related to acquisition, editing, transmission, rendering and presentation as the scale increases.
3) The document discusses various methods for large-scale 3D acquisition including structure from motion, stereo vision, LIDAR, structured light scanning, as well as challenges in editing, streaming, and rendering massive 3D models.
This document discusses using the Kinect for 3D scanning. It begins with an introduction to 3D scanning and its importance. It then discusses the Kinect hardware and software tools used, including Point Cloud Library (PCL) and drivers. The implementation section explains the process of reading depth data, filtering, downsampling, plane removal, registration and outputting the final point cloud. Key steps include depth filtering, downsampling, plane removal, registration of multiple scans and using Geomagic software for refinement. The Kinect is used to capture 3D data which is then processed using PCL for applications such as animation, games and industrial use.
Real-time 3D Object Pose Estimation and Tracking for Natural Landmark Based V... - c.choi
1) The document describes a real-time method for estimating and tracking the 3D pose of a rigid object using either a mono or stereo camera.
2) The method combines scale invariant feature matching (SIFT) for initial pose estimation with optical flow-based tracking (KLT) for efficient local pose estimation.
3) Outliers in the tracking are removed using RANSAC to improve accuracy, and tracking restarts from initial pose estimation if the number of inliers falls below a threshold.
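The outlier-rejection and restart logic in step 3 can be sketched with a toy RANSAC. Here a simple 2D translation model stands in for the full 3D pose model, and all data and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic matches: 60 tracked features, 15 of them bad (outliers).
src = rng.uniform(0, 100, (60, 2))
dst = src + np.array([5.0, -3.0])          # true motion
dst[:15] = rng.uniform(0, 100, (15, 2))

def ransac_translation(src, dst, iters=200, tol=1.0):
    """Estimate a 2D translation robustly from 1-point hypotheses."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                          # hypothesis
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

t, inliers = ransac_translation(src, dst)
MIN_INLIERS = 20                           # assumed restart threshold
if inliers.sum() < MIN_INLIERS:
    print("too few inliers: restart from SIFT-based initial pose estimation")
print(t, int(inliers.sum()))
```

The same pattern — hypothesize from a minimal sample, count inliers, refit on the consensus set, restart when inliers drop — carries over directly to pose estimation with a full projection model.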
The document provides an overview of techniques for generating 3D models from images, including assisted modeling using tools like SketchUp, photogrammetry using tools like ImageModeler and PhotoModeler, and multi-view stereo matching using tools like the Photosynth Toolkit, SfM Toolkit, Python Photogrammetry Toolbox, and Autodesk 123Dcatch. It discusses the underlying principles, calibration processes, dense matching techniques, and final model creation steps involved. The document recommends several free and commercial software tools and provides additional online resources for learning more about these 3D reconstruction methods.
- The document describes using the MicMac photogrammetry software to create orthophotographs, point clouds, and digital surface models (DSMs) from Pleiades satellite images.
- It reviews two papers on using MicMac and discusses their methods, which include image orientation, tie point calculation, sparse point cloud generation, georeferencing, and dense image correlation to produce outputs.
- The results section shows sample outputs including tie points, sparse point clouds, DEMs, shaded relief images, and orthophotographs produced for case studies in the two papers.
This document discusses using the Microsoft Kinect for 3-D mapping of rooms. It describes how the Kinect uses an infrared sensor and RGB camera to create depth images and produce 3D point clouds of environments. The document outlines how SLAM algorithms can then be used to extract visual keypoints from images to localize points in 3D space and build consistent maps over time as the Kinect moves. These maps could be used for robot navigation or teleoperation. The Kinect is presented as a low-cost alternative to more expensive depth sensing systems.
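The depth-image-to-point-cloud step can be sketched with pinhole back-projection, X = (u - cx)z/fx and Y = (v - cy)z/fy. The intrinsics below are typical Kinect-like values used only for illustration:

```python
import numpy as np

# Assumed Kinect-like intrinsics (focal lengths and principal point).
fx = fy = 525.0
cx, cy = 319.5, 239.5

depth = np.full((480, 640), 2.0)          # synthetic: flat wall 2 m away
v, u = np.mgrid[:480, :640]               # pixel row (v) and column (u)
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)
print(cloud.shape)                        # one 3D point per depth pixel
```

Each frame yields such a cloud in the camera frame; SLAM then estimates the camera motion so that successive clouds can be fused into one consistent map.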
3D Scanners and their Economic Feasibility - Jeffrey Funk
These slides use concepts from my (Jeff Funk) course entitled analyzing hi-tech opportunities to analyze how the economic feasibility of 3D scanners is becoming better through improvements in lasers, camera ICs, and processor ICs. 3D scanning is both a complement to 3D printing and a technology with its own unique applications. 3D printing of complex objects can be done from a CAD database or from a 3D scan where a 3D scan can be done with laser or other sources of white light such as LEDs.
3D scanning can also be done for other purposes. For example, scientists and engineers are using 3D scanners to survey archeological, construction, crime scene, and engineering sites, to document maintenance and repair of engineered systems, and to customize medical and dental products for humans. Improvements in lasers, LEDs, camera chips, ICs, and other components continue to improve the economic feasibility of 3D scanning. Longer wavelength lasers increase the scanning range, better camera chips improve the scanning resolution, and better lasers, camera chips, and processor ICs reduce the scanning time. For example, third generation scanners from Argon, one leading supplier, have 100 times higher resolution and one tenth the scan times of Argon’s first generation system.
For costs, lasers make up the largest percentage followed by camera and processor ICs. For example, lasers make up 80% of the hardware cost for one high-end system with a current cost of $1346 and a price of about $3000. As laser costs fall and as volumes enable smaller margins, the price of such systems will fall.
For the same reasons, low-end systems continue to emerge. These include Microsoft’s Kinect and an app for the iPhone. Microsoft’s Kinect was $150 while the app was only $4.99, both in early 2013. As such low-end systems proliferate, and high-end systems continue to get cheaper, 3D scanning will find new applications.
This document analyzes KinectFusion, a real-time 3D reconstruction system using a moving depth camera. It introduces SLAMBench, a benchmarking framework for KinectFusion. The document describes the KinectFusion pipeline including preprocessing, tracking, integration and raycasting steps. It evaluates several RGB-D datasets and identifies the Washington RGB-D Scenes dataset as most suitable. It notes drawbacks in KinectFusion like noisy trajectories and inconsistent models. Future work proposed is reducing tracking noise using a Kalman filter.
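The proposed Kalman-filter smoothing of noisy camera trajectories can be sketched per coordinate with a constant-position model. The data is synthetic and the process/measurement variances are assumed values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic camera coordinate: slow drift plus measurement noise.
true_x = np.linspace(0.0, 1.0, 200)
meas = true_x + rng.normal(0.0, 0.05, 200)

q, r = 1e-4, 0.05 ** 2        # assumed process / measurement variances
x_est, p = meas[0], 1.0
smoothed = []
for z in meas:
    p += q                    # predict: uncertainty grows
    k = p / (p + r)           # Kalman gain
    x_est += k * (z - x_est)  # update toward the measurement
    p *= 1.0 - k
    smoothed.append(x_est)
smoothed = np.array(smoothed)

err_raw = float(np.std(meas - true_x))
err_kf = float(np.std(smoothed - true_x))
print(err_raw, err_kf)        # filtering should shrink the error spread
```

A full trajectory filter would track position and velocity in 3D (and rotation), but the gain/update structure is the same per dimension.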
Preliminary study of multi view imaging for accurate 3 d reconstruction using...eSAT Journals
Abstract. This paper presents a multi-view structured-light approach for surface scanning to reconstruct three-dimensional (3D) objects using a turntable. It modifies the DAVID SLS-1 Structured-Light Scanner as a starting point for improving and building a complete 3D structured-light scanning system. This type of scanner uses a video projector to project various patterns onto the object to be digitized or reconstructed into a 3D model. At the same time, a camera records at least one image of each pattern from a certain point of view, for example from the right, left, above, or below the video projector. Then, 3D meshes of the object's surface are computed based on the deformations of the projected patterns. The preliminary results show that the objects, which are models of prostheses, are successfully reconstructed. Index Terms: 3D scanner, structured-light scanner, 3D reconstruction, multiple-view
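The depth recovery behind a projector-camera pair follows the same triangulation rule as stereo, z = f·b/d. A minimal sketch with illustrative numbers:

```python
# Assumed (illustrative) projector-camera geometry.
f_px = 800.0      # focal length in pixels
baseline_m = 0.2  # projector-camera baseline in metres

def depth_from_disparity(d_px):
    """Depth from the observed pattern shift (disparity) in pixels."""
    return f_px * baseline_m / d_px

print(depth_from_disparity(40.0))   # 800 * 0.2 / 40 = 4.0 m
```

The projected pattern exists only to make the disparity d observable densely and unambiguously on textureless surfaces.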
The document discusses 3D laser scanning, including its process, applications, benefits, and drawbacks. 3D laser scanning uses a laser beam to create a point cloud representation of an object's geometric surface by recording distance values within the scanner's field of view. The laser scanner consists of a laser system and camera that passes a laser line over an object's surface to capture 3D data points, allowing accurate models to be created digitally without touching the physical object. Applications include entertainment, 3D photography, and law enforcement. Benefits are saving time on complex modeling and accurate surface representation, while drawbacks include large file sizes and requiring post-processing and high-end technology.
ECWAY Technologies provides IEEE projects and software developments related to image processing, computer vision, and pattern recognition. They have offices in multiple cities in India. The document then lists 75 IEEE project titles from 2013 related to these topics, with many involving MATLAB. The projects cover areas like 3D reconstruction, image restoration, tracking, segmentation, watermarking, and more.
Visualizing the engineering project lifecycle - Unite Copenhagen - Unity Technologies
From design to operations, visualization is a powerful tool to drive informed decision-making on major projects. Point clouds, virtual reality and mobile apps are combining to enable better outcomes throughout the engineering industry. Join this session led by Aurecon to learn how Unity can empower engineers to increase efficiency and realize value through every stage of the project lifecycle.
Speaker: Michael Gardiner - Aurecon
Session available here: https://youtu.be/dixtTbGcCFg
Handout from my presentation at Autodesk University 2014 discussing how to coordinate civil and building models between Civil 3D, Revit, Navisworks, and InfraWorks.
2D to 3D dynamic modeling of architectural plans in Augmented Reality - IRJET Journal
This document describes a research paper that developed an augmented reality application called AR-CHI-TECH. The application takes a 2D floorplan drawing as input, dynamically generates a 3D model from it using image processing and 3D modeling techniques, and then displays the 3D model in augmented reality. This allows users to better visualize architectural plans. The application was created using the Unity3D game engine and Vuforia augmented reality engine. It processes the 2D floorplan using OpenCV to extract structural elements and convert it into a 3D model file. This 3D model is then overlaid onto the original floorplan image using Vuforia when viewed through a mobile device, allowing users to see the floorplan in 3D.
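The 2D-to-3D generation step can be sketched as extruding each wall segment detected in the floorplan into a vertical quad. The segment coordinates, wall height, and function name below are hypothetical, not from the paper's pipeline:

```python
# Assumed wall height for the extrusion (metres).
WALL_HEIGHT = 2.7

def extrude_wall(p0, p1, h=WALL_HEIGHT):
    """Turn a 2D wall segment (plan coordinates) into four 3D corner
    vertices of a vertical quad, with y as the up axis."""
    (x0, z0), (x1, z1) = p0, p1
    return [(x0, 0.0, z0), (x1, 0.0, z1), (x1, h, z1), (x0, h, z0)]

quad = extrude_wall((0.0, 0.0), (4.0, 0.0))
print(quad)
```

Repeating this over every detected segment, then adding floor and opening geometry, yields the mesh that engines like Unity3D can render over the Vuforia image target.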
Latching onto Unity Reflect to push rich BIM data - BIMEngus1
BIM Engineering US is a leading Building Information Technology solution service provider offering end-to-end solutions in Mechanical, Electrical, Plumbing and Fire Protection systems engineering, design and construction.
SOAR: SENSOR ORIENTED MOBILE AUGMENTED REALITY FOR URBAN LANDSCAPE ASSESSMENTTomohiro Fukuda
This slide is presented in CAADRIA2012 (The 17th International Conference on Computer Aided Architectural Design Research in Asia).
Abstract. This research presents the development of a sensor oriented mobile AR system which realizes geometric consistency using GPS, a gyroscope and a video camera which are mounted in a smartphone for urban landscape assessment. A low cost AR system with high flexibility is realized. Consistency of the viewing angle of a video camera and a CG virtual camera, and geometric consistency between a video image and 3DCG are verified. In conclusion, the proposed system was evaluated as feasible and effective.
GOAR: GIS Oriented Mobile Augmented Reality for Urban Landscape AssessmentTomohiro Fukuda
This slide is presented in CMC2012 (2012 4th International Conference on
Communications, Mobility, and Computing).
Abstract. This research presents the development of a mobile AR system which realizes geometric consistency
using GIS, a gyroscope and a video camera which are mounted in a smartphone for urban landscape assessment. A low cost AR system with high flexibility is developed.
Geometric consistency between a video image and 3DCG are verified. In conclusion, the proposed system was evaluated as feasible and effective.
Invited talk on AR/SLAM and IoT in ILAS Seminar :Introduction to IoT and
Security, Kyoto University, 2020.
(https://www.z.k.kyoto-u.ac.jp/freshman-guide/ilas-seminars/ )
◆登壇者: Tomoyuki Mukasa
This document summarizes HCChang's research interests and experience in dense visual simultaneous localization and mapping (SLAM). It begins with an overview of monoSLAM, PTAM, FAB-MAP and DTAM as examples of visual SLAM techniques. It then provides more detail on KinectFusion, the seminal dense visual SLAM method, and extensions like InfiniTAM, ElasticFusion and DynamicFusion. The document outlines HCChang's background and current work using time-of-flight cameras at EZImage to improve depth sensing. It proposes future work on dense visual SLAM including deploying to Nvidia's TX1 and TK1 platforms, adding loop closures and path optimization, and reconstruct
Markerless motion capture for 3D human model animation using depth cameraTELKOMNIKA JOURNAL
3D animation is created using keyframe based system in 3D animation software such as Blender and Maya. Due to the long time interval and the need of high expertise in 3D animation, motion capture devices were used as an alternative and Microsoft Kinect v2 sensor is one of them. This research analyses the capabilities of the Kinect sensor in producing 3D human model animations using motion capture and keyframe based animation system in reference to a live motion performance. The quality, time interval and cost of both animation results were compared. The experimental result shows that motion capture system with Kinect sensor consumed less time (only 2.6%) and cost (30%) in the long run (10 minutes of animation) compare to keyframe-based system, but it produced lower quality animation. This was due to the lack of body detection accuracy when there is obstruction. Moreover, the sensor’s constant assumption that the performer’s body faces forward made it unreliable to be used for a wide variety of movements. Furthermore, standard test defined in this research covers most body parts’ movements to evaluate other motion capture system.
This document provides an overview and summary of a presentation on Simultaneous Localization and Mapping (SLAM). It introduces the speaker, Dong-Won Shin, and his background and research in SLAM. The contents of the presentation are then outlined, including an introduction to SLAM, traditional SLAM approaches like Extended Kalman Filter SLAM and FastSLAM, efforts towards large-scale mapping like graph-based SLAM and loop closure detection, modern state-of-the-art systems like ORB SLAM, KinectFusion and Lidar SLAM, and applications of SLAM. Key algorithms in visual odometry, backend optimization, and loop closure detection are also summarized.
2008 brokerage 03 scalable 3D models [compatibility mode] - imec.archive
1) There is a trend towards capturing and modeling massive 3D environments and dynamic 4D scenes for applications like virtual worlds, games, and navigation systems.
2) Acquiring and processing large amounts of 3D data poses challenges for technologies related to acquisition, editing, transmission, rendering and presentation as the scale increases.
3) The document discusses various methods for large-scale 3D acquisition including structure from motion, stereo vision, LIDAR, structured light scanning, as well as challenges in editing, streaming, and rendering massive 3D models.
This document discusses using the Kinect for 3D scanning. It begins with an introduction to 3D scanning and its importance. It then discusses the Kinect hardware and software tools used, including Point Cloud Library (PCL) and drivers. The implementation section explains the process of reading depth data, filtering, downsampling, plane removal, registration and outputting the final point cloud. Key steps include depth filtering, downsampling, plane removal, registration of multiple scans and using Geomagic software for refinement. The Kinect is used to capture 3D data which is then processed using PCL for applications such as animation, games and industrial use.
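The downsampling step in the pipeline above can be sketched in a few lines. The following is an illustrative pure-Python voxel-grid filter in the spirit of PCL's VoxelGrid class; the function name and the plain-tuple point format are invented for illustration, not PCL's actual API:

```python
# Minimal voxel-grid downsampling sketch: points falling in the same
# cubic voxel are replaced by their centroid, thinning dense regions
# while preserving overall geometry.
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Group points by voxel index, then return one centroid per voxel."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    centroids = []
    for pts in buckets.values():
        n = len(pts)
        centroids.append((sum(p[0] for p in pts) / n,
                          sum(p[1] for p in pts) / n,
                          sum(p[2] for p in pts) / n))
    return centroids

# Two nearby points collapse into one centroid; the distant point survives.
cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
print(voxel_downsample(cloud, 1.0))
```

The same grouping idea underlies the production filters; real implementations additionally handle hashing of large grids and per-voxel attributes such as color and normals.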
Real-time 3D Object Pose Estimation and Tracking for Natural Landmark Based V... - c.choi
1) The document describes a real-time method for estimating and tracking the 3D pose of a rigid object using either a mono or stereo camera.
2) The method combines scale-invariant feature (SIFT) matching for initial pose estimation with optical flow-based (KLT) tracking for efficient local pose estimation.
3) Outliers in the tracking are removed using RANSAC to improve accuracy, and tracking restarts from initial pose estimation if the number of inliers falls below a threshold.
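The RANSAC outlier rejection in point 3 follows a generic loop: fit a model to a random minimal sample, count inliers, and keep the best hypothesis. A minimal sketch with a 2D line as the model (the pose-tracking case runs the same loop with a pose model; function name and thresholds here are illustrative):

```python
# Generic RANSAC sketch: repeatedly fit a candidate line to two random
# points and keep the candidate that explains the most points within a
# distance tolerance, discarding gross outliers.
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                       # skip degenerate vertical sample
        a = (y2 - y1) / (x2 - x1)          # slope of the candidate line
        b = y1 - a * x1                    # intercept
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Eight points on y = 2x plus two gross outliers: RANSAC keeps the line points.
pts = [(x, 2 * x) for x in range(8)] + [(1, 9.0), (5, -4.0)]
print(len(ransac_line(pts)))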
The document provides an overview of techniques for generating 3D models from images, including assisted modeling using tools like SketchUp, photogrammetry using tools like ImageModeler and PhotoModeler, and multi-view stereo matching using tools like the Photosynth Toolkit, SfM Toolkit, Python Photogrammetry Toolbox, and Autodesk 123Dcatch. It discusses the underlying principles, calibration processes, dense matching techniques, and final model creation steps involved. The document recommends several free and commercial software tools and provides additional online resources for learning more about these 3D reconstruction methods.
- The document describes using the MicMac photogrammetry software to create orthophotographs, point clouds, and digital surface models (DSMs) from Pleiades satellite images.
- It reviews two papers on using MicMac and discusses their methods, which include image orientation, tie point calculation, sparse point cloud generation, georeferencing, and dense image correlation to produce outputs.
- The results section shows sample outputs including tie points, sparse point clouds, DEMs, shaded relief images, and orthophotographs produced for case studies in the two papers.
This document discusses using the Microsoft Kinect for 3-D mapping of rooms. It describes how the Kinect uses an infrared sensor and RGB camera to create depth images and produce 3D point clouds of environments. The document outlines how SLAM algorithms can then be used to extract visual keypoints from images to localize points in 3D space and build consistent maps over time as the Kinect moves. These maps could be used for robot navigation or teleoperation. The Kinect is presented as a low-cost alternative to more expensive depth sensing systems.
3D Scanners and their Economic Feasibility - Jeffrey Funk
These slides use concepts from my (Jeff Funk) course, Analyzing Hi-Tech Opportunities, to analyze how the economic feasibility of 3D scanners is improving through advances in lasers, camera ICs, and processor ICs. 3D scanning is both a complement to 3D printing and a technology with its own unique applications. 3D printing of complex objects can be done from a CAD database or from a 3D scan, where a 3D scan can be done with a laser or other sources of white light such as LEDs.
3D scanning can also be done for other purposes. For example, scientists and engineers are using 3D scanners to survey archeological, construction, crime scene, and engineering sites, to document maintenance and repair of engineered systems, and to customize medical and dental products for humans. Improvements in lasers, LEDs, camera chips, ICs, and other components continue to improve the economic feasibility of 3D scanning. Longer wavelength lasers increase the scanning range, better camera chips improve the scanning resolution, and better lasers, camera chips, and processor ICs reduce the scanning time. For example, third generation scanners from Argon, one leading supplier, have 100 times higher resolution and one tenth the scan times of Argon’s first generation system.
For costs, lasers make up the largest percentage followed by camera and processor ICs. For example, lasers make up 80% of the hardware cost for one high-end system with a current cost of $1346 and a price of about $3000. As laser costs fall and as volumes enable smaller margins, the price of such systems will fall.
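The cost figures above imply a quick back-of-envelope check (assuming, as the text suggests, that the $1346 figure is the hardware bill of materials and $3000 the selling price):

```python
# Illustrative arithmetic on the high-end scanner example: the lasers'
# dollar cost and the implied gross margin at the quoted price.
hardware_cost = 1346.0   # total hardware cost of the high-end system, USD
price = 3000.0           # approximate selling price, USD
laser_share = 0.80       # lasers' share of the hardware cost

laser_cost = laser_share * hardware_cost          # dollars spent on lasers
gross_margin = (price - hardware_cost) / price    # fraction of price kept
print(round(laser_cost, 2), round(gross_margin, 3))
```

So roughly $1077 of the hardware cost is lasers, and the margin is a little over half the price, which is why falling laser costs dominate the price trajectory described above.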
For the same reasons, low-end systems continue to emerge. These include Microsoft’s Kinect and an app for the iPhone. Microsoft’s Kinect was $150 while the app was only $4.99, both in early 2013. As such low-end systems proliferate, and high-end systems continue to get cheaper, 3D scanning will find new applications.
This document analyzes KinectFusion, a real-time 3D reconstruction system using a moving depth camera. It introduces SLAMBench, a benchmarking framework for KinectFusion. The document describes the KinectFusion pipeline including preprocessing, tracking, integration and raycasting steps. It evaluates several RGB-D datasets and identifies the Washington RGB-D Scenes dataset as most suitable. It notes drawbacks in KinectFusion like noisy trajectories and inconsistent models. Future work proposed is reducing tracking noise using a Kalman filter.
Preliminary study of multi-view imaging for accurate 3D reconstruction using... - eSAT Journals
Abstract: This paper presents a multi-view structured-light approach for surface scanning to reconstruct a three-dimensional (3D) object using a turntable. It modifies the DAVID SLS-1 (Structured-Light Scanner) as a starting point for studying, improving, and building a complete 3D structured-light scanner system. This type of scanner uses a video projector to project various patterns onto the object to be digitized, while a camera captures at least one image of each pattern from a certain point of view, for example from the right, left, above or below of the video projector. 3D meshes of the object's surface are then computed from the deformations of the projected patterns. The preliminary results show that the objects, models of prostheses, are successfully reconstructed. Index Terms: 3D scanner, structured-light scanner, 3D reconstruction, multiple-view
The document discusses 3D laser scanning, including its process, applications, benefits, and drawbacks. 3D laser scanning uses a laser beam to create a point cloud representation of an object's geometric surface by recording distance values within the scanner's field of view. The laser scanner consists of a laser system and camera that passes a laser line over an object's surface to capture 3D data points, allowing accurate models to be created digitally without touching the physical object. Applications include entertainment, 3D photography, and law enforcement. Benefits are saving time on complex modeling and accurate surface representation, while drawbacks include large file sizes and requiring post-processing and high-end technology.
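The distance recovery described above rests on triangulation between the laser and the camera: the nearer the surface, the further the laser spot shifts on the sensor. A minimal sketch of the similar-triangles relation (the numbers are illustrative, not from the document):

```python
# Active triangulation sketch: depth from the observed pixel offset of a
# laser spot, given the laser-camera baseline and the focal length in pixels.
def triangulate_depth(baseline_m, focal_px, offset_px):
    """Depth via similar triangles: z = f * b / offset (stereo disparity form)."""
    return focal_px * baseline_m / offset_px

# A 10 cm laser-camera baseline, 800 px focal length, 40 px spot offset:
z = triangulate_depth(0.10, 800, 40)
print(z)
```

Sweeping the laser line over the object and repeating this computation per pixel yields the point cloud the document describes.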
ECWAY Technologies provides IEEE projects and software developments related to image processing, computer vision, and pattern recognition. They have offices in multiple cities in India. The document then lists 75 IEEE project titles from 2013 related to these topics, with many involving MATLAB. The projects cover areas like 3D reconstruction, image restoration, tracking, segmentation, watermarking, and more.
Visualizing the engineering project lifecycle - Unite Copenhagen - Unity Technologies
From design to operations, visualization is a powerful tool to drive informed decision-making on major projects. Point clouds, virtual reality and mobile apps are combining to enable better outcomes throughout the engineering industry. Join this session led by Aurecon to learn how Unity can empower engineers to increase efficiency and realize value through every stage of the project lifecycle.
Speaker: Michael Gardiner - Aurecon
Session available here: https://youtu.be/dixtTbGcCFg
Handout from my presentation at Autodesk University 2014 discussing how to coordinate civil and building models between Civil 3D, Revit, Navisworks, and InfraWorks.
2D to 3D dynamic modeling of architectural plans in Augmented Reality - IRJET Journal
This document describes a research paper that developed an augmented reality application called AR-CHI-TECH. The application takes a 2D floorplan drawing as input, dynamically generates a 3D model from it using image processing and 3D modeling techniques, and then displays the 3D model in augmented reality. This allows users to better visualize architectural plans. The application was created using the Unity3D game engine and the Vuforia augmented reality engine. It processes the 2D floorplan using OpenCV to extract structural elements and convert it into a 3D model file. This 3D model is then overlaid onto the original floorplan image using Vuforia when viewed through a mobile device, allowing users to see the floorplan in 3D.
Latching onto Unity Reflect to push rich BIM data - BIMEngus1
BIM Engineering US is a leading Building Information Technology solution service provider offering end to end solutions in Mechanical, Electrical, Plumbing and Fire Protection systems engineering, design and construction.
Application Of Building Information Modeling (BIM) To Civil Engineering Projects - Michele Thomas
This document summarizes the application of Building Information Modeling (BIM) to a civil engineering project of constructing a 16-floor residential building in Mumbai, India. It describes the methodology used, which involved creating a 3D model in Revit, scheduling the project in Microsoft Project, and linking the 3D model and schedule in Navisworks to generate a 4D model. This allowed visualization of the construction sequence over time. Benefits of the BIM approach included improved collaboration, design coordination, documentation, and visualization compared to conventional methods. The 4D model helped in planning, managing conflicts, and predicting project progress.
Furnspace 3D InteriCAD T5 Interior Design Software - furnspace
The document describes InteriCAD T5, professional interior design software. It has several competitive advantages including an independent CAD platform, user-friendly modeling functions, a large library of updates, material editing tools, and powerful virtual reality technology. The software allows for photo-realistic rendering with fast rendering times. It provides multiple output and presentation formats. InteriCAD T5 is used globally by interior designers for modeling, rendering, 2D design, presentations, and as a business solution.
Storytelling using Immersive Technologies - Kumar Ahir
This is Kickstarter presentation for understanding the domain of Immersive technologies and giving a guide to creating an immersive experience using Unity, Vuforia and Aframe.
Even before we get into how to do storytelling using this new medium, we need to understand what's possible and where it is heading, which positions us better to design the story and capture it properly.
This helps you understand the ecosystem of immersive technologies from business, product, design and development perspectives.
Citizen Participatory Design Method Using VR and Blog - Tomohiro Fukuda
This slide deck was presented at CAADRIA2008 (the 13th International Conference on Computer Aided Architectural Design Research in Asia).
This research concerned the establishment of a citizen participatory design method using VR and CGM. Problems in citizen participatory design are addressed, and a continuous study method using VR and a blog is shown. Evaluation is then conducted using an actual design project as a case study. Furthermore, the VR functions needed through the case study are developed. Using this method, a small patio on which parasols were permanently and lawfully set up on a road lot was completed.
Navisworks is a BIM software that allows stakeholders to collaborate by integrating models from other design software. It goes beyond design by enabling real-time walkthroughs, animations, and lighting effects of the actual model. Key benefits of Navisworks include comprehensive model review and clash detection capabilities, support for 2D drawings, tools for creating realistic visualizations and walkthroughs, cloud collaboration features, and precise clash detection across models. Navisworks reduces rework through accurate analysis and helps stakeholders visualize designs.
Build-IT - An Interactive Web Application for 3D Construction, Interior & Ext... - Renien Joseph
This presentation was presented at IEEE Conference 2014 - 5th International Conference on Intelligent Systems, Modeling and Simulation (ISMS).
Summary of my final year project:
3D building models are extremely helpful throughout the architecture, engineering and construction (AEC) lifecycle. Such models, coupled with virtual walkthroughs, enable customers to decide on and be satisfied with their dream building. Manually creating a polygonal 3D model from a set of floor plans is nontrivial and requires skill and time. This project introduces and reviews a mechanism for applying interior and exterior design constructs after the conversion of 2D drawings into a 3D Building Information Model (BIM). This research demonstrates automated 3D model reconstruction of a real-world object from an uncalibrated image sequence targeting the same scene, which can be used for interior and exterior design. There are many key techniques in 3D reconstruction from image sequences, including feature matching, fundamental matrix estimation, projective reconstruction, camera self-calibration, dense stereo matching and Euclidean reconstruction.
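The fundamental matrix estimation mentioned in that pipeline rests on the epipolar constraint: corresponding points x and x' must satisfy x'ᵀ F x = 0. A tiny illustrative check (the helper function and test points are invented here; for a camera translating purely sideways, F reduces, up to scale, to the skew matrix of t = (1, 0, 0), so matches must lie on the same image row):

```python
# Epipolar constraint sketch: evaluate the residual x2^T F x1 for
# homogeneous image points (x, y, 1). A (near-)zero residual means the
# pair is consistent with the camera geometry encoded in F.
def epipolar_residual(F, x1, x2):
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]   # [t]_x for sideways translation t = (1, 0, 0)
good = epipolar_residual(F, (0.3, 0.5, 1), (0.1, 0.5, 1))  # same row: consistent
bad  = epipolar_residual(F, (0.3, 0.5, 1), (0.1, 0.9, 1))  # off-row: inconsistent
print(good, bad)
```

Estimation methods such as the eight-point algorithm solve for the F that drives these residuals to zero over many matched feature pairs, typically inside a RANSAC loop to reject bad matches.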
This document discusses the application of drone technology in the construction industry. It begins with an introduction on how drones can provide aerial views and data to optimize construction operations. The main objectives, scope and benefits of using drones are then outlined, such as enabling fast site surveying, progress monitoring, and safety inspections. Examples of drone applications in planning, execution and maintenance are provided. Risks like regulatory compliance and insurance are also covered. The document concludes that drones have the potential to revolutionize the construction sector through improved efficiency if technological and regulatory challenges can be addressed.
This document provides the detailed contents for the subject "Computer Applications in Architecture - III". It includes 5 main topics:
1. Fundamentals of 3-D Drafting and exercises in converting shapes to 3D objects.
2. Making existing 2D plans compatible with 3D drafting.
3. 3D modeling, viewing, rendering and importing/exporting models.
4. Using Adobe Photoshop and Corel Draw for rendering 3D blocks.
5. Using PowerPoint for presentations including creating, viewing and editing presentations.
The instructional strategy focuses on giving practical exercises to students to develop proficiency in the software. Expert lectures and case studies are recommended to motivate students.
This document provides an introduction to the Graphisoft ArchiCAD Step by Step Tutorial. It discusses the concept of a virtual building as modeled in ArchiCAD, compared to traditional 2D CAD drafting. It also profiles how different architecture firms utilize ArchiCAD's virtual building tools in their design and documentation workflows. The introduction concludes by explaining how the step-by-step book and interactive content on the accompanying CD-ROM are intended to be used together to guide users through exercises to learn ArchiCAD.
3D-ICONS - D5.1: Report on 3D Publication Formats Suitable for Europeana - 3D ICONS Project
This document discusses suitable formats for publishing 3D content from the 3D-ICONS project on Europeana. It analyzes current 3D content and technologies on Europeana, including 3D PDF, WebGL, VRML, and plugins. The document evaluates technologies like 3D PDF, WebGL, Unity3D, Unreal Engine, and pseudo-3D for publishing a range of 3D models while meeting Europeana's requirements. Based on the analysis, the document proposes 3D PDF, WebGL, Unity3D, and pseudo-3D as formats that can effectively publish 3D-ICONS content for maximum accessibility and long-term preservation.
Carrie Dossick - Skanska, Greg Howes - Idea Building Homes - SeriousGamesAssoc
"Using 3D Technology to Architect Communities"
Not unlike a conference such as Serious Play, Construction is a production. Lots of different people work with big toys (dump trucks and tower cranes) to put big (and small) building components together (concrete walls, asphalt roofs, door handles and electrical sockets). The production of buildings requires work and play at many levels: architects, engineers, contractors, suppliers, consultants, trades, users, developers, and operators all engage the project in different and important ways. Consequently, modern construction projects are multi-player games where communication is critical to success. In this talk we share how a collaboration between academia and industry explored gaming platforms, Virtual Worlds, with emerging construction modeling tools known as Building Information Modeling (BIM) for better collaboration, communication, and team engagement.
In this keynote presentation, Matthew will review the workflow process DeWalt used to capture an entire 200,000 sq.ft building in just 2 days with the use of a FARO Focus3D Laser Scanner, register the entire project without the use of any scan targets using SCENE, and upload the data using a new web-based application known as Web Share Cloud. This workflow solved many of the complications in the beginning stages of this project, and will continue to help the project through its lifecycle over the next couple years.
This document summarizes Clement Chen's internship at Akipanel Architects Sdn Bhd from January to March 2017. During the internship, Clement worked on several projects including designing a clubhouse, rendering a Buddhist retreat development, and coloring CAD drawings. Clement learned about design considerations, rendering techniques, and preparing submission drawings. The internship exposed Clement to different project types and improved his AutoCAD, 3D modeling, and rendering skills.
This internship report summarizes the student's internship experience at Citylab Studio, an architecture firm in Kuala Lumpur, Malaysia. It describes the various stages and tasks the student was involved in during projects, including schematic design with 3D modeling and renderings, design development with detailed drawings, and permit application support with site visits. The student gained exposure to software like Rhino, Revit, Lumion and Photoshop and assisted with tasks like modeling, drawing, rendering, and material selection. Site visits provided guidance on construction methods and compliance with local authorities' regulations. Overall, the internship helped the student understand the workflow in an architecture practice.
The Poster of Application of terrestrial 3D laser scanning in building inform... - Martin Ma
The document discusses the methodology used in a project applying terrestrial 3D laser scanning in building information modeling (BIM). It describes using a Leica Scan Station C10 laser scanner to collect point cloud data of a target building. Cyclone software was then used to combine the point cloud data with a 3D BIM. Autodesk Revit, a BIM software, was used to create 3D models from the point cloud data. The project involved laser scanning a ship garage to collect point cloud data, processing the raw data in Cyclone, and creating a 3D BIM of the garage in Revit based on the point cloud.
The document outlines a project to build a 3D printer using a serial SCARA configuration with an MKS Gen 1.4 board and Marlin software. The 4-member team has completed collecting parts, preparing a bill of materials, CAD modeling, and assembly/fabrication. Remaining tasks include electronics testing, programming, and calibration. The goal is to create an affordable, portable 3D printer with auto bed leveling and good print quality.
Similar to CAADRIA2014: A Synchronous Distributed Design Study Meeting Process with Annotation Function (20)
The document is composed entirely of copyright notices attributed to Tomohiro Fukuda, with no accompanying text, images, or other context.
DEVELOPMENT OF USE FLOW OF 3DCAD/VR SOFTWARE FOR CITIZENS WHO ARE NON-SPECIAL... - Tomohiro Fukuda
This slide deck was presented at CAADRIA2010 (the 15th International Conference on Computer Aided Architectural Design Research in Asia).
The purpose of this study is the development of a tool by which citizens who are non-specialists can design a regional revitalization project. To this end, a 3DCAD/VR (3-Dimensional Computer Aided Design/Virtual Reality) combination system was developed using SketchUp Pro, GIMP, and UC-win/Road. This system has the advantages of low cost and easy operation. The utility of the system was verified by applying the developed prototype system in the Super Science High School program for high school students, created by the Ministry of Education, Culture, Sports, Science and Technology, Japan. It has been used for two years, since 2007. In addition, the characteristics of the VR made by the non-specialists were considered.
Development and Evaluation of a Representation Method of 3DCG Pre-Rendering A... - Tomohiro Fukuda
This presentation was given at CAADRIA2008 (the 13th International Conference on Computer Aided Architectural Design Research in Asia).
As a method of disseminating environmental symbiosis design towards solving environmental problems, 3DCG pre-rendering animation (3DCGPRA), which has a high quality of representation and a powerful appeal, is expected to be particularly effective. After arranging the components required in an environmental symbiosis design, the representation targets that needed to be developed were clarified. In addition, representation methods for shade and shadow, grass, human activity, symbiosis methods, etc. were developed. In a real project, a 7-minute 3DCGPRA was created applying these new methods, and its validity was evaluated.
- Practical examples and best practices to implement right away
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Essentials of Automations: The Art of Triggers and Actions in FME
CAADRIA2014: A Synchronous Distributed Design Study Meeting Process with Annotation Function
1. A SYNCHRONOUS DISTRIBUTED
DESIGN STUDY MEETING
PROCESS WITH ANNOTATION
FUNCTION
TOMOHIRO FUKUDA 1), LEI SUN 1), KEISUKE MORI 2)
1) Division of Sustainable Energy and Environmental Engineering,
Graduate School of Engineering, Osaka University, Japan
2) Atelier DoN, Japan
CAADRIA2014, Kyoto
2. Contents
1. Introduction
2. State of the Art
3. Cloud Computing Type VR and Experimental Plan
1. Annotation Function of Cloud-VR
2. Experimental Plan
4. Results and Discussion
1. Results
2. Discussion
5. Conclusion
3. Contents
1. Introduction
2. State of the Art
3. Cloud Computing Type VR and Experimental Plan
1. Annotation Function of Cloud-VR
2. Experimental Plan
4. Results and Discussion
1. Results
2. Discussion
5. Conclusion
4. 1. Introduction
In recent years, architectural and urban design meetings that use VR to share
3D images have been held at a practical level, typically in a single room at a
scheduled time.
Virtual Design Studios (VDS) have been constructed by exploiting new computing
and communication technologies (Wojtowicz 1994, Maher 1999, Kvan 2000, Matsumoto 2006).
Most VDS system developments and design trials are of the asynchronous
distributed type, allowing stakeholders to participate in the design process
at various places and at different times.
Meanwhile, the mobility of people's activities and cloud computing
technologies have progressed rapidly in the age of information and
globalization.
5. 1. Introduction
Building on the previous approaches reviewed in Chapter 2, we defined the
following research question:
"How can a design team advance its design study in a synchronously
distributed environment by using the cloud computing type of VR (cloud-VR)
and its annotation function, which allows freehand sketching in a 3D virtual
environment?"
SPACE \ TIME      | Same time (synchronous)        | Different time (asynchronous)
Same place        | Electronic meeting system,     | Digital kiosk
(face to face)    | group decision support systems |
Different places  | Video conference,              | E-mail, bulletin boards,
(distribution)    | telephone                      | SNS (blogs)
The synchronous distributed type in the Time and Space Matrix: this research
targets the "same time, different places" quadrant.
6. Contents
1. Introduction
2. State of the Art
3. Cloud Computing Type VR and Experimental Plan
1. Annotation Function of Cloud-VR
2. Experimental Plan
4. Results and Discussion
1. Results
2. Discussion
5. Conclusion
7. 2. State of the Art
In a synchronous distributed environment, several design support systems
for sharing a 3D virtual space have been presented:
Dorta (2011)
Users were physically immersed in sketches and physical models and shared
them remotely.
Shen (2010)
Developed a visualization tool on a multi-user platform to represent
design alternatives.
However, the data volume of design content is usually large, so a powerful
GPU is required, and the speaker and listener could not organize a
conversation around a shared viewpoint at the same time.
Sun (2013)
The first approach towards a synchronous distributed design meeting
system on non-immersive cloud-VR.
However, no report has investigated the impact of synchronous distributed
cloud-VR meetings on an architectural design process.
This research therefore focused on how a design team can advance its design
study in a synchronously distributed environment by using the annotation
function, which allows freehand sketching in a 3D virtual environment.
8. Contents
1. Introduction
2. State of the Art
3. Cloud Computing Type VR and Experimental Plan
1. Annotation Function of Cloud-VR
2. Experimental Plan
4. Results and Discussion
1. Results
2. Discussion
5. Conclusion
9. 3.1. Annotation Function of Cloud-VR
3. Cloud Computing Type VR and Experimental Plan
3D VR content is transmitted from the cloud-VR server using the H.264
video compression standard.
Because real-time 3D rendering takes place on the server and is transmitted
quickly, the client does not require a computer with a powerful GPU. More
than 10 participants can synchronously share a viewpoint, design
alternatives, or the VR setup.
Architecture: the cloud-VR server creates the 3D scene with OpenGL and
compresses it with H.264; the client (Android "VRcloud" or Windows) displays
the video and returns the user's controller input over HTTP.
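The server-side rendering and thin-client arrangement can be sketched as follows. This is a minimal illustration, not the actual VRcloud API: all class and method names are our own stand-ins, and the H.264 step is reduced to a tagging stub in place of a real encoder.

```python
# Minimal sketch of the cloud-VR pipeline, assuming one shared scene: the
# server renders and "encodes" every frame, and thin clients only display
# the stream and forward controller input. Names are illustrative stand-ins.

from dataclasses import dataclass


@dataclass
class SharedViewpoint:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0


class CloudVRServer:
    def __init__(self):
        self.viewpoint = SharedViewpoint()

    def apply_input(self, dx: float, dy: float, dz: float) -> None:
        # A client's controller input moves the single shared viewpoint.
        self.viewpoint.x += dx
        self.viewpoint.y += dy
        self.viewpoint.z += dz

    def render_and_encode(self) -> bytes:
        # Stand-ins for server-side OpenGL rendering and H.264 compression.
        frame = f"frame@({self.viewpoint.x},{self.viewpoint.y},{self.viewpoint.z})"
        return b"H264:" + frame.encode()


class CloudVRClient:
    """Thin client: no GPU needed; it only displays video and sends input."""

    def __init__(self, server: CloudVRServer):
        self.server = server  # in the real system this hop is an HTTP connection

    def send_input(self, dx: float, dy: float, dz: float) -> None:
        self.server.apply_input(dx, dy, dz)

    def receive_frame(self) -> bytes:
        return self.server.render_and_encode()
```

Because rendering happens once on the server, every connected client receives the identical stream, which is what allows more than ten participants to share one viewpoint in synchronisation.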
10. 3.1. Annotation Function of Cloud-VR
When using a 3D virtual space to study design approaches, stakeholders
expect to be able to draw sketches and add figures and memos in the 3D
virtual space. The annotation function was developed and presented to meet
this requirement (Sun 2013).
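One hypothetical way to model such annotations is to treat each freehand stroke as a 2D overlay tied to the camera pose it was drawn from; the internal format of the actual system is not described here, so every name below is an assumption.

```python
# Hypothetical data model for shared freehand annotations. The stroke format
# is our assumption, not taken from the cloud-VR system itself.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass(frozen=True)
class Stroke:
    author: str
    points: Tuple[Tuple[float, float], ...]  # 2D screen-space polyline
    viewpoint: Tuple[float, float, float]    # camera pose it was drawn from


@dataclass
class AnnotationLayer:
    strokes: List[Stroke] = field(default_factory=list)

    def draw(self, author, points, viewpoint):
        # Called by whichever designer currently holds the operation authority.
        self.strokes.append(Stroke(author, tuple(points), tuple(viewpoint)))

    def visible_from(self, viewpoint):
        # A 2D overlay only lines up with the scene from the pose it was
        # drawn at, which is one reason moving the viewpoint mid-sketch is
        # problematic in such systems.
        return [s for s in self.strokes if s.viewpoint == viewpoint]
```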
11. 3.2. Experimental Plan
To consider the case of a collaborative architectural design meeting, we
assumed an early-stage project to reconstruct a low-rise residence that had
become obsolete amid collective housing developments.
Conditions for the target site:
Building width 17.6 m, building depth 6.8 m, road width 12 m
Building coverage ratio: 80%
Floor area ratio: 600%
A business district, in a fire protection zone
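As a back-of-envelope check of what these conditions permit, the ratios alone fix the buildable storey count. This arithmetic is our own, not a calculation reported in the presentation, and it ignores the setback and sky exposure restrictions that shaped the final form.

```python
# Rough buildable-volume check from the stated site conditions (our own
# arithmetic; setbacks and sky exposure restrictions are ignored).
import math

site_area = 17.6 * 6.8                    # width x depth, about 119.7 m^2
max_footprint = 0.80 * site_area          # building coverage ratio 80%
max_total_floor_area = 6.00 * site_area   # floor area ratio 600%

# Whole storeys available if every floor used the maximum footprint:
storeys = math.floor(max_total_floor_area / max_footprint)
print(storeys)  # 7, consistent with the seven-storey building decided on DAY 1
```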
12. 3.2. Experimental Plan
Three designers in different locations used Windows PCs with cloud-VR
installed, and Google Hangouts as a video conference system.
Designer 1 (Chiba, Japan): an architect with practical experience, who
created the architectural plan.
Designer 2 (Osaka, Japan): skilled in VR operation, and familiar with the
current situation of the target site.
Designer 3 (Heidelberg, Germany): documented the experimentation.
Experimentation: July 2013, 2 days
13. Contents
1. Introduction
2. State of the Art
3. Cloud Computing Type VR and Experimental Plan
1. Annotation Function of Cloud-VR
2. Experimental Plan
4. Results and Discussion
1. Results
2. Discussion
5. Conclusion
16. 4.1. Results: Design Process DAY 1
4. Results and Discussion
(* marks use of the annotation function)
ID  | Time (m:s) | Cloud-VR operation | Main speaker | Typical conversational content
01  | 0:00  | Designer 2 |            | The purpose of the design meeting was explained.
02  | 2:00  | Designer 2 | Designer 2 | Designer 2 acquired the operation authority and explained the current situation of the target city.
03  | 3:45  | Designer 2 | Designer 2 | Designer 2 explained the target building site.
04* | 4:05  | Designer 2 | Designer 2 | On Designer 1's request, Designer 2 marked the target building site using the annotation function. The dimensions of the site and the status of the surrounding terrain were confirmed.
05  | 5:45  | Designer 2 | Designer 1 | The construction conditions were confirmed.
06  | 6:45  | Designer 2 | Designer 1 | On Designer 1's request, Designer 2 operated VR to check access from the railway station and views of the building site.
07  | 9:50  | Designer 2 | Designer 1 | The buildable construction volume was confirmed.
08  | 12:20 | Designer 2 | Designer 1 | The buildable area per floor was confirmed. An entrance to the rental housing and a store were planned on the first floor. Rental housing was planned from the second to the 7th floor.
09  | 14:00 | Designer 1 | Designer 1 | The operation authority was changed to Designer 1.
10* | 14:25 | Designer 1 | Designer 1 | Using the annotation function, from a bird's-eye view of the site, Designer 1 sketched the planar shape of the first floor of the building.
11* | 15:05 | Designer 1 | Designer 1 | Using the annotation function, from a bird's-eye view of the site, Designer 1 sketched the common areas of the first floor level (plan 1). A concept of plan 1 was presented.
12* | 16:05 | Designer 1 | Designer 1 | Designer 1 sketched the common areas of the first floor level (plan 2). A concept of plan 2 was presented.
13* | 18:30 | Designer 1 | Designer 1 | From a bird's-eye view closer to the building site, using the annotation function, Designer 1 sketched the volume of the planned building.
14  | 21:55 | Designer 2 | Designer 2 | The operation authority was changed to Designer 2. The scenery seen from the window of the planned building was reviewed.
15  | 27:20 | Designer 2 | Designer 1 | The content of the next meeting was confirmed.
16  | 27:50 | Designer 2 |            | The meeting ended.
The three designers familiarised themselves with the conditions and the
present situation of the site using fly-through and walk-through operations
in the 3D virtual space of the cloud-VR. Designer 1 examined the building
volume to determine the design conditions from the building coverage and
floor area ratios. As a result, it was decided that a seven-storey building
could be built.
17. 4.1. Results: Design Process DAY 1
Experimental scenes (screenshots of annotation use): Scene 10, Scene 12, Scene 13
18. 4.1. Results: Between DAY 1 and DAY 2
Designer 1 created the drawing for the schematic design phase based on the
initial sketches made on DAY 1. Then, Designer 2 created a 3D virtual model
of the building using SketchUp and imported it into the cloud-VR server.
19. 4.1. Results: Design Process DAY 2
4. Results and Discussion
(* marks use of the annotation function)
ID  | Time (m:s) | Cloud-VR operation | Main speaker | Typical conversational content
01  | 0:00  | Designer 2 |            | The purpose of the design meeting was explained.
02  | 0:30  | Designer 2 | Designer 2 | Designer 2 acquired the operation authority and displayed, in the 3D virtual space, the 3D building model created on the basis of the DAY 1 meeting.
03  | 1:03  | Designer 1 | Designer 1 | The operation authority was changed to Designer 1.
04* | 1:33  | Designer 1 | Designer 1 | While overlaying the sketch on the 3D model, Designer 1 presented the zoning of the space using the annotation function.
05  | 4:20  | Designer 1 | Designer 1 | Designer 1 presented the concept of the space design. Both the 6th and 7th floors are designed as one dwelling unit. The forms were considered from the sky exposure plane of the front road.
06* | 7:00  | Designer 1 | Designer 1 | From a bird's-eye view closer to the building site, using the annotation function, Designer 1 explained the elongation of the windows necessary for the structure to be used as residential housing with a fire protection system.
07  | 10:55 | Designer 2 | Designer 2 | The operation authority was changed to Designer 2. After entering the building, Designer 2 moved through the interior space via a walk-through. Designers 2 and 3 reviewed the view from inside the building and through the windows.
08  | 16:00 | Designer 2 | Designer 2 | Designers 2 and 3 reviewed the view from the 5th to 7th floors and the common areas.
09  | 23:00 | Designer 2 | Designer 2 | Designers 2 and 3 reviewed the building façade from outside the building.
10* | 29:15 | Designer 1 | Designer 1 | The operation authority was changed to Designer 1. While sketching using the annotation function, Designer 1 studied the sash and balcony of the building.
11* | 33:45 | Designer 2 | Designer 2 | The operation authority was changed to Designer 2. Designer 2 proposed the façade design.
12* | 35:15 | Designer 1 | Designer 1 | The operation authority was changed to Designer 1. While sketching using the annotation function, Designer 1 studied the building façade.
13  | 36:50 | Designer 2 |            | The meeting ended.
A more detailed design examination was carried out.
20. 4.2. Discussion
Through the collaborative design work over two days, the synchronous and
remote cloud-VR meetings with the freehand sketching function were completed
as we expected. The annotation function was used effectively when Designer 1
drew the zone shapes of the space composition, the volume shape of the
planned building, etc. in the schematic design phase.
Using the annotation function, a designer can draw directly by overlaying a
sketch on the 3D virtual space. Design activity has traditionally been
carried out only in the imagination of the designer. Owing to the annotation
function, the design participants could share a concrete design image and
study the design interactively.
On the other hand, in actual design work, it is hard for a designer to study
a design only on the screen of a VR perspective view. For an accurate
understanding of scale, orthographic drawings are also required.
21. 4.2. Discussion
Technical problems with the annotation function were found:
When a designer wants to draw sketches, the operation authority must be
passed on by the designer who previously held it. To pass the operation
authority, the annotation function must first be terminated after saving.
During the meeting, this operation interrupted the designers' conversation
and thinking.
While a designer was drawing a sketch with the annotation function, the
viewpoint of the 3D virtual space could not be moved. In the experiment, the
designer who was sketching requested a function to zoom in and out on the
design object.
If the drawn 2D sketch were converted into a 3D model automatically, a
quicker study from various viewpoints would be possible.
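Under the simplest assumption, the suggested 2D-to-3D conversion could be a straight extrusion of a sketched footprint into a prism. The function below is a hypothetical illustration of that idea, not a feature of the cloud-VR system.

```python
# Hypothetical sketch of the proposed future feature: automatically turning
# a closed 2D footprint drawn with the annotation function into a 3D volume
# by straight extrusion. Not part of the actual cloud-VR system.
from typing import List, Tuple


def extrude(footprint: List[Tuple[float, float]], height: float):
    """Return the prism vertices for a closed 2D footprint on the ground plane."""
    base = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    return base + top


# A site-sized rectangular footprint extruded to seven 3 m storeys:
volume = extrude([(0, 0), (17.6, 0), (17.6, 6.8), (0, 6.8)], 7 * 3.0)
print(len(volume))  # 8 vertices: 4 on the ground, 4 on the roof
```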
22. Contents
1. Introduction
2. State of the Art
3. Cloud Computing Type VR and Experimental Plan
1. Annotation Function of Cloud-VR
2. Experimental Plan
4. Results and Discussion
1. Results
2. Discussion
5. Conclusion
23. 5. Conclusion
This research investigated the possibilities of synchronously distributed
cloud-VR meetings in an architectural design process.
An experiment of collaborative design work at the early stage of a housing
renovation project was executed. The synchronously distributed cloud-VR
meetings with the freehand sketching function were completed by three
designers in two days. The proposed system for sharing a 3D virtual space
(viewpoints, plans, sketches, and other information) synchronously and
remotely was examined.
The annotation function was used effectively when the designers drew the
zone shapes of the space composition, the volume shape of the planned
building, and so on.
Through the experiment, some problems of the proposed design environment
and the annotation function were clarified. Future work should attempt to
solve these problems.
24. Acknowledgements and References
References
Dorta, T., Kalay, Y., Lesage, A. and Perez, E.: 2011, Comparing Immersion in Remote and Local Collaborative Ideation through Sketches: A Case Study, CAAD Futures 2011, 25-39.
Kvan, T.: 2000, Collaborative design: what is it?, Automation in Construction, 9(4), 409-415.
Maher, M. L. and Simoff, S.: 1999, Variations on a Virtual Design Studio, Proceedings of the Fourth International Workshop on CSCW in Design, 159-165.
Matsumoto, Y., et al.: 2006, Supporting Process Guidance for Collaborative Design Learning on the Web: Development of "Plan-Do-See cycle" based Design Pinup Board, CAADRIA 2006, 72-80.
Shen, Z. and Kawakami, M.: 2010, An online visualization tool for Internet-based local townscape design, Computers, Environment and Urban Systems, 34(2), 104-116.
Sun, L., et al.: 2013, A Synchronous Distributed VR Meeting with Annotation and Discussion Functions, CAADRIA 2013, 447-456.
Wojtowicz, J.: 1994, Virtual Design Studio, Hong Kong University Press, Hong Kong.
Acknowledgements
We would like to thank FORUM8 Co., Ltd. for their technical support.