BAHAGIAN A
PART A
PROJECT PROGRESS REPORT
Tarikh Pendaftaran Pertama
First Date of Registration
8 September 2015
Tarikh Tamat
Date of Conclusion
25 January 2017
BAHAGIAN B
PART B
MAKLUMAT KURSUS
COURSE INFORMATION
Jumlah Pengecualian Kredit / Kursus Yang Diluluskan Oleh Universiti (jika berkaitan) :
Total credit exemption / courses approved by the University (if applicable)
[none]
Senarai Kursus Yang Didaftar Semester Ini
List of Courses Registered for This Semester
Nama Kursus / Course | Kod / Code | Kredit / Credit
1) PENYELIDIKAN (RESEARCH) | KEE10100 | 0
2) KAEDAH PENYELIDIKAN (RESEARCH METHODOLOGY) | KEE10200 | 0
BAHAGIAN C
PART C
MAKLUMAT PENYELIDIKAN
RESEARCH INFORMATION
Tajuk
Title
FPGA-based Implementation of Safe Trajectory Estimation for Unmanned Vehicles using Photogrammetry Navigation Techniques
Latar Belakang
Background
In the last five years, we have witnessed the worldwide emergence of a new class of robots: the unmanned vehicles, from drones (i.e., Unmanned Aerial Vehicles, UAVs), remotely operated underwater vehicles (ROVs) and inflatable dirigible balloons (airships), to driverless cars and all the amateur and hobbyist gadgets in between, like the popular quadcopters.
They are now more affordable than ever, and are slowly but surely populating our roads, waters and skies, to the point that they will become ubiquitous elements of traffic and logistics. Yet to this day, these vehicles are mostly guided remotely by on-ground human operators, who are prone to error and constrained by time.
In this study, we want to take the application of unmanned vehicles to a new level by sparing them the need for a human operator and making them fully autonomous. This will be possible by harnessing the power of two computer vision methods that are essential parts of photogrammetry technology: stereo vision depth and structure from motion (SfM). Our contribution will allow the unmanned vehicle to be aware of the dangers and obstructions that cross its path, without any human intervention.
Penyataan Masalah
Problem Statements
The main issue daunting engineers is how to guide these robots through unfamiliar zones while saving working hours and stress for ground operators. For UAVs, the common practice is to pilot the unmanned vehicle from a distance, where an operator has a bird's-eye view through the camera(s) fixed on the unmanned vehicle. But this solution makes the UAV dependent on the human side, who can be prone to error, misjudgement and incompetence, and who does not have a full grasp of the actual flight or driving conditions. Besides that, human-operated UAVs have no autonomy of decision, and thus cannot quickly react to and counter abrupt events, which occur very often during navigation.
Moreover, these teleguided unmanned vehicles are exposed to serious technical limitations and threats:
(a) They can be stranded and cut off from communication in GPS-denied areas like tunnels and undergrounds, or when encircled by concrete walls; and
(b) In the case of a security breach of their transmission protocol, their communication with ground control can be compromised by malicious signal hijacking, or by non-deliberate signal interference.
The proposed solution to this problem of navigation autonomy resides in the design of an intelligent trajectory estimation system. This system will predict and trace a safe itinerary (i.e., a planned path) clear of all sorts of obstacles. The unmanned vehicle will then only have to follow the coordinates of this itinerary until it reaches its given destination. The itinerary must continuously adapt itself to real-time changes in the environment, traffic movement, and the occurrence of unexpected obstacles, especially moving ones, as in Figure 1.
Figure 1: A representation of the proposed UAV obstacle-aware system being put in the field
In Figure 1, it is clear that the blue trajectory is the shortest, but it crosses many obstacles. The FPGA, fed by images from the two on-board cameras, will determine the red itinerary intelligently, so as to get around these obstacles closely and safely without making the new trajectory too exhausting in time and resources, which matters especially for power-conscious vehicles like the Micro Aerial Vehicle (MAV) in this example.
A highly promising method for making such a trajectory possible is Stereo Vision Depth, which will give the unmanned vehicle the ability to estimate the relative distance (i.e., a depth map) separating it from the different objects surrounding it. We would then write an FPGA routine that continuously refers to that depth map in order to determine where the unmanned vehicle is located with respect to surrounding objects. It then figures out which obstacles are in its path and computes how much it has to steer to avoid them.
But the exclusive novelty that we are bringing in our research is the incorporation of a technique commonly termed Structure from Motion, abbreviated "SfM". This set of computer vision algorithms constructs 3-D models of landscapes with considerable accuracy by analyzing only the 2-D views captured by a single camera in motion. The primary function of SfM in our proposed system is to execute the Simultaneous Localization and Mapping (SLAM) that will ensure our unmanned vehicle stays on its safe trajectory heading toward its predetermined destination, while at the same time suggesting the optimal route to reach that destination.
Matlamat
Aim
Our solution's aim is therefore to make these two methods (i.e., stereo vision depth and SfM) coexist on the same System-on-Chip (SoC), and to apply them in real autonomous navigation situations (i.e., to test them in the field), so that one method can mitigate the error that the other may have propagated, in the manner of a hybrid system. This implies building a field programmable gate array (FPGA) prototype and mounting it on a binocular mid-range drone for validation and deployment. MATLAB has been selected as our abstraction tool, and Vivado HLS will be the middleware between the HDL and the FPGA. This research is expected to yield an FPGA design capable of piloting an unmanned vehicle by sensing the dangers along its path.
Skop
Scope
(a) Range of Hardware Used
The specific MAV model we will use is based on the PIXHAWK Cheetah that was developed at the ETH Zürich.
The PIXHAWK Cheetah is a Quadcopter (4 rotors). On-board this MAV we can find:
1- A custom-designed microprocessor board that serves as:
a- An Inertial Measurement Unit (IMU), and
b- A low-level flight controller, for steering the MAV to the desired target waypoint. The low-level flight control software is built around an unmodified PID controller provided by the PIXHAWK project, which consists of separate pose and attitude controllers.
2- A custom-made carrier board for a COM-Express single board computer. That is where a single board computer or an FPGA SoC can be
fitted.
(b) Hardware and FPGA-based Configuration
1- Overview of the system implementation:
The hardware designs of both custom circuit boards, as well as the low-level flight control software, have been made available as open source
and can be obtained from the PIXHAWK website.
With this software, it is possible to control the MAV if its current pose (position and orientation) is known. While it is the task of the FPGA to determine the MAV's pose using our photogrammetric algorithms, the attitude is estimated using the inertial sensors available on the IMU (microprocessor board). For steering the MAV, the FPGA can transmit a desired target position to the control software, which then attempts to approach it. It is thus possible to implement autonomous flight options for this MAV by letting the FPGA generate a series of desired target positions. The system design of the presented quad-rotor MAV is visualized in Figure 6.
Our MAV will be equipped with two (2) greyscale (monochrome) USB cameras, operated at a resolution of 640 x 480 (VGA) and 30 Hz (frames per second).
The cameras are mounted in a forward-facing stereo configuration with a baseline of 11 cm. This requirement is fulfilled by the USB Firefly MV cameras from Point Grey®.
2- Development tools:
From the early stages of this research, we have set MATLAB as our simulation and abstraction tool. It will occasionally be complemented by Simulink® so that the process described above can be visualized in a parallel fashion.
To assist us in HDL coding for imagery, several fundamental toolboxes will be installed in the MATLAB environment.
The synthesis will be made on an FPGA of the Xilinx Zynq®-7000 All Programmable SoC family (device model 7Z020). This particular SoC has been very popular, and hence several independent companies have produced development boards around the Zynq-7000. The full-options Zedboard is one of these boards. It is even cheaper than the one proposed by Xilinx, and it was offered to us at a special student price after we confirmed our status as members of a recognized academic institution, using the university email.
This board comes with a licence for the Xilinx Vivado® Design Edition that is locked to the device model of the Zedboard (i.e., 7Z020). This design suite will take the abstracted algorithms directly from MATLAB and, with some tuning and trial-and-error checks, will be able to port the assembled code to the FPGA, while providing the power and performance metrics that we need for our future results publications.
To validate the effectiveness of our proposed solution, we will attempt a few experimental missions using a commercial UAV, by setting a flight scenario where the UAV has to:
(1) Take off;
(2) Find its own way to reach the destination point; then
(3) Execute a safe landing.
This challenge has to be completed autonomously before we can declare that our system has met its immediate scope.
This empirical undertaking also holds for the larger scope, namely for the other types of unmanned vehicles. UAVs surpass them all by possessing six (6) Degrees of Freedom (DOF), the largest possible for any vehicle, and by being highly manoeuvrable. This automatically gives our system retro-support for vehicles with fewer than six (6) DOF, except for some adjustments to be performed on the Linear-Quadratic Regulator (LQR) controller that governs the UAV's actuators.
So by fulfilling this validation benchmark, the bigger scope of autonomously driving ground as well as underwater vehicles would be encompassed too.
Objektif
Objectives
The prospects of autonomous vision-guided vehicles are vast, and there is a real possibility of tackling the problem using the hybrid approach of Stereo Vision backed by SfM. In light of these two factors, we have set this proposed project to aim at the following targets:
(1) To examine ways to significantly improve on the existing stereoscopic vision and structure from motion algorithms;
(2) To propose a new stereoscopic vision and structure from motion architecture, emulated and bundled on FPGA for trajectory estimation; and
(3) To evaluate the proposed system both from the perspective of hardware implementation (in terms of area footprint, power consumption and processing speed) and in terms of its viability as an effective and reliable navigational system.
Kajian Literatur
Literature Review
(a) Research Background
By stereo vision we refer to all cases where the same scene is observed by two cameras at different viewing positions. Hence, each camera observes a different projection of the scene, which allows us to perform inference on the scene's geometry. The obvious example of this mechanism is the human visual system. Our eyes are laterally displaced, which is why each observes a slightly different view of the current scene. This allows our brain to infer the depth of the scene in view, which is commonly referred to as stereopsis. Although it was long believed that we can only sense scene depth for distances up to a few meters, Palmisano et al. [1] recently showed that stereo vision can support our depth perception abilities even at larger distances.
Using two cameras and methods from computer vision, it is possible to mimic the human ability of depth perception through stereo vision. An
introduction to this field has been provided by Klette [2]. Depth perception is possible for arbitrary camera configurations, if the cameras share a
sufficiently large common field of view. We assume that we have two idealized pinhole-type cameras C1 and C2 with projection centers O1 and O2, as depicted in Figure 2. The distance between the two projection centers is the baseline distance b. Both cameras observe the same point p, which is projected as p1 in the image plane belonging to camera C1. We are now interested in finding the point p2, which is the projection of the same point p on the image plane of camera C2. In the literature, this task is known as the stereo correspondence problem, and its solution through matching p1 to possible points in the image plane of C2 is called stereo matching.
Figure 2: Example of the epipolar geometry
In order to implement stereo vision depth awareness on unmanned vehicles, we first have to solve this stereo matching problem, which comes down to the question of how to make the FPGA able to tell that two points in two images of the same scene belong to the same scene feature. To achieve this result, we have to go through three (3) main stages. We will elaborate on each of them in the following sections, while pointing to the limitations observed and how we intend to tackle them in our proposed research.
(1) Image rectification:
The common approach to stereo vision includes a preliminary image rectification step, during which distortions are corrected. The resulting image after rectification should match the image received from an ideal pinhole camera. To be able to perform such a correction, we first require an accurate model of the image distortions. The distortion model most frequently used for this task today is the one introduced by Brown [3]. Using Brown's distortion model, with radial coefficients k1, k2 and tangential coefficients p1, p2, and writing r² = u² + v², we are able to calculate the undistorted image location (ũ, ṽ) that corresponds to the image location (u, v) in the distorted image:

ũ = u(1 + k1·r² + k2·r⁴) + 2·p1·u·v + p2·(r² + 2u²) ........ (1)
ṽ = v(1 + k1·r² + k2·r⁴) + p1·(r² + 2v²) + 2·p2·u·v ........ (2)

Existing implementations of the discussed algorithms can be found in the OpenCV library (Itseez, [4]) or the MATLAB camera calibration toolbox (Bouguet, [5]), and that is how we plan to resolve this question of image rectification.
(2) Sparse vision method:
Despite the groundbreaking work of [6][7][8][9][10] and [11], there is a gap regarding the speed performance of their systems. Our examination of their work revealed that they employed dense stereo matching methods, which search for matching points across the entire input stereo images, thus increasing the computational load of their systems. One way to greatly speed up stereo matching is not to process all pixel locations of the input images. While the commonly used dense approaches find a disparity label for almost all pixels in the reference image (i.e., usually the left image), sparse methods like those in [12] and [13] only process a small set of salient image features. An example of the results received with a sparse compared to a dense stereo matching method can be found in Figures 3a and 3b.
Figure 3: (a) Sparse stereo matching results received with the presented method and (b) dense results received from a belief propagation based
algorithm. The color scale corresponds to the disparity in pixels. [13]
The sparse example shown is precisely what we intend to apply in this research: it only finds disparity labels for a set of selected corner features. The color displayed for these features corresponds to the magnitude of the found disparity, with blue hues representing small and red hues representing large disparity values. The method used for the dense example is the gradient-based belief propagation algorithm that was
employed by Schauwecker and Klette [14] and Schauwecker et al. [15]. The results of this algorithm are dense disparity maps that assign a
disparity label to all pixels in the left input image.
Although sparse methods provide much less information than common dense approaches, this information can be sufficient for a set of applications, including UAV trajectory estimation and obstacle avoidance as proposed here in our research.
(3) Feature detection:
In computer vision, a feature detector is an algorithm that selects a set of image points from a given input image. These points are chosen
according to detector-specific saliency criteria. A good feature detector is expected to always select the same points when presented with images
from the same scene. This should also be the case if the viewing position is changed, the camera is rotated or the illumination conditions are
varied. How well a feature detector is able to redetect the same points is measured as repeatability, for which different definitions have been postulated by Schmid et al. [16] and Gauglitz et al. [17].
Feature detectors are often used in conjunction with feature descriptors. These methods aim at providing a robust identification of the detected image features, which facilitates their recognition in case they are re-observed. In our case, we are mainly interested in feature detection and
less in feature description. A discussion of many existing methods in both fields can be found in the extensive survey published by Tuytelaars and
Mikolajczyk [18]. Furthermore, a thorough evaluation of several of these methods was published by Gauglitz et al. [17].
Various existing feature detectors extract image corners. Corners serve well as image features as they can be easily identified and their position
can generally be located with good accuracy. Furthermore, image corners can still be identified as such if the image is rotated, or the scale or
scene illumination are changed. Hence, a reliable corner detector can provide features with high repeatability.
Figure 4: (a) Input image, and features from (b) the Harris detector, (c) FAST, and (d) SURF. [13]
One less recent but still popular method for corner detection is the Harris detector (Harris and Stephens, 1988). An example for the performance of
this method can be seen in Figure 4b. A computationally less expensive method for detecting image corners is the Smallest Univalue Segment
Assimilating Nucleus (SUSAN) detector that was proposed by Smith and Brady [19].
A more advanced method, similar to the SUSAN detector, is Features from Accelerated Segment Test (FAST), for which an example is shown in Figure 4c. One of the most influential blob detection methods is the Scale Invariant Feature Transform (SIFT) by Lowe [20]. For this method, two Gaussian convolutions with different values of σ are computed for the input image, and their difference (the Difference of Gaussians, DoG) is searched for extrema that indicate feature locations.
A more time-efficient blob detector that was inspired by SIFT, is Speeded-Up Robust Features (SURF) by Bay et al. [21], for which an example is
shown in Figure 4d. Instead of using a DoG for detecting feature locations, Bay et al. rely on the determinant of the Hessian matrix, which is known
from the Hessian-Laplace detector (Mikolajczyk and Schmid [22]). Both SIFT and SURF exhibit very high repeatability, as shown by Gauglitz et al. [17]. However, Gauglitz et al. also demonstrated that both methods require significant computation time.
In this research we are going to address this gap as well, by designing a slightly modified architecture of the FAST corner detection algorithm.
(4) Modeling the overall framework:
These three key elements of our trajectory estimation system will be supplemented by other filters, snippets and SfM modules, namely the Simultaneous Localization and Mapping (SLAM), as depicted in Figure 5.
Figure 5: Processing pipeline of our proposed FPGA implementation (low-level process: feature detection, stereo matching; high-level process: local SLAM, EKF sensor fusion; yielding the pose estimation)
The overall architecture will be synthesized on reconfigurable hardware, consisting of field programmable gate arrays (FPGAs) [23], [24]. These platforms promise to be adequate building blocks for sophisticated devices at affordable cost. They offer heavy parallelism capabilities and considerable gate counts, and come in low-power packages [25], [26], [27], [28].
Based on the limitations of existing work, this project is concerned with an efficient implementation of trajectory estimation for autonomous navigation of unmanned vehicles, with a special interest in aerial ones. Figure 6 shows the anatomy of the projected overall system to be implemented. It is our aspired target to realize such an architecture and put it into application in different fields, such as aerial imaging, parcel shipping, search & reconnaissance missions and many more. Moreover, this project will provide us with a locally-built solution that is not bound to foreign royalties or at risk of patent infringement claims.
Figure 6: System implementation of the processing at the MAV physical level (the FPGA SoC reads the greyscale cameras, set at an 11 cm baseline, over USB, and exchanges pose and attitude over a serial link with the PIXHAWK Cheetah microprocessor board, whose PID controller and IMU drive the four quadrotor motor controllers over an I2C bus)
(b) References
[1] S. Palmisano, B. Gillam, D. G. Govan, R. S. Allison, and J. M. Harris, "Stereoscopic perception of real depths at large distances," Journal
of Vision, vol. 10, no. 6, pp. 19–19, Jun. 2010.
[2] R. Klette, Concise Computer Vision. London: Springer, 2014.
[3] D. C. Brown, "Decentering distortion of lenses," Photometric Engineering, vol. 32, no. 3, pp. 444–462, 1966.
[4] Itseez, "OpenCV," 2015. [Online]. Available: http://opencv.org. Accessed: Apr. 2, 2016.
[5] J. Y. Bouguet, "Camera Calibration Toolbox for MATLAB," 2013. [Online]. Available: http://vision.caltech.edu/. Accessed: Mar. 3, 2016.
[6] M. Achtelik, T. Zhang, K. Kuhnlenz, and M. Buss, "Visual tracking and control of a quadcopter using a stereo camera system and inertial
sensors," IEEE, 2012, pp. 2863–2869.
[7] D. Pebrianti, F. Kendoul, S. Azrad, W. Wang, and K. Nonami, "Autonomous hovering and landing of a quad-rotor micro aerial vehicle by means of on ground stereo vision system," Journal of System Design and Dynamics, vol. 4, no. 2, pp. 269–284, 2010.
[8] L. R. García Carrillo, A. E. Dzul López, R. Lozano, and C. Pégard, "Combining stereo vision and inertial navigation system for a Quad-
Rotor UAV," Journal of Intelligent & Robotic Systems, vol. 65, no. 1-4, pp. 373–387, Aug. 2011.
[9] T. Tomic et al., "Toward a fully autonomous UAV: Research platform for indoor and outdoor urban search and rescue," IEEE Robotics &
Automation Magazine, vol. 19, no. 3, pp. 46–56, Sep. 2012.
[10] A. Harmat, I. Sharf, and M. Trentini, "Parallel tracking and mapping with multiple cameras on an unmanned aerial vehicle," in Intelligent
Robotics and Applications. Springer Science + Business Media, 2012, pp. 421–432.
[11] M. Nieuwenhuisen, D. Droeschel, J. Schneider, D. Holz, T. Läbe, and S. Behnke, "Multimodal obstacle detection and collision avoidance
for micro aerial vehicles," IEEE, pp. 12–7.
[12] S. Shen, Y. Mulgaonkar, N. Michael, and V. Kumar, "Vision-based state estimation for autonomous rotorcraft MAVs in complex
environments," IEEE, 2010, pp. 1758–1764.
[13] K. Schauwecker and A. Zell, "On-board dual-stereo-vision for the navigation of an autonomous MAV," Journal of Intelligent & Robotic
Systems, vol. 74, no. 1-2, pp. 1–16, Oct. 2013.
[14] K. Schauwecker and R. Klette, "A comparative study of Two vertical road Modelling techniques," in Computer Vision – ACCV 2010
Workshops. Springer Science + Business Media, 2011, pp. 174–183.
[15] K. Schauwecker, S. Morales, S. Hermann, and R. Klette, "A comparative study of stereo-matching algorithms for road-modeling in the
presence of windscreen wipers," IEEE, 2009, pp. 12–7.
[16] C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of interest point detectors," International Journal of Computer Vision, vol. 37, no. 2, pp. 151–172, 2000.
[17] S. Gauglitz, T. Höllerer, and M. Turk, "Evaluation of interest point detectors and feature Descriptors for visual tracking," International
Journal of Computer Vision, vol. 94, no. 3, pp. 335–360, Mar. 2011.
[18] T. Tuytelaars and K. Mikolajczyk, "Local invariant feature detectors: A survey," Foundations and Trends® in Computer Graphics and
Vision, vol. 3, no. 3, pp. 177–280, 2007.
[19] S. M. Smith and J. M. Brady, "SUSAN - A new approach to low level image processing," International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
[20] D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. IEEE International Conference on Computer Vision, vol. 2, 1999, pp. 1150–1157.
[21] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up Robust Features," in Lecture Notes in Computer Science. Springer, 2006, pp. 404–417.
[22] K. Mikolajczyk and C. Schmid, "Indexing based on scale invariant interest points," in Proc. IEEE International Conference on Computer Vision, 2001.
[23] P. Dang, “VLSI architecture for real-time image and video processing systems," Journal of Real-Time Image Processing, vol. 1, pp. 57–
62, 2006.
[24] T. Todman, G. Constantinides, S. Wilton, O. Mencer, W. Luk, and P. Cheung, “Reconfigurable computing: architectures and design
methods," Computers and Digital Techniques, IEE Proceedings, Vol. 152, No. 2, pp. 193–207, 2005.
[25] A. Ahmad, B. Krill, A. Amira, and H. Rabah, “Efficient architectures for 3-D HWT using dynamic partial reconfiguration," Journal of
Systems Architecture, Vol. 56, No. 8, pp. 305–316, 2010.
[26] A. Ahmad, B. Krill, A. Amira, and H. Rabah, “3-D Haar wavelet transform with dynamic partial reconfiguration for 3-D medical image
compression," in Biomedical Circuits and Systems Conference, 2009. BioCAS 2009. IEEE, 2009, pp. 137–140.
[27] A. Ahmad and A. Amira, “Efficient reconfigurable architectures for 3-D medical image compression," in Field-Programmable Technology,
2009. FPT 2009. International Conference on, 2009, pp. 472–474.
[28] B. Krill, A. Ahmad, A. Amira, and H. Rabah, “An efficient FPGA-based dynamic partial reconfiguration design flow and environment for
image and signal processing IP cores," Signal Processing: Image Communication, Vol. 25, No. 5, pp. 377–387, 2010.
Metodologi
Methodology
1. Overview
In our research approach, we decided to make MATLAB our main research workhorse, instead of a direct Hardware Description Language (HDL) prototyping strategy, which has proven laborious, unreliable and rather obsolete.
This choice of MATLAB became evident after we examined our two computer vision techniques (i.e., Stereo Vision Depth and SfM) and concluded that their bulk consists of heavy mathematical processes and algorithms.
The two processes can be intertwined very efficiently in Simulink, where we can graphically abstract, visualize and reconfigure both system chains, then hand the task over to MATLAB for further research.
We intend to build our prototype SoC around a Xilinx mid-range FPGA. These are known to have better support for MATLAB integration and advanced computer vision readiness, especially since the release of the new Xilinx design suite, renamed "Vivado", driven by the power of the Xilinx development boards.
2. Project Flow Graph
In a step-wise outline, we are planning the following research methodology path, as shown in Figure 7.
Figure 7: Project flow graph
The stages of this flow, with its validation loop, are:
(1) Identification of the major components of the Safe Trajectory Estimation block;
(2) Dissection: break the main algorithms into discrete fragments (to be compatible with RTL design);
(3) Virtualization: write each fragment in MATLAB (plain code) / Simulink (GUI);
(4) Validation: verify the execution of each algorithm using MATLAB / Simulink;
(5) Simulation: run a test bench for the critical situations encountered; if a test does not pass, return to the earlier stages;
(6) Design & Synthesis: define the way to export into the FPGA fabric;
(7) Benchmarking: perform measurements and comparisons in order to gauge the efficiency (metrics) of our SW / HW solutions;
(8) Assemble & Deploy the resulting design into one single deliverable SoC prototype.
Hasil Penyelidikan
Research Outcomes
1. Jangkaan Hasil Kajian
Expected Research Outcomes
The autonomous navigation of unmanned vehicles in general is the end result that this research aspires to achieve. Once it reaches completion, this research project is also expected to lead to the following results and deliverables:
(1) A full-fledged System-on-Chip for safe trajectory estimation for the autonomous driving of unmanned vehicles.
(2) An advanced on-board architecture to navigate unmanned vehicles without human intervention.
(3) Optimized execution of the Stereo Vision Depth and SfM processes on a hardware platform.
(4) A definition of the technical limitations of the Stereo Vision Depth and SfM algorithms on hardware platforms.
(5) Elaboration of novel techniques to identify surrounding objects using computer vision.
(6) A description of the system taxonomy, with a set of recommendations on best design practices for subsequent works.
2. Hasil Kajian Terkini
Latest Research Outcomes
As indicated in the methodology chart, the first key task in our research was the identification of the different components of the system and how they correlate. This is one of the foundational steps of any design flow, and it required drafting the block diagram that will form the backbone of our overall system.
Based on the theory and literature experience accumulated during the first period of this research, we came up with an all-encompassing block diagram that is workable and exhibits high coherence between its two main modules: Stereo Vision Depth and SfM.
The configuration in Figure 8 intertwines both core modules with the peripherals of the unmanned vehicle and with the other essential mathematical and control processes. Figure 8 shows how this block diagram has been designed.
Figure 8: Block diagram of the overall design (on the FPGA, the two cameras feed Stereo Vision Depth and Structure from Motion; an EKF fuses their data with the IMU; the SLAM planner and obstacle avoidance drive the LQR controller, which commands the UAV's onboard controller)
As discussed previously, we have set up two cameras in a stereo configuration with an 11 cm baseline. To begin using MATLAB in the process of recovering depth from camera images, we developed the following code, which computes and compares two or more views of the same scene.
For experimentation purposes, we took two still pictures of the VASYD Lab at UTHM (Malaysia) using our camera. The pictures were of the same scene, but taken from two points of view 11 cm apart along the horizontal, just as if two cameras were mounted in a stereo configuration. The output of this experiment is a 3-D point cloud, where each 3-D point corresponds to a pixel in one of the images.
Stereo image rectification projects images onto a common image plane in such a way that the corresponding points have the same row coordinates.
This process is useful for stereo vision, because the 2-D stereo correspondence problem reduces to a 1-D problem:
[Stereo image pair: VASYD_left.jpg and VASYD_right.jpg]
(1) Load the stereoParameters object, which is the result of calibrating the camera using either the stereoCameraCalibrator app or the
estimateCameraParameters function:
% Load the stereoParameters object.
load('VASYDStereoParams.mat');
% Visualize camera extrinsics.
showExtrinsics(stereoParams);
(2) Create Video File Readers and the Video Player:
Create System Objects for reading and displaying the video.
videoFileLeft = 'VASYD_left.avi';
videoFileRight = 'VASYD_right.avi';
readerLeft = vision.VideoFileReader(videoFileLeft, 'VideoOutputDataType', 'uint8');
readerRight = vision.VideoFileReader(videoFileRight, 'VideoOutputDataType', 'uint8');
player = vision.DeployableVideoPlayer('Location', [20, 400]);
(3) Read and Rectify Video Frames:
The frames from the left and the right cameras must be rectified in order to compute disparity and reconstruct the 3-D scene. Rectified images have
horizontal epipolar lines, and are row-aligned. This simplifies the computation of disparity by reducing the search space for matching points to one
dimension. Rectified images can also be combined into an anaglyph, which can be viewed using the stereo red-cyan glasses to see the 3-D effect.
frameLeft = readerLeft.step();
frameRight = readerRight.step();
[frameLeftRect, frameRightRect] = rectifyStereoImages(frameLeft, frameRight, stereoParams);
figure;
imshow(stereoAnaglyph(frameLeftRect, frameRightRect));
title('Rectified Video Frames');
(4) Compute Disparity:
In rectified stereo images, any pair of corresponding points is located on the same pixel row. For each pixel in the left image, we compute the distance to the corresponding pixel in the right image. This distance is called the disparity, and it is inversely proportional to the distance of the corresponding world point from the camera.
frameLeftGray = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);
disparityMap = disparity(frameLeftGray, frameRightGray);
figure;
imshow(disparityMap, [0, 64]);
title('Disparity Map');
colormap jet
colorbar
(5) Reconstruct the 3-D Scene:
Reconstruct the 3-D world coordinates of points corresponding to each pixel from the disparity map.
points3D = reconstructScene(disparityMap, stereoParams);
% Convert to meters and create a pointCloud object
points3D = points3D ./ 1000;
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
% Create a streaming point cloud viewer
player3D = pcplayer([-3, 3], [-3, 3], [0, 8], 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');
% Visualize the point cloud
view(player3D, ptCloud);
(6) Process the Rest of the Video:
Apply the steps described above to every frame of the video. This loop becomes relevant once we use a frame grabber, which is not the case in this experiment.
while ~isDone(readerLeft) && ~isDone(readerRight)
% Read the frames.
frameLeft = readerLeft.step();
frameRight = readerRight.step();
% Rectify the frames.
[frameLeftRect, frameRightRect] = rectifyStereoImages(frameLeft, frameRight, stereoParams);
% Convert to grayscale.
frameLeftGray = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);
% Compute disparity.
disparityMap = disparity(frameLeftGray, frameRightGray);
% Reconstruct 3-D scene.
points3D = reconstructScene(disparityMap, stereoParams);
points3D = points3D ./ 1000;
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
view(player3D, ptCloud);
% Display the rectified left frame.
step(player, frameLeftRect);
end
% Clean up.
reset(readerLeft);
reset(readerRight);
release(player);
Kemajuan Penyelidikan
Research Progress
BAB (CHAPTER) | PERATUS SIAP (PERCENTAGE OF COMPLETION) | CATATAN (REMARKS)
INTRODUCTION | 50 % | The final thesis introduction will take most of its source from this original introduction, plus some final editing.
LITERATURE REVIEW | 70 % | The literature review is an ongoing process, and we expect this chapter to be edited frequently with every advance around photogrammetry.
RESEARCH METHODOLOGY | 40 % | Our methodology flowchart has been set up, and the algorithm conception and virtualization have started, in parallel with their evaluation.
RESULTS & ANALYSIS | 20 % | The results & analysis progress so far takes the form of block diagram identification and a few MATLAB experiments.
CONCLUSION | 10 % |
Nota: Sila masukkan lampiran jika ruangan tidak mencukupi
Note: Please use an attachment if the provided space is not enough
Masalah Yang Memberi Kesan Kepada Kemajuan Penyelidikan Dan Langkah Yang Diambil Bagi Mengatasi Masalah Tersebut.
Problems That Affect Research Progress And Remedial Actions Taken To Resolve Them
The research has just rolled out, but this part is by far the most decisive, because here we define our direction for the next year. The problem, in that sense, was to come to a decision about the right approach to use for our safe trajectory estimation project. The whole project can fall apart if we do not pick carefully the techniques and components to be used, and if we give in to over-ambitious, wishful thinking.
To be sure we were on the right path, we reviewed tens of similar projects done around the world, and determined where they stalled in the design process and what errors they committed, so that we do not repeat them in ours.
Their recommendations regarding the importance of stereo image stability, offline localization and power consumption were decisive in our choice of the Structure from Motion technique as a way to mitigate the irregularities that may be produced by the Stereo Vision Depth technique working alone. For the issue of power consumption, we were tempted to try a novel configuration of Compressed Sensing (CS) using Orthogonal Matching Pursuit, which researchers have successfully integrated on FPGA.
A trivial problem, however, lay in the technicalities of my MATLAB version, as I had to find and download a handful of missing libraries and a particular toolbox that is essential for the Stereo Vision Depth technique. I also had to make sure there was no litigious matter regarding the patent rights of the authors of those libraries, because using them in our FPGA prototype without the consent of their intellectual property owners could entail royalties or compensation.
The same care was taken with the standard stereo images used to calibrate and benchmark our Stereo Vision Depth algorithm. These images are recognized by the computer vision research community as ideal for testing, gauging and comparing stereo vision modules, but their usage must be free and legal before we can include them in our future test bench.
The problem of finding the right baseline between the two cameras was also tackled. We had to resort to nuts-and-bolts reasoning about the stereo camera configuration, tuning the distance until we obtained an acceptable, close-to-accurate distance between the camera plane and the objects under investigation in the scene.
Putting the MATLAB code together was another ordeal, because of the many issues pertaining to matrix-versus-vector multiplication. This goes back to whether a multiplication is 3-D or 2-D. It was essential to know which multiplication was which, in order to obtain a valid result and not be misled by a wrong result that would look just as plausible.
BAHAGIAN D
PART D
AKTIVITI PELAJAR
STUDENT ACTIVITIES
Pembentangan Kertas Kerja, Menghadiri Seminar, dll.
Papers presented, seminar attended, etc.
1. FKEE Hari Transformasi Minda:
- Poster Presentation: With a poster titled "Simulation & Analysis of Different DCT Techniques for Image Compression on MATLAB"
2. Publisher's Talk: Research Best Practices (Dr Wong Woei Fuh) @UTM Skudai
3. Chairman Lecture Series : "The Importance of Practical Engineer In The Industry" @Sultan Ibrahim Banquet Hall, DSI, UTHM
4. Malaysian Technical Universities Conference on Engineering & Technology 2015 (MUCET2015) @KSL JB (for attending the keynote speeches)
5. Short course : Health Monitoring of Civil Structure @UTM Skudai (Faculty of Biomedical)
6. Making HR Technology Relevant to Your Organization @Thistle Hotel JB
7. SolidWorks Innovation Day @Malacca (full-day training)
8. WIEF 2015 (11th World Islamic Economic Forum) - as a delegate representing Morocco @KLCC
9. 2nd IdeaPad (side event of WIEF 2015) - as a presenter of my PowerKasut non-credit project @KLCC
10. Impact & Insights Dialogue: Making an Impact on Education by Hong Leong Foundation @KL
11. 1 AES (ASEAN Entrepreneurship Summit 2015) - in conjunction with ASEAN Summit in Malaysia @KL
12. Social Entrepreneurship Bootcamp by MaGIC (a side event of 1AES) - a 3-day workshop where we transformed a social idea into a business model.
13. CEO Faculty Programme - A talk by Dato' Wei Chuan Beng, Founder of RedONE : The Journey to Entrepreneurship @UTHM
14. Week of LabVIEW Webcast Series (5 sessions of 30 mins each) @National Instruments ASEAN
15. Talk by Prof Simone Hochgrab (Cambridge Univ.): Advances in reacting flow measurements @UTM Skudai
16. Workshop: Characteristic of A Good Literature Review by Prof Abu Bakar bin Mohammad @UTM Skudai (FKE)
17. UTHM Chairman Lecture Series: Building Info Modelling in Facilities Management Perspectives (By Director of Microcorp Technology Sdn Bhd)
18. Transformasi Minda Mahasiswa: Course on Design Technique for 3-D Printing @UTHM (FKEE)
19. How to Finish your Master or PhD Without Correction @Seminar Room, ORICC, UTHM
20. 2016 Offshore Technology Conference Asia (OTC Asia) @KLCC (as a visitor)
21. Wacana Ilmu Siri 1: Understanding Scopus, Google Scholar And ISI Web Of Science @UTHM
22. Seminar Pemeriksa Luar FKEE: Technopreneurship - From Student Project to Startup to Public Listed Company (by Prof Ahmad Fadzli Hani)
23. 11th ITU Symposium on ICT, Environment and Climate Change @Renaissance Hotel Kuala Lumpur
24. Datacloud South East Asia Forum @Zon Regency Hotel, JB
25. Talk by Ir. Shaik Abdul Wahab bin Dato Hj Rahim Director of GEA Sdn Bhd: Site Investigation @UTHM
26. International Seminar on Power and Communication Engineering (iSPACE2016) by FKEE @UTHM
27. 1st FKEE PG Research Conference (1st FKEE PG ResConf) by FKEE @UTHM (Presenting an Article, a Poster, and an Oral Presentation)
Kegiatan Bukan Akademik :
Non-Academic Activities
1. Youth Trailblazers Challenge 2015 @UTM Skudai
2. ALIC (Arabic Language Intensive Course) - I taught Arabic language basics in a 6-hour day class @UTM Skudai (Faculty of Islamic Civilization)
3. Malaysians United Run 2015 - Anjuran Institut Onn Jaafar (IOJ) @KL
4. Kolej Kediaman Perwira's Festival Keamanan:
- Bengkel Bahasa Perancis (French Language Workshop) - I taught French basics in a 2-hour night class
5. ICMF 2015 (International Cultural Mega Festival) - as an organizing committee member @UTM Skudai
6. Kolej Kediaman Perwira's Aktiviti "Gembur Kasih 5.0" @KK Perwira Taman Simbiosis
7. UTHM International Cultural Evening @Sultan Ibrahim Banquet Hall, DSI, UTHM
8. FKEE Jalinan Muafakat 2.0 @Padang B (Padang Ragbi), UTHM
9. Temasya Sukan Perwira @UTHM Stadium
10. UTHM Radio: Invited 3 times to talk (3-hour slots each):
- Sharing tips on how to improve English, and my experience being abroad.
Penganugerahan / Penghargaan :
Recognitions / Awards
1. FKEE Hari Transformasi Minda:
- 2 Minutes Idea: Winner of both 1st & 2nd Place.
2. Kolej Kediaman Perwira's Festival Keamanan:
- Larian Keamanan - I finished in 4th position in this cross-country run around the KK Perwira vicinity.
3. 3 Minutes Thesis Competition 2016:
- Winner of 1st Place (Master Students Category)
4. Pidato Antarabangsa Bahasa Melayu Piala Perdana Menteri (PABM) in Putrajaya:
- Top 15 in Malaysia (International Students Category)
5. 2nd Kazan OIC Entrepreneurship Forum (International Competition) in Kazan, Republic of Tatarstan, Russia:
- Selected to pitch in front of the president of the Republic of Tatarstan.
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
SyedAbiiAzazi1
 
Heap Sort (SS).ppt FOR ENGINEERING GRADUATES, BCA, MCA, MTECH, BSC STUDENTS
Heap Sort (SS).ppt FOR ENGINEERING GRADUATES, BCA, MCA, MTECH, BSC STUDENTSHeap Sort (SS).ppt FOR ENGINEERING GRADUATES, BCA, MCA, MTECH, BSC STUDENTS
Heap Sort (SS).ppt FOR ENGINEERING GRADUATES, BCA, MCA, MTECH, BSC STUDENTS
Soumen Santra
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Teleport Manpower Consultant
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
FluxPrime1
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
Pratik Pawar
 
Gen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdfGen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdf
gdsczhcet
 

Recently uploaded (20)

NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
 
Final project report on grocery store management system..pdf
Final project report on grocery store management system..pdfFinal project report on grocery store management system..pdf
Final project report on grocery store management system..pdf
 
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
 
Fundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptxFundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptx
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
 
ML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptxML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptx
 
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
 
AP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specificAP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specific
 
block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
 
Standard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - NeometrixStandard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - Neometrix
 
Recycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part IIIRecycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part III
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
 
Heap Sort (SS).ppt FOR ENGINEERING GRADUATES, BCA, MCA, MTECH, BSC STUDENTS
Heap Sort (SS).ppt FOR ENGINEERING GRADUATES, BCA, MCA, MTECH, BSC STUDENTSHeap Sort (SS).ppt FOR ENGINEERING GRADUATES, BCA, MCA, MTECH, BSC STUDENTS
Heap Sort (SS).ppt FOR ENGINEERING GRADUATES, BCA, MCA, MTECH, BSC STUDENTS
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
 
Gen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdfGen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdf
 

6 [progress report] for this leisurely side-project I was doing in 2016

environment, traffic movement, and the occurrence of unexpected obstacles, especially the moving ones, as in Figure 1.
Figure 1: A representation of the proposed UAV obstacle-aware system being put in the field
[Figure 1 legend: Obstacles; Possible itinerary to join real itinerary; Departure point; Arrival point; Real itinerary taken; Shortest distance]

In Figure 1, it is clear that the blue trajectory is the shortest, but it crosses many obstacles. The FPGA, fed by images from the two on-board cameras, will determine the red itinerary intelligently, so as to get around these obstacles closely and safely, without making the new trajectory too costly in time and resources, especially for power-conscious vehicles like the Micro Aerial Vehicle (MAV) in this example.

A highly promising method to make such a trajectory possible is Stereo Vision Depth, which will give the unmanned vehicle the ability to appraise the relative distance (i.e. depth map) that separates it from the different objects surrounding it. We would then write an FPGA routine that continuously refers to that depth map in order to determine where the unmanned vehicle is located with respect to the surrounding objects. It then figures out what obstacles are in its path and computes how much it has to steer to avoid them; a minimal sketch of this logic is given at the end of this Scope section.

But the exclusive novelty that we are bringing in our research is the incorporation of a technique commonly termed Structure from Motion, abbreviated as "SfM". This set of computer vision algorithms constructs 3-D models of landscapes with considerable accuracy by analyzing only a single 2-D view of the scene in motion. The primary function of SfM in our proposed system is to execute the Simultaneous Localization and Mapping (SLAM) that will ensure our unmanned vehicle is always on track on its safe trajectory heading toward its predetermined destination, while at the same time suggesting the optimal route to reach that destination.

Matlamat
Aim
So our solution's aim is to make these two methods (i.e. stereo vision depth and SfM) coexist on the same System-on-Chip (SoC), and to apply them in a real autonomous navigation situation (i.e. test them in the field), so that one method can mitigate the error that the other could have propagated, in the manner of a hybrid system. This implies building a field programmable gate array (FPGA) prototype and mounting it on a binocular mid-range drone for validation and deployment. MATLAB has been selected as our abstraction tool, and Vivado HLS will be the middleware between the HDL and the FPGA. It is expected that this research will yield an FPGA design that is capable of piloting an unmanned vehicle by sensing the dangers in its path.

Skop
Scope
(a) Range of Hardware Used
The specific MAV model we will use is based on the PIXHAWK Cheetah that was developed at ETH Zürich. The PIXHAWK Cheetah is a quadcopter (4 rotors). On board this MAV we can find:
1- A custom-designed microprocessor board that serves as:
a- An Inertial Measurement Unit (IMU), and
b- A low-level flight controller, for steering the MAV to the desired target waypoint. The low-level flight control software is built around an unmodified PID controller provided by the PIXHAWK project, and consists of separate pose and attitude controllers.
2- A custom-made carrier board for a COM-Express single board computer. That is where a single board computer or an FPGA SoC can be fitted.
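As forward-referenced above, the following MATLAB lines give a minimal sketch of the intended depth-map steering logic. It is illustrative only: the names depthMap and safeDistance, and the simple left/right split, are our own assumptions and not the final FPGA design.

% Minimal illustrative sketch of depth-map-based steering (not the final design).
% depthMap: H-by-W matrix of distances in metres, from the stereo pipeline.
% safeDistance: minimum clearance tolerated ahead of the vehicle (metres).
function steer = steerFromDepthMap(depthMap, safeDistance)
    W = size(depthMap, 2);
    corridor = depthMap(:, round(W/3):round(2*W/3)); % region straight ahead
    if min(corridor(:)) > safeDistance
        steer = 0; % path ahead is clear, keep heading
    else
        % Steer toward the half of the scene with more free space.
        leftHalf  = depthMap(:, 1:floor(W/2));
        rightHalf = depthMap(:, floor(W/2)+1:end);
        if mean(leftHalf(:)) > mean(rightHalf(:))
            steer = -1; % steer left
        else
            steer = +1; % steer right
        end
    end
end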
(b) Hardware and FPGA-based Configuration
1- Overview of the system implementation:
The hardware designs of both custom circuit boards, as well as the low-level flight control software, have been made available as open source and can be obtained from the PIXHAWK website. With this software, it is possible to control the MAV if its current pose (position and orientation) is known. While it is the task of the FPGA to determine the MAV's pose using our photogrammetric algorithms, the attitude is estimated using the inertial sensors that are available on the IMU (microprocessor board). For steering the MAV, the FPGA can transmit a desired target position to the control software, which then attempts to approach it. It is thus possible to implement autonomous flight options for this MAV by letting the FPGA generate a series of desired target positions. The system design of the presented quad-rotor MAV is visualized in Figure 5.

Our MAV will be equipped with two (2) greyscale (monochrome) USB cameras, operated at a resolution of 640 x 480 (VGA) and 30 Hz (frames per second). The cameras are mounted in a forward-facing stereo configuration with a baseline of 11 cm. This requirement is fulfilled by the USB Firefly MV cameras from Point Grey®.

2- Development tools:
Since the early stages of this research, we have set MATLAB as our simulation and abstraction tool. This occasionally needs to be complemented by Simulink® so that the process described above can be visualized in a parallel fashion. To assist us in HDL coding for imagery, several fundamental toolboxes will be installed into the MATLAB environment.

The synthesis will be made on an FPGA of the Xilinx Zynq®-7000 All Programmable SoC family (device model 7Z020). This particular SoC has been very popular, and hence several independent companies have produced development boards around the Zynq-7000. The full-options Zedboard is one of these boards. It is even cheaper than the one proposed by Xilinx, and it was offered to us at a special student price after confirming our status as members of a recognized academic institution, using the university email. This board comes with a licence for the Xilinx Vivado® Design Edition that is locked to the device model of the Zedboard (i.e. 7Z020). This design suite will take the abstracted algorithms directly from MATLAB, and with some tuning and trial-and-error checks, it will be able to transport the assembled code to the FPGA, while providing the metric statistics about power and performance that we need for our further results publications.

To validate the effectiveness of our proposed solution, we will attempt a few experimental missions using a commercial UAV, by setting a flight scenario where the UAV has to: (1) Take off; (2) Find its own way to reach the destination point; then (3) Execute a safe landing. This challenge has to be completed autonomously before we can declare that our system has met its immediate scope; a conceptual sketch of this scenario is given at the end of this section. This empirical undertaking can hold true for the larger scope, namely for the other types of unmanned vehicles, since UAVs supplant them all by possessing six (6) Degrees of Freedom (DOF), the largest possible for any vehicle, and by being highly manoeuvrable. This automatically gives our system retro-support for vehicles with fewer than six (6) DOF, except for some adjustments to be performed on the Linear-Quadratic Regulator (LQR) controller that pertains to the control of the UAV's actuators.
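As forward-referenced above, the validation scenario can be thought of as a small state machine. The MATLAB sketch below is only a conceptual outline under our own assumptions: the waypoints, the 10 m cruise altitude and the straight-line motion step are dummy placeholders, not the PIXHAWK interface.

% Conceptual outline of the autonomous validation mission (illustrative only).
destination = [50, 30, 0];          % target landing point (metres, local frame)
pos = [0, 0, 0];                    % current position
state = 'takeoff';
while ~strcmp(state, 'done')
    switch state
        case 'takeoff'
            pos(3) = 10;            % climb to an assumed 10 m cruise altitude
            state = 'navigate';
        case 'navigate'
            if norm(pos(1:2) - destination(1:2)) < 1
                state = 'land';
            else
                % One unit step toward the goal; the real system would follow
                % the safe waypoints produced by the trajectory estimator.
                dir = destination(1:2) - pos(1:2);
                pos(1:2) = pos(1:2) + dir / norm(dir);
            end
        case 'land'
            pos(3) = 0;             % execute a safe landing
            state = 'done';
    end
end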
So by fulfilling this validation benchmark, the bigger scope of autonomously driving ground as well as underwater vehicles would be encompassed too.

Objektif
Objectives
The prospects of autonomous vision-guided vehicles are large, and there is a real possibility to tackle the problem using the hybrid approach of Stereo Vision backed by SfM. By looking at these two factors, we have set this proposed project to aim at the following targets:
(1) To examine the ways to significantly improve on the existing stereoscopic vision and structure from motion algorithms;
(2) To propose a new stereoscopic vision and structure from motion architecture, emulated and bundled on FPGA for trajectory estimation; and
(3) To evaluate the proposed system both from the perspective of hardware implementation (in terms of area footprint, power consumption and processing speed) and in terms of its viability as an effective and reliable navigational system.

Kajian Literatur
Literature Review
(a) Research Background
With stereo vision we refer to all cases where the same scene is observed by two cameras at different viewing positions. Hence, each camera observes a different projection of the scene, which allows us to perform inference on the scene's geometry. The obvious example of this mechanism is the human visual system. Our eyes are laterally displaced, which is why each observes a slightly different view of the current scene. This allows our brain to infer the depth of the scene in view, which is commonly referred to as stereopsis. Although it had long been believed that we are only able to sense scene depth for distances up to a few meters, Palmisano et al. [1] recently showed that stereo vision can support our depth perception abilities even at larger distances.

Using two cameras and methods from computer vision, it is possible to mimic the human ability of depth perception through stereo vision. An introduction to this field has been provided by Klette [2]. Depth perception is possible for arbitrary camera configurations, if the cameras share a sufficiently large common field of view. We assume that we have two idealized pinhole-type cameras C1 and C2 with projection centers O1 and
O2, as depicted in Figure 2. The distance between both projection centers is the baseline distance b. Both cameras observe the same point p, which is projected as p1 in the image plane belonging to camera C1. We are now interested in finding the point p2, which is the projection of the same point p on the image plane of camera C2. In the literature, this task is known as the stereo correspondence problem, and its solution through matching p1 to possible points in the image plane of C2 is called stereo matching.

Figure 2: Example of the epipolar geometry
[Figure 2 labels: epipolar plane; image planes 1 and 2; epipolar lines 1 and 2; projections p1 and p2 of point p; baseline; projection centers O1 and O2; epipoles]

In order to implement stereo vision depth awareness on unmanned vehicles, we first have to solve this stereo matching problem, which goes back to the question of how to make the FPGA able to tell that two points on two images taken of the same scene belong to the same scene feature. To achieve this result, we have to go through three (3) main stages. We will elaborate on each of them in the following sections, while pointing to the limitations observed and how we intend to tackle them in our proposed research.

(1) Image rectification:
The common approach to stereo vision includes a preliminary image rectification step, during which distortions are corrected. The resulting image after rectification should match the image received from an ideal pinhole camera. To be able to perform such a correction, we first require an accurate model of the image distortions. The distortion model most frequently used for this task today is the one introduced by Brown [3]. Using Brown's distortion model, we are able to calculate the undistorted image location (ũ, ṽ) that corresponds to the image location (u, v) in the distorted image:

ũ = u(1 + K1·r² + K2·r⁴ + K3·r⁶) + 2P1·u·v + P2·(r² + 2u²) …….. (1)
ṽ = v(1 + K1·r² + K2·r⁴ + K3·r⁶) + P1·(r² + 2v²) + 2P2·u·v …….. (2)

where r² = u² + v², the Ki are the radial distortion coefficients and the Pi the tangential (decentering) distortion coefficients.

Existing implementations of the discussed algorithms can be found in the OpenCV library (Itseez, [4]) or the MATLAB camera calibration toolbox (Bouguet, [5]), and that is how we plan to resolve this question of image rectification.

(2) Sparse vision method:
Despite the groundbreaking work of [6][7][8][9][10] and [11], there is a gap regarding the speed performance of their systems. Our examination of their work revealed that they employed dense stereo matching methods, which search for matching points over the entire input stereo images, thus increasing the computational load of their systems. One way to greatly speed up stereo matching is to not process all pixel locations of the input images. While the commonly used dense approaches find a disparity label for almost all pixels in the reference image (i.e. usually the left image), sparse methods like those in [12] and [13] only process a small set of salient image features. An example of the results obtained with a sparse compared to a dense stereo matching method can be found in Figures 3a and 3b.

Figure 3: (a) Sparse stereo matching results received with the presented method and (b) dense results received from a belief propagation based algorithm. The color scale corresponds to the disparity in pixels. [13]

The shown sparse example is precisely what we intend to apply in this research, which only finds disparity labels for a set of selected corner features. The color that is displayed for these features corresponds to the magnitude of the found disparity, with blue hues representing small and red hues representing large disparity values.
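The disparity values shown in Figure 3 translate directly into depth: for rectified cameras with focal length f (in pixels) and baseline b, a correspondence with disparity d = u1 − u2 lies at depth Z = f·b / d. For instance, with our b = 0.11 m baseline and an assumed focal length of f = 500 pixels (an illustrative value, not a measured one), a feature with d = 10 pixels would lie at Z = 500 × 0.11 / 10 = 5.5 m from the cameras.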
The method used for the dense example is the gradient-based belief propagation algorithm that was employed by Schauwecker and Klette [14] and Schauwecker et al. [15]. The results of this algorithm are dense disparity maps that assign a disparity label to all pixels in the left input image. Although sparse methods provide much less information than common dense approaches, this information can be sufficient for a set of applications, including UAV trajectory estimation and obstacle avoidance such as proposed here in our research.

(3) Feature detection:
In computer vision, a feature detector is an algorithm that selects a set of image points from a given input image. These points are chosen according to detector-specific saliency criteria. A good feature detector is expected to always select the same points when presented with images from the same scene. This should also be the case if the viewing position is changed, the camera is rotated or the illumination conditions are varied. How well a feature detector is able to redetect the same points is measured as repeatability, for which different definitions have been postulated by Schmid et al. [16] and Gauglitz et al. [17].

Feature detectors are often used in conjunction with feature descriptors. These methods aim at providing a robust identification of the detected image features, which facilitates their recognition in case they are re-observed. In our case, we are mainly interested in feature detection and less in feature description. A discussion of many existing methods in both fields can be found in the extensive survey published by Tuytelaars and Mikolajczyk [18]. Furthermore, a thorough evaluation of several of these methods was published by Gauglitz et al. [17].

Various existing feature detectors extract image corners. Corners serve well as image features, as they can be easily identified and their position can generally be located with good accuracy. Furthermore, image corners can still be identified as such if the image is rotated, or if the scale or scene illumination is changed. Hence, a reliable corner detector can provide features with high repeatability.

Figure 4: (a) Input image and features from (b) the Harris detector, (c) FAST and (d) SURF. [13]

One less recent but still popular method for corner detection is the Harris detector (Harris and Stephens, 1988). An example of the performance of this method can be seen in Figure 4b. A computationally less expensive method for detecting image corners is the Smallest Univalue Segment Assimilating Nucleus (SUSAN) detector that was proposed by Smith and Brady [19].
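Several of the detectors discussed in this section, including the FAST and SURF methods introduced next, are readily available in MATLAB's Computer Vision System Toolbox, which lets us compare them on our own images before committing to a hardware architecture. A minimal sketch (the input image name reuses our VASYD test picture as a placeholder):

% Compare corner/blob detectors on a sample frame (illustrative sketch).
I = rgb2gray(imread('VASYD_left.jpg'));   % placeholder input image

cornersHarris = detectHarrisFeatures(I);  % Harris & Stephens corner detector
cornersFAST   = detectFASTFeatures(I);    % FAST segment-test corner detector
blobsSURF     = detectSURFFeatures(I);    % SURF blob detector

% A sparse pipeline would keep only the strongest features.
figure; imshow(I); hold on;
plot(cornersFAST.selectStrongest(200));
fprintf('Harris: %d, FAST: %d, SURF: %d features\n', ...
    cornersHarris.Count, cornersFAST.Count, blobsSURF.Count);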
A more advanced method that is similar to the SUSAN detector is Features from Accelerated Segment Test (FAST), for which an example is shown in Figure 4c. One of the most influential methods in this category is the Scale Invariant Feature Transform (SIFT) by Lowe [20]. For this method, two Gaussian convolutions with different values for σ are computed for the input image. A more time-efficient blob detector that was inspired by SIFT is Speeded-Up Robust Features (SURF) by Bay et al. [21], for which an example is shown in Figure 4d. Instead of using a DoG for detecting feature locations, Bay et al. rely on the determinant of the Hessian matrix, which is known from the Hessian-Laplace detector (Mikolajczyk and Schmid [22]). Both SIFT and SURF exhibit a very high repeatability, as has been shown by Gauglitz et al. [17]. However, what Gauglitz et al. have also demonstrated is that both methods require significant computation time. In this research we are going to address this gap as well, by designing a slightly modified architecture of the FAST corner detection algorithm.

(4) Modeling the overall framework:
These three key elements of our trajectory estimation system will be supplemented by other filters, snippets and SfM modules, namely the Simultaneous Localization and Mapping (SLAM), as depicted in Figure 5.

Figure 5: Processing pipeline of our proposed FPGA implementation
[Figure 5 blocks: Feature Detection, Stereo Matching, Local SLAM, EKF Sensor Fusion, Pose Estimation; grouped into Low-Level Process and High-Level Process]

The overall architecture will be synthesized on reconfigurable hardware, consisting of field programmable gate arrays (FPGAs) [23], [24]. These platforms promise to be adequate building blocks for sophisticated devices at affordable cost. They offer heavy parallelism capabilities and considerable gate counts, and come in low-power packages [25], [26], [27], [28]. Based on the limitations of existing work, this project is concerned with an efficient implementation of trajectory estimation for autonomous navigation of unmanned vehicles, with a special interest in aerial ones. Figure 6 shows the anatomy of the projected overall system to be implemented. It is our aspired target to realize such an architecture and put it into application in different fields of life, like aerial imaging, shipping parcels, search & reconnaissance missions and many more. Moreover, this project will avail us of a locally-built solution that will not be bound to foreign royalties or at risk of patent infringement claims.

Figure 6: System implementation of the processing at the MAV physical level
[Figure 6 labels: microprocessor board with the low-level flight control software, PID controller, IMU, pose and attitude; FPGA SoC; serial link; greyscale cameras (baseline = 11 cm, USB port); I2C bus; quadrotor (4) motors controller; PIXHAWK Cheetah; propellers of the MAV]

(b) References
[1] S. Palmisano, B. Gillam, D. G. Govan, R. S. Allison, and J. M. Harris, "Stereoscopic perception of real depths at large distances," Journal of Vision, vol. 10, no. 6, pp. 19–19, Jun. 2010.
[2] R. Klette, Concise Computer Vision. London: Springer, 2014.
[3] D. C. Brown, "Decentering distortion of lenses," Photometric Engineering, vol. 32, no. 3, pp. 444–462, 1966.
[4] Itseez, "OpenCV," 2015. [Online]. Available: http://opencv.org. Accessed: Apr. 2, 2016.
[5] J. Y. Bouguet, "Camera Calibration Toolbox for MATLAB," 2013. [Online]. Available: http://vision.caltech.edu/. Accessed: Mar. 3, 2016.
[6] M. Achtelik, T. Zhang, K. Kuhnlenz, and M. Buss, "Visual tracking and control of a quadcopter using a stereo camera system and inertial sensors," IEEE, 2012, pp. 2863–2869.
[7] D. Pebrianti, F. Kendoul, S. Azrad, W. Wang, and K. Nonami, "Autonomous hovering and landing of a quad-rotor micro aerial vehicle by means of on-ground stereo vision system," Journal of System Design and Dynamics, vol. 4, no. 2, pp. 269–284, 2010.
[8] L. R. García Carrillo, A. E. Dzul López, R. Lozano, and C. Pégard, "Combining stereo vision and inertial navigation system for a quad-rotor UAV," Journal of Intelligent & Robotic Systems, vol. 65, no. 1-4, pp. 373–387, Aug. 2011.
[9] T. Tomic et al., "Toward a fully autonomous UAV: Research platform for indoor and outdoor urban search and rescue," IEEE Robotics & Automation Magazine, vol. 19, no. 3, pp. 46–56, Sep. 2012.
[10] A. Harmat, I. Sharf, and M. Trentini, "Parallel tracking and mapping with multiple cameras on an unmanned aerial vehicle," in Intelligent Robotics and Applications. Springer Science + Business Media, 2012, pp. 421–432.
[11] M. Nieuwenhuisen, D. Droeschel, J. Schneider, D. Holz, T. Läbe, and S. Behnke, "Multimodal obstacle detection and collision avoidance for micro aerial vehicles," IEEE, pp. 12–7.
[12] S. Shen, Y. Mulgaonkar, N. Michael, and V. Kumar, "Vision-based state estimation for autonomous rotorcraft MAVs in complex environments," IEEE, 2010, pp. 1758–1764.
[13] K. Schauwecker and A. Zell, "On-board dual-stereo-vision for the navigation of an autonomous MAV," Journal of Intelligent & Robotic Systems, vol. 74, no. 1-2, pp. 1–16, Oct. 2013.
[14] K. Schauwecker and R. Klette, "A comparative study of two vertical road modelling techniques," in Computer Vision – ACCV 2010 Workshops. Springer Science + Business Media, 2011, pp. 174–183.
[15] K. Schauwecker, S. Morales, S. Hermann, and R. Klette, "A comparative study of stereo-matching algorithms for road-modeling in the presence of windscreen wipers," IEEE, 2009, pp. 12–7.
[16] C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of interest point detectors," International Journal of Computer Vision, vol. 37, no. 2, pp. 151–172, 2000.
[17] S. Gauglitz, T. Höllerer, and M. Turk, "Evaluation of interest point detectors and feature descriptors for visual tracking," International Journal of Computer Vision, vol. 94, no. 3, pp. 335–360, Mar. 2011.
[18] T. Tuytelaars and K. Mikolajczyk, "Local invariant feature detectors: A survey," Foundations and Trends® in Computer Graphics and Vision, vol. 3, no. 3, pp. 177–280, 2007.
[19] S. M. Smith and J. M. Brady, "SUSAN – A new approach to low level image processing," International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
[20] D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. IEEE International Conference on Computer Vision, vol. 2, 1999, pp. 1150–1157.
[21] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Computer Vision – ECCV 2006, Lecture Notes in Computer Science. Springer Science + Business Media, 2006, pp. 404–417.
[22] K. Mikolajczyk and C. Schmid, "Indexing based on scale invariant interest points," in Proc. IEEE International Conference on Computer Vision, 2001.
[23] P. Dang, "VLSI architecture for real-time image and video processing systems," Journal of Real-Time Image Processing, vol. 1, pp. 57–62, 2006.
[24] T. Todman, G. Constantinides, S. Wilton, O. Mencer, W. Luk, and P. Cheung, "Reconfigurable computing: architectures and design methods," Computers and Digital Techniques, IEE Proceedings, vol. 152, no. 2, pp. 193–207, 2005.
[25] A. Ahmad, B. Krill, A. Amira, and H. Rabah, "Efficient architectures for 3-D HWT using dynamic partial reconfiguration," Journal of Systems Architecture, vol. 56, no. 8, pp. 305–316, 2010.
[26] A. Ahmad, B. Krill, A. Amira, and H. Rabah, "3-D Haar wavelet transform with dynamic partial reconfiguration for 3-D medical image compression," in Biomedical Circuits and Systems Conference, 2009. BioCAS 2009. IEEE, 2009, pp. 137–140.
[27] A. Ahmad and A. Amira, "Efficient reconfigurable architectures for 3-D medical image compression," in Field-Programmable Technology, 2009. FPT 2009. International Conference on, 2009, pp. 472–474.
[28] B. Krill, A. Ahmad, A. Amira, and H. Rabah, "An efficient FPGA-based dynamic partial reconfiguration design flow and environment for image and signal processing IP cores," Signal Processing: Image Communication, vol. 25, no. 5, pp. 377–387, 2010.
Metodologi
Methodology
1. Overview
In our research approach, we decided to make MATLAB our main research workhorse, instead of a direct Hardware Description Language (HDL) prototyping strategy, which has proven to be laborious, unreliable and rather obsolete. This choice of MATLAB remained evident after we examined our two computer vision techniques (i.e. Stereo Vision Depth and SfM) and concluded that their bulk consists of heavy mathematical processes and algorithms. The two processes can be intertwined very efficiently in Simulink, where we can graphically abstract, visualize and reconfigure both system chains, then hand over the task to MATLAB for further research.

We intend to build our prototype SoC around a Xilinx mid-range FPGA. These are known to have better support for MATLAB integration, and to have advanced computer vision readiness, especially with the release of the new Xilinx design suite renamed "Vivado", driven by the power of the Xilinx development boards.

2. Project Flow Graph
In a step-wise outline, we are planning the following research methodology path, as shown in Figure 7.

Figure 7: Project flow graph
(1) Identification of the major components of the Safe Trajectory Estimation block;
(2) Dissection: break the main algorithms into discrete fragments (to be compatible with RTL design);
(3) Virtualization: write each fragment in MATLAB (plain code) / Simulink (GUI);
(4) Validation: verify the execution of each algorithm using MATLAB / Simulink;
(5) Simulation: run a test bench for the critical situations encountered;
(6) Design & Synthesis: define the way to export into the FPGA fabric;
(7) Benchmarking: perform measurements and comparisons in order to gauge the efficiency (metrics) of our SW / HW solutions; if the benchmark does not pass, return to the earlier stages, otherwise proceed;
(8) Assemble & Deploy the resulting design into one single SoC deliverable prototype.
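To give an idea of the Design & Synthesis step of this flow, the following sketch shows how a Simulink subsystem could be exported to HDL with MathWorks' HDL Coder. The model name 'trajectory_est' and the subsystem name 'StereoDepth' are hypothetical placeholders of our own; the exact hand-off into Vivado will be refined later.

% Illustrative HDL generation flow (assumes HDL Coder is installed and a
% Simulink model 'trajectory_est' with a subsystem 'StereoDepth' exists).
load_system('trajectory_est');
hdlsetup('trajectory_est');                 % configure the model for HDL generation
makehdl('trajectory_est/StereoDepth', ...   % generate HDL for the subsystem
        'TargetLanguage', 'VHDL');
makehdltb('trajectory_est/StereoDepth');    % generate a matching HDL test bench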
Hasil Penyelidikan
Research Outcomes
1. Jangkaan Hasil Kajian
Expected Research Outcomes
The autonomous navigation of unmanned vehicles in general is the end result that this research aspires to achieve. Once it reaches completion, this research project is also expected to lead to the following results and deliverables:
(1) A full-fledged System-on-Chip for safe trajectory estimation for autonomous driving of unmanned vehicles.
(2) An advanced on-board architecture to navigate unmanned vehicles without human intervention.
(3) Optimized execution of the Stereo Vision Depth and SfM processes on a hardware platform.
(4) A definition of the technical limitations of the Stereo Vision Depth and SfM algorithms on hardware platforms.
(5) Elaboration of novel techniques to identify surrounding objects using computer vision.
(6) A description of the system taxonomy with a set of recommendations on best design practices for subsequent works.

2. Hasil Kajian Terkini
Latest Research Outcomes
As indicated in the methodology chart, the first key task in our research was the identification of the different components of the system and how they correlate together. This is one of the foundational works in any design flow, and it required drafting the block diagram that will compose the backbone of our overall system. Based on the theory and literature experience accumulated during the first period of this research, we have come up with an all-encompassing block diagram that is workable and exhibits high coherence between the two main modules, Stereo Vision Depth and SfM, within its anatomy. This configuration intertwines both core modules with the peripherals of the unmanned vehicle and with the other essential mathematical and control processes. Figure 8 shows how this block diagram has been designed.

Figure 8: Block diagram of the overall design
[Figure 8 blocks: Camera 1, Camera 2, Stereo Vision Depth, Structure from Motion, Obstacle Avoidance, EKF Data Fusion, SLAM, Planner, LQR Controller, IMU, Onboard Controller; hosted on the FPGA, aboard the UAV]

As discussed previously, we have set up two cameras in a stereo configuration with an 11 cm baseline. To begin using MATLAB in the process of recovering depth from camera images, we elaborated the following code, which computes and compares two or more views of the same scene. For the experimentation, we took two still pictures of the VASYD Lab at UTHM (Malaysia) using our camera. The pictures were of the same scene, but taken from two points of view 11 cm apart along the horizontal, just as it would be if we had two cameras mounted in a stereo configuration. The output of this experimentation is a 3-D point cloud, where each 3-D point corresponds to a pixel in one of the images.

Stereo image rectification projects images onto a common image plane in such a way that the corresponding points have the same row coordinates. This process is useful for stereo vision, because the 2-D stereo correspondence problem is reduced to a 1-D problem:
[Input images: VASYD_left.jpg and VASYD_right.jpg]

(1) Load the stereoParameters object, which is the result of calibrating the camera using either the stereoCameraCalibrator app or the estimateCameraParameters function:

% Load the stereoParameters object.
load('VASYDStereoParams.mat');
% Visualize camera extrinsics.
showExtrinsics(stereoParams);

(2) Create Video File Readers and the Video Player: Create System Objects for reading and displaying the video.

videoFileLeft = 'VASYD_left.avi';
videoFileRight = 'VASYD_right.avi';
readerLeft = vision.VideoFileReader(videoFileLeft, 'VideoOutputDataType', 'uint8');
readerRight = vision.VideoFileReader(videoFileRight, 'VideoOutputDataType', 'uint8');
player = vision.DeployableVideoPlayer('Location', [20, 400]);

(3) Read and Rectify Video Frames: The frames from the left and the right cameras must be rectified in order to compute disparity and reconstruct the 3-D scene. Rectified images have horizontal epipolar lines, and are row-aligned. This simplifies the computation of disparity by reducing the search space for matching points to one dimension. Rectified images can also be combined into an anaglyph, which can be viewed using stereo red-cyan glasses to see the 3-D effect.

frameLeft = readerLeft.step();
frameRight = readerRight.step();
[frameLeftRect, frameRightRect] = rectifyStereoImages(frameLeft, frameRight, stereoParams);
figure;
imshow(stereoAnaglyph(frameLeftRect, frameRightRect));
title('Rectified Video Frames');

(4) Compute Disparity: In rectified stereo images, any pair of corresponding points is located on the same pixel row. For each pixel in the left image, we compute the distance to the corresponding pixel in the right image. This distance is called the disparity, and it is inversely proportional to the distance of the corresponding world point from the camera.

frameLeftGray = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);
disparityMap = disparity(frameLeftGray, frameRightGray);
figure;
imshow(disparityMap, [0, 64]);
title('Disparity Map');
colormap jet
colorbar

(5) Reconstruct the 3-D Scene: Reconstruct the 3-D world coordinates of the points corresponding to each pixel from the disparity map.

points3D = reconstructScene(disparityMap, stereoParams);
% Convert to meters and create a pointCloud object.
points3D = points3D ./ 1000;
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
% Create a streaming point cloud viewer.
player3D = pcplayer([-3, 3], [-3, 3], [0, 8], 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');
% Visualize the point cloud.
view(player3D, ptCloud);

(6) Process the Rest of the Video: Apply the steps described above to every frame of the video. In the field the frames would come from a frame grabber, which is not the case in this experimentation.
while ~isDone(readerLeft) && ~isDone(readerRight)
    % Read the frames.
    frameLeft = readerLeft.step();
    frameRight = readerRight.step();
    % Rectify the frames.
    [frameLeftRect, frameRightRect] = rectifyStereoImages(frameLeft, frameRight, stereoParams);
    % Convert to grayscale.
    frameLeftGray = rgb2gray(frameLeftRect);
    frameRightGray = rgb2gray(frameRightRect);
    % Compute disparity.
    disparityMap = disparity(frameLeftGray, frameRightGray);
    % Reconstruct the 3-D scene.
    points3D = reconstructScene(disparityMap, stereoParams);
    points3D = points3D ./ 1000;
    ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
    view(player3D, ptCloud);
    % Display the rectified left frame.
    step(player, frameLeftRect);
end
% Clean up.
reset(readerLeft);
reset(readerRight);
release(player);

Kemajuan Penyelidikan
Research Progress
BAB (CHAPTER) | PERATUS SIAP (PERCENTAGE OF COMPLETION) | CATATAN (REMARKS)
INTRODUCTION | 50 % | The final thesis introduction will take most of its source from this original introduction, plus some final editing.
LITERATURE REVIEW | 70 % | The literature review is an ongoing process, and we expect this chapter to be edited frequently with every new development around photogrammetry.
RESEARCH METHODOLOGY | 40 % | Our methodology flowchart has been set up, and the conception and virtualization of the algorithms have started, in parallel with their evaluation.
RESULTS & ANALYSIS | 20 % | The results & analysis advancement so far is in the form of block diagram identification and a few MATLAB experiments.
CONCLUSION | 10 % |

Nota: Sila masukkan lampiran jika ruangan tidak mencukupi
Note: Please use an attachment if the provided space is not enough

Masalah Yang Memberi Kesan Kepada Kemajuan Penyelidikan Dan Langkah Yang Diambil Bagi Mengatasi Masalah Tersebut.
Problems That Affect Research Progress And Remedial Actions Taken To Resolve Them
The research has just rolled out, but this part is by far the most decisive, because here we define our direction for the next year. The problem, in that sense, was to come to a decision about the right approach to use for our safe trajectory estimation project. The whole project can fall apart if we do not pick carefully the techniques and components to be used, and if we give in to over-ambition and wishful thinking. To be sure we were on the right path, we reviewed tens of similar projects done around the world, and determined where they stalled in the design process and what errors they committed, so that we do not repeat them in ours. Their recommendations regarding the importance of stereo image stability, offline localization and power consumption were decisive in our choice of the Structure from Motion technique as a way to mitigate the irregularities that the Stereo Vision Depth technique may produce when working alone. For the issue of power consumption, we were tempted to try a novel configuration of Compressed Sensing (CS) using Orthogonal Matching Pursuit, which researchers have successfully integrated on FPGA.

A more trivial problem concerned the technicalities of my MATLAB version, as I had to find and download a handful of missing libraries and a particular toolbox that is essential for the Stereo Vision Depth technique. I also had to make sure there was no contentious matter regarding the patent rights of the authors of those libraries, because it may entail royalty payments or compensation if we come to use them in our FPGA prototype without the consent of their intellectual property owners.
The same care was taken when it came to the standard stereo images to be used to calibrate or benchmark our Stereo Vision Depth algorithm. These images are recognized by the computer vision research community as ideal to test, gauge and compare Stereo Vision modules, but their usage has to be free and legal before we can include them in our future test bench.

The problem of finding the right baseline between the two cameras was also tackled. We had to resort to a nuts-and-bolts reasoning of the stereo camera configuration, and tune the distance until we got an acceptable, close-to-accurate distance determined between the camera plane and the objects being investigated in the scene.

Putting the MATLAB code together was another ordeal, because of the many issues pertaining to mathematical multiplications of matrices versus vectors. This goes back to whether a multiplication is 3-D or 2-D. It was primordial to know which multiplication is which, in order to obtain the valid result, and not be tricked by a wrong result that would seem correct as well.

BAHAGIAN D
PART D
AKTIVITI PELAJAR
STUDENT ACTIVITIES
Pembentangan Kertas Kerja, Menghadiri Seminar, dll.
Papers presented, seminars attended, etc.
1. FKEE Hari Transformasi Minda (Mind Transformation Day): poster presentation, with a poster titled "Simulation & Analysis of Different DCT Techniques for Image Compression on MATLAB"
2. Publisher's Talk: Research Best Practices (Dr Wong Woei Fuh) @UTM Skudai
3. Chairman Lecture Series: "The Importance of Practical Engineer In The Industry" @Sultan Ibrahim Banquet Hall, DSI, UTHM
4. Malaysian Technical Universities Conference on Engineering & Technology 2015 (MUCET2015) @KSL JB (attending the keynote speeches)
5. Short course: Health Monitoring of Civil Structure @UTM Skudai (Faculty of Biomedical)
6. Making HR Technology Relevant to Your Organization @Thistle Hotel JB
7. SolidWorks Innovation Day @Malacca (full-day training)
8. WIEF 2015 (11th World Islamic Economic Forum), as a delegate representing Morocco @KLCC
9. 2nd IdeaPad (side event of WIEF 2015), as a presenter of my PowerKasut non-credit project @KLCC
10. Impact & Insights Dialogue: Making an Impact on Education, by Hong Leong Foundation @KL
11. 1AES (ASEAN Entrepreneurship Summit 2015), in conjunction with the ASEAN Summit in Malaysia @KL
12. Social Entrepreneurship Bootcamp by MaGIC (a side event of 1AES): a 3-day workshop where we transformed a social idea into a business model
13. CEO Faculty Programme, a talk by Dato' Wei Chuan Beng, founder of RedONE: The Journey to Entrepreneurship @UTHM
14. Week of LabVIEW Webcast Series (5 sessions of 30 mins each) @National Instruments ASEAN
15. Talk by Prof Simone Hochgrab (Cambridge Univ.): Advances in Reacting Flow Measurements @UTM Skudai
16. Workshop: Characteristics of a Good Literature Review, by Prof Abu Bakar bin Mohammad @UTM Skudai (FKE)
17. UTHM Chairman Lecture Series: Building Info Modelling in Facilities Management Perspectives (by the Director of Microcorp Technology Sdn Bhd)
18. Transformasi Minda Mahasiswa (Student Mind Transformation): course on Design Technique for 3-D Printing @UTHM (FKEE)
19. How to Finish your Master or PhD Without Correction @Seminar Room, ORICC, UTHM
20. 2016 Offshore Technology Conference Asia (OTC Asia) @KLCC (as a visitor)
21. Wacana Ilmu Siri 1 (Knowledge Discourse Series 1): Understanding Scopus, Google Scholar and ISI Web of Science @UTHM
22. Seminar Pemeriksa Luar FKEE (FKEE External Examiner Seminar): Technopreneurship - From Student Project to Startup to Public Listed Company (by Prof Ahmad Fadzli Hani)
23. 11th ITU Symposium on ICT, Environment and Climate Change @Renaissance Hotel Kuala Lumpur
24. Datacloud South East Asia Forum @Zon Regency Hotel, JB
25. Talk by Ir. Shaik Abdul Wahab bin Dato Hj Rahim, Director of GEA Sdn Bhd: Site Investigation @UTHM
26. International Seminar on Power and Communication Engineering (iSPACE2016) by FKEE @UTHM
27. 1st FKEE PG Research Conference (1st FKEE PG ResConf) by FKEE @UTHM (presenting an article, a poster, and an oral presentation)
Kegiatan Bukan Akademik:
Non-Academic Activities
1. Youth Trailblazers Challenge 2015 @UTM Skudai
2. ALIC (Arabic Language Intensive Course): I taught Arabic language basics to a class in a 6-hour day class @UTM Skudai (Faculty of Islamic Civilization)
3. Malaysians United Run 2015, organized by Institut Onn Jaafar (IOJ) @KL
4. Kolej Kediaman Perwira's Festival Keamanan (Peace Festival): Bengkel Bahasa Perancis (French Language Workshop), where I taught French basics in a 2-hour night class
5. ICMF 2015 (International Cultural Mega Festival), as an organizing committee member @UTM Skudai
6. Kolej Kediaman Perwira's "Gembur Kasih 5.0" activity @KK Perwira Taman Simbiosis
7. UTHM International Cultural Evening @Sultan Ibrahim Banquet Hall, DSI, UTHM
8. FKEE Jalinan Muafakat 2.0 @Padang B (Rugby Field), UTHM
9. Temasya Sukan Perwira (Perwira Sports Carnival) @UTHM Stadium
10. UTHM Radio: invited 3 times to talk (3-hour slots each), sharing tips on how to improve English, and my experience being abroad

Penganugerahan / Penghargaan:
Recognitions / Awards
1. FKEE Hari Transformasi Minda, 2 Minutes Idea: winner of both 1st & 2nd place
2. Kolej Kediaman Perwira's Festival Keamanan, Larian Keamanan (Peace Run): I arrived in 4th position in this cross-country run around the KK Perwira vicinities
3. 3 Minutes Thesis Competition 2016: winner of 1st place (Master students category)
4. Pidato Antarabangsa Bahasa Melayu Piala Perdana Menteri (PABM) in Putrajaya: Top 15 in Malaysia (international students category)
5. 2nd Kazan OIC Entrepreneurship Forum (international competition) in Kazan, Republic of Tatarstan, Russia: selected to pitch in front of the President of the Republic of Tatarstan