1) The document is a progress report for a research project on implementing stereo vision and structure from motion algorithms on an FPGA to enable autonomous navigation of unmanned vehicles without human control.
2) The goal is to use stereo vision depth to estimate the vehicle's trajectory and detect obstacles, and to incorporate structure from motion to continuously adapt the trajectory and ensure the vehicle reaches its destination.
3) The proposed system would be tested on a quadcopter drone equipped with cameras and an FPGA, with the aim of autonomously taking off, navigating to a destination while avoiding obstacles, and landing, to demonstrate autonomous navigation capabilities.
[Progress report] for this leisurely side-project I was doing in 2016
BAHAGIAN A
PART A
PROJECT PROGRESS REPORT
Tarikh Pendaftaran Pertama
First Date of Registration
8 September 2015
Tarikh Tamat
Date of Conclusion
25 January 2017
BAHAGIAN B
PART B
MAKLUMAT KURSUS
COURSE INFORMATION
Jumlah Pengecualian Kredit / Kursus Yang Diluluskan Oleh Universiti (jika berkaitan):
Total credit exemption / courses approved by the University (if applicable)
[none]
Senarai Kursus Yang Didaftar Semester Ini
List of Courses Registered for This Semester
Nama Kursus / Courses                             Kod / Code    Kredit / Credit
1) PENYELIDIKAN (Research)                        KEE10100      0
2) KAEDAH PENYELIDIKAN (Research Methodology)     KEE10200      0
BAHAGIAN C
PART C
MAKLUMAT PENYELIDIKAN
RESEARCH INFORMATION
Tajuk
Title
FPGA-based Implementation of Safe Trajectory Estimation for Unmanned Vehicles using Photogrammetry Navigation Techniques
Latar Belakang
Background
In the last five years, we have witnessed the worldwide emergence of a new class of robots: the unmanned vehicles, ranging from drones (i.e., Unmanned Aerial Vehicles, UAVs), remotely operated underwater vehicles (ROVs), and inflatable dirigible balloons (airships), to driverless cars and all the other amateur and hobbyist gadgets in between, like the popularized quadcopters.
They are now more affordable than ever before, and are slowly but surely populating our roads, waters and skies, to the point that they will become ubiquitous elements of traffic and logistics. Yet to this day, these vehicles are still guided remotely by on-ground human operators, who are prone to error and pressed for time.
In this study, we want to take the application of unmanned vehicles to a new level, by sparing them the need for a human operator and making them fully autonomous. This will be possible by harnessing the power of two computer vision methods that are essential parts of photogrammetry technology: stereo vision depth and structure from motion (SfM). Our contribution will allow the unmanned vehicle to be automatically aware of the dangers and obstructions that cross its way, without any human intervention.
Penyataan Masalah
Problem Statements
The main issue daunting engineers is how to guide these robots through unfamiliar zones while saving working hours and stress for ground operators. For UAVs, the common practice is to drive the unmanned vehicle from a distance, where an operator has a bird's-eye view through the camera(s) fixed on the vehicle. But this solution makes the UAV dependent on the human side, who can be prone to error, misjudgement, and incompetence, and who does not have a full grasp of the actual flight or driving conditions. Moreover, human-operated UAVs have no autonomy of decision, and thus are unable to react quickly and counter abrupt events, which occur very often during navigation.
Besides that, these teleguided unmanned vehicles are exposed to some serious technical limitations and threats:
(a) They can be stranded and cut off from communication in GPS-denied areas such as tunnels and undergrounds, or when encircled by concrete walls; and
(b) In the case of a security breach of their transmission protocol, their communication with ground control can be compromised by malicious signal hijacking, or by non-deliberate signal interference.
The proposed solution to this problem of navigation autonomy resides in the design of an intelligent trajectory estimation system. This system will predict and trace a safe itinerary (i.e., a planned path) clear of all sorts of obstacles. The unmanned vehicle will then only have to follow the coordinates of this itinerary until it reaches its given destination. The itinerary must continuously auto-adapt itself to real-time changes in the environment, traffic movement, and the occurrence of unexpected obstacles, especially moving ones, as in Figure 1.
Figure 1: A representation of the proposed UAV obstacle-aware system deployed in the field
In Figure 1, it is clear that the blue trajectory is the shortest, but it crosses many obstacles. The FPGA, fed by images from the two on-board cameras, will determine the red itinerary intelligently, so as to get around these obstacles closely and safely, without making the new trajectory too time- and resource-exhausting, especially for power-conscious vehicles like the Micro Aerial Vehicle (MAV) in this example.
A highly promising method to make such a trajectory possible is stereo vision depth, which gives the unmanned vehicle the ability to estimate the relative distance (i.e., a depth map) separating it from the different objects surrounding it. We would then write an FPGA routine that continuously refers to that depth map to determine where the unmanned vehicle is located with respect to surrounding objects. It then figures out which obstacles are in its path and computes how much it has to steer to avoid them.
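As a rough illustration of how such a routine could consume the depth map, here is a minimal Python sketch (not the FPGA implementation; the three-way region split, clearance threshold, and tie-breaking rule are all illustrative assumptions):

```python
import numpy as np

def steer_from_depth(depth, min_clearance=2.0):
    """Decide a steering direction from a depth map given in meters.

    The image is split into left / center / right thirds. If the
    closest object in the center third is nearer than min_clearance,
    steer toward the side whose median depth is larger (more free
    space). Returns -1 (steer left), 0 (keep heading), +1 (steer right).
    """
    _, w = depth.shape
    left = depth[:, : w // 3]
    center = depth[:, w // 3 : 2 * w // 3]
    right = depth[:, 2 * w // 3 :]
    if center.min() >= min_clearance:
        return 0  # path ahead is clear
    # Ties between the side regions break toward the right.
    return -1 if np.median(left) > np.median(right) else 1

# Toy 480x640 scene: everything 10 m away, one obstacle 1 m ahead
scene = np.full((480, 640), 10.0)
scene[200:300, 300:360] = 1.0  # obstacle inside the center third
print(steer_from_depth(scene))  # -> 1 (steer right; side medians tie)
```

A real system would also weight obstacles by their distance and image position rather than reacting only to the single nearest pixel.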
But the exclusive novelty we are bringing in our research is the incorporation of a technique commonly termed Structure from Motion, abbreviated as "SfM". This set of computer vision algorithms constructs 3-D models of landscapes with considerable accuracy by analyzing only single 2-D views of the scene in motion. The primary function of SfM in our proposed system is to execute the Simultaneous Localization and Mapping (SLAM) that ensures our unmanned vehicle always stays on track along its safe trajectory toward its predetermined destination, while at the same time suggesting the optimal route to reach that destination.
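SfM pipelines differ in their details, but at their core they recover 3-D structure by triangulating matched 2-D observations. A textbook building block, linear (DLT) triangulation of one point from two views, can be sketched as follows; the camera intrinsics and poses here are invented for illustration, not taken from the project hardware:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel observations of the same point in each view
    Returns the 3-D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3-D point through a 3x4 camera matrix to pixels."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two identical pinhole cameras; the second sits 0.11 m (the stereo
# baseline) to the right. A point 2 m in front of the rig.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.11], [0.], [0.]])])
X_true = np.array([0.3, -0.1, 2.0])

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_hat, 6))  # recovers X_true in the noise-free case
```

With noisy matches, a full SfM/SLAM system would refine such triangulated points and the camera poses jointly (bundle adjustment), which is where the bulk of the computational cost lies.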
Matlamat
Aim
Our solution's aim is therefore to make these two methods (i.e., stereo vision depth and SfM) coexist on the same System-on-Chip (SoC), and to apply them in a real autonomous navigation situation (i.e., to test them in the field), so that one method can mitigate the error the other may have propagated, in the manner of a hybrid system. This implies building a field-programmable gate array (FPGA) prototype and mounting it on a binocular mid-range drone for validation and deployment. MATLAB has been selected as our abstraction tool, and Vivado HLS will be the middleware between the HDL and the FPGA. It is expected that this research will yield an FPGA design capable of piloting an unmanned vehicle by sensing the dangers along its path.
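The report does not fix the fusion rule by which one method mitigates the other's error. One simple candidate, sketched here purely as an assumption, is inverse-variance weighting of the two position estimates (the numeric variances below are made up for the example):

```python
import numpy as np

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance fusion of two independent position estimates.

    The less noisy estimate gets the larger weight, so each method
    mitigates the error the other may have propagated.
    Returns the fused estimate and its (smaller) combined variance.
    """
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

stereo_pos = np.array([1.00, 2.00, 0.50])  # from stereo depth, var 0.04
sfm_pos    = np.array([1.10, 1.95, 0.52])  # from SfM/SLAM,     var 0.01
pos, var = fuse(stereo_pos, 0.04, sfm_pos, 0.01)
print(np.round(pos, 3), round(var, 3))  # fused variance < either input
```

A production system would more likely run a Kalman-style filter that also accounts for vehicle dynamics, but the weighting intuition is the same.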
Skop
Scope
(a) Range of Hardware Used
The specific MAV model we will use is based on the PIXHAWK Cheetah, which was developed at ETH Zürich.
The PIXHAWK Cheetah is a quadcopter (4 rotors). On board this MAV we find:
1- A custom-designed microprocessor board that serves as:
a- An Inertial Measurement Unit (IMU), and
b- A low-level flight controller for steering the MAV to the desired target waypoint. The low-level flight control software is built around an unmodified PID controller provided by the PIXHAWK project, which consists of separate pose and attitude controllers.
2- A custom-made carrier board for a COM-Express single-board computer, where a single-board computer or an FPGA SoC can be fitted.
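PIXHAWK's actual flight controller is considerably more elaborate, but the textbook discrete PID loop it is built around can be sketched as follows (the gains, time step, and unit-mass altitude model are illustrative, not the project's values):

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a 1-D altitude toward 10 m using a crude unit-mass model
pid = PID(kp=2.0, ki=0.5, kd=1.0, dt=0.1)
alt, vel = 0.0, 0.0
for _ in range(300):  # 30 s of simulated flight
    thrust = pid.step(10.0, alt)
    vel += thrust * 0.1  # a = F/m with m = 1
    alt += vel * 0.1
print(round(alt, 2))  # settles near the 10 m setpoint
```

In the real stack, one such loop runs on attitude and another on pose, each with its own tuned gains and output limits.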
(b) Hardware and FPGA-based Configuration
[Figure 1 legend: Obstacles; Possible Itinerary to join; Real Itinerary Taken; Departure Point; Arrival Point; Shortest Distance]
1- Overview of the system implementation:
The hardware designs of both custom circuit boards, as well as the low-level flight control software, have been made available as open source
and can be obtained from the PIXHAWK website.
With this software, it is possible to control the MAV if its current pose (position and orientation) is known. While it is the task of the FPGA to
determine the MAV’s pose using our photogrammetric algorithms, the attitude is estimated using the inertial sensors that are available on the
IMU (microprocessor board). For steering the MAV, the FPGA can transmit a desired target position to the control software, which then
attempts to approach it. It is thus possible to implement autonomous flight options for this MAV by letting the FPGA generate a series of
desired target positions. The system design of the presented quad-rotor MAV was previously visualized in Figure 5.
Our MAV would be equipped with two (2) greyscale (monochrome) USB cameras, which are operated at a resolution of 640 x 480 (VGA)
and 30 Hz (frames per second).
The cameras are mounted in a forward-facing stereo configuration with a baseline of 11 cm. This requirement is fulfilled by the USB Firefly MV
cameras from Point Grey®.
2- Development tools:
Since the early stages of this research, we have set MATLAB as our simulation and abstraction tool. It occasionally needs to be
complemented by Simulink® so that the process we described above can be visualized in a parallel fashion.
To assist us in HDL coding for imagery, several fundamental toolboxes will be installed into the MATLAB environment.
The synthesis will be made on an FPGA of the Xilinx Zynq®-7000 All Programmable SoC family (device model 7Z020). This particular SoC
has been very popular; hence, several independent companies have produced development boards around the Zynq-7000. The full-featured
ZedBoard is one of these boards. It is even cheaper than the board proposed by Xilinx, and it was offered to us at a special student price after
confirming our status as members of a recognized academic institution, using the university email.
This board comes with a licence for the Xilinx Vivado® Design Edition that is locked to the device model of the ZedBoard (i.e., 7Z020). This design
suite will take the abstracted algorithms directly from MATLAB and, with some tuning and trial-and-error checks, will be able to deploy the
assembled code to the FPGA, while providing the metric statistics about power and performance that we need for our further results publications.
To validate the effectiveness of our proposed solution, we will attempt a few experimental missions using a commercial UAV, by setting a flight
scenario where the UAV has to:
(1) Take-off;
(2) Find its own way to reach the destination point; then
(3) Execute a safe landing.
This challenge has to be completed autonomously to declare that our system has met its immediate scope.
This empirical undertaking can hold true for the larger scope, namely for the other types of unmanned vehicles, since UAVs surpass them all by
possessing six (6) Degrees of Freedom (DOF) - the largest number of any vehicle - and by being highly manoeuvrable. This automatically gives our
system retro-support for vehicles with fewer than six (6) DOF, except for some adjustments to be performed on the Linear-Quadratic Regulator (LQR)
controller that pertains to the control of the UAV's actuators.
So, by fulfilling this validation benchmark, the bigger scope of autonomously driving ground as well as underwater vehicles would be
encompassed too.
Objektif
Objectives
The prospects of autonomous vision-guided vehicles are vast, and there is a real possibility of tackling the problem using the hybrid approach of
Stereo Vision backed by SfM. In view of these two factors, we have set this proposed project to aim at the following targets:
(1) To examine ways to significantly improve on the existing stereoscopic vision and structure from motion algorithms;
(2) To propose a new stereoscopic vision and structure from motion architecture, emulated and bundled on an FPGA for trajectory estimation;
and
(3) To evaluate the proposed system both from the perspective of hardware implementation (in terms of area footprint, power consumption and
processing speed) and of its viability as an effective and reliable navigational system.
Kajian Literatur
Literature Review
(a) Research Background
With stereo vision we refer to all cases where the same scene is observed by two cameras at different viewing positions. Hence, each camera
observes a different projection of the scene, which allows us to perform inference on the scene's geometry. The obvious example of this
mechanism is the human visual system. Our eyes are laterally displaced, which is why each of them observes a slightly different view of the
current scene. This allows our brain to infer the depth of the scene in view, which is commonly referred to as stereopsis. Although it has long been
believed that we are only able to sense scene depth for distances of up to a few meters, Palmisano et al. [1] recently showed that stereo vision can
support our depth perception abilities even at larger distances.
Using two cameras and methods from computer vision, it is possible to mimic the human ability of depth perception through stereo vision. An
introduction to this field has been provided by Klette [2]. Depth perception is possible for arbitrary camera configurations, if the cameras share a
sufficiently large common field of view. We assume that we have two idealized pinhole-type cameras C1 and C2 with projection centers O1 and
O2, as depicted in Figure 2. The distance between both projection centers is the baseline distance b. Both cameras observe the same point p,
which is projected as p1 in the image plane belonging to camera C1. We are now interested in finding the point p2, which is the projection of the
same point p on the image plane of camera C2. In the literature, this task is known as the stereo correspondence problem, and its solution through
matching p1 to possible points in the image plane of C2 is called stereo matching.
Figure 2: Example of the epipolar geometry
In order to implement stereo vision depth awareness on unmanned vehicles, we first have to solve this stereo matching problem, which comes down
to the question of how to make the FPGA able to tell that two points on two images taken of the same scene belong to the same scene feature.
To achieve this result, we have to go through three (3) main stages. We will elaborate on each of them in the following sections, while pointing out
the limitations observed and how we intend to tackle them in our proposed research.
(1) Image rectification:
The common approach to stereo vision includes a preliminary image rectification step, during which distortions are corrected. The resulting image
after rectification should match the image received from an ideal pinhole camera. To be able to perform such a correction, we first require an
accurate model of the image distortions. The distortion model most frequently used for this task today is the one introduced by Brown [3].
Using Brown's distortion model, with radial coefficients K1, K2, K3, tangential coefficients P1, P2 and r² = u² + v², we are able to calculate the
undistorted image location (ũ, ṽ) that corresponds to the image location (u, v) in the distorted image:
ũ = u·(1 + K1·r² + K2·r⁴ + K3·r⁶) + 2·P1·u·v + P2·(r² + 2u²) …….. (1)
ṽ = v·(1 + K1·r² + K2·r⁴ + K3·r⁶) + P1·(r² + 2v²) + 2·P2·u·v …….. (2)
Existing implementations of the discussed algorithms can be found in the OpenCV library (Itseez, [4]) or the MATLAB camera calibration toolbox
(Bouguet, [5]), and that is how we plan to resolve this question of image rectification.
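As a minimal sketch of what such a rectification step computes per pixel, the following pure-Python function evaluates Brown's radial plus tangential model for one normalized image point. The coefficient values in the demo call are hypothetical illustration values, not the result of a real calibration; in practice the coefficients come from the OpenCV or MATLAB calibration tools named above.

```python
def undistort_point(u, v, k1, k2, k3, p1, p2):
    """Map an image point (u, v) to its corrected location using Brown's
    model: a radial polynomial in r^2 plus a tangential term.
    Coefficients are supplied by a prior camera calibration."""
    r2 = u * u + v * v
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    du = 2.0 * p1 * u * v + p2 * (r2 + 2.0 * u * u)
    dv = p1 * (r2 + 2.0 * v * v) + 2.0 * p2 * u * v
    return u * radial + du, v * radial + dv

# Demo with hypothetical coefficients: a small positive k1 pushes the
# point slightly outward; with all coefficients zero the map is identity.
corrected = undistort_point(0.3, -0.2, k1=0.1, k2=0.0, k3=0.0, p1=0.0, p2=0.0)
```

A full rectifier applies this map (combined with the camera intrinsics) to every pixel, which is why the library implementations precompute it as a lookup table.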
(2) Sparse vision method:
Despite the groundbreaking work by [6][7][8][9][10] and [11], there is a gap regarding the speed performance of their systems. Our examination of
their work revealed that they employed dense stereo matching methods, which search for matching points across the entire input stereo
images, thus increasing the computational load of their systems. One way to greatly speed up stereo matching is to not process all pixel locations
of the input images. While the commonly used dense approaches find a disparity label for almost all pixels in the reference image (i.e., usually the
left image), sparse methods like those in [12] and [13] only process a small set of salient image features. An example of the results obtained with a
sparse compared to a dense stereo matching method can be found in Figures 3a and 3b.
Figure 3: (a) Sparse stereo matching results received with the presented method and (b) dense results received from a belief propagation based
algorithm. The color scale corresponds to the disparity in pixels. [13]
The shown sparse example is precisely what we intend to apply in this research, which only finds disparity labels for a set of selected corner
features. The color that is displayed for these features corresponds to the magnitude of the found disparity, with blue hues representing small and
red hues representing large disparity values. The method used for the dense example is the gradient-based belief propagation algorithm that was
(Figure 2 labels: image planes 1 and 2 with epipolar lines 1 and 2, the projections p1 and p2 of the scene point p, the projection centers O1 and O2 joined by the baseline, the epipoles, and the epipolar plane.)
5. 5/10
employed by Schauwecker and Klette [14] and Schauwecker et al. [15]. The results of this algorithm are dense disparity maps that assign a
disparity label to all pixels in the left input image.
Although sparse methods provide much less information than common dense approaches, this information can be sufficient for a set of
applications, including the UAV trajectory estimation and obstacle avoidance proposed here in our research.
(3) Feature detection:
In computer vision, a feature detector is an algorithm that selects a set of image points from a given input image. These points are chosen
according to detector-specific saliency criteria. A good feature detector is expected to always select the same points when presented with images
from the same scene. This should also be the case if the viewing position is changed, the camera is rotated or the illumination conditions are
varied. How well a feature detector is able to redetect the same points is measured as repeatability, for which different definitions have been
postulated by Schmid et al. [16] and Gauglitz et al. [17].
Feature detectors are often used in conjunction with feature descriptors. These methods aim at providing a robust identification of the detected
image features, which facilitates their recognition in case that they are re-observed. In our case, we are mainly interested in feature detection and
less in feature description. A discussion of many existing methods in both fields can be found in the extensive survey published by Tuytelaars and
Mikolajczyk [18]. Furthermore, a thorough evaluation of several of these methods was published by Gauglitz et al. [17].
Various existing feature detectors extract image corners. Corners serve well as image features as they can be easily identified and their position
can generally be located with good accuracy. Furthermore, image corners can still be identified as such if the image is rotated, or the scale or
scene illumination are changed. Hence, a reliable corner detector can provide features with high repeatability.
Figure 4: (a) Input image and features from (b) Harris detector, (c) FAST and (d) SURF. [13]
An older but still popular method for corner detection is the Harris detector (Harris and Stephens, 1988). An example of the performance of
this method can be seen in Figure 4b. A computationally less expensive method for detecting image corners is the Smallest Univalue Segment
Assimilating Nucleus (SUSAN) detector that was proposed by Smith and Brady [19].
6. 6/10
A more advanced method that is similar to the SUSAN detector is Features from Accelerated Segment Test (FAST), for which an example is
shown in Figure 4c. One of the most influential blob detection methods is the Scale Invariant Feature Transform (SIFT) by Lowe [20]. For this
method, two Gaussian convolutions with different values for σ are computed for the input image; their difference forms a Difference of Gaussians (DoG), whose extrema across scales mark the feature locations.
A more time-efficient blob detector that was inspired by SIFT is Speeded-Up Robust Features (SURF) by Bay et al. [21], for which an example is
shown in Figure 4d. Instead of using a DoG for detecting feature locations, Bay et al. rely on the determinant of the Hessian matrix, which is known
from the Hessian-Laplace detector (Mikolajczyk and Schmid [22]). Both SIFT and SURF exhibit a very high repeatability, as it has been shown by
Gauglitz et al. [17]. However, what Gauglitz et al. also have demonstrated is that both methods require significant computation time.
In this research we are going to address this gap as well, by designing a slightly modified architecture for the FAST corner detection algorithm.
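The decision rule at the heart of FAST can be stated compactly: a pixel is a corner if a long enough contiguous arc of the 16 pixels on a radius-3 circle around it is uniformly brighter or uniformly darker than the center by a threshold. The following is a didactic pure-Python rendition of that segment test (the common FAST-9 variant) on a synthetic image; it is a sketch of the principle only, not our modified FPGA architecture, and the threshold value is hypothetical.

```python
# Offsets (dr, dc) of the 16 pixels on a radius-3 Bresenham circle, in ring order.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, t=20, n=9):
    """Simplified FAST segment test: (r, c) is a corner if at least n
    contiguous circle pixels are all brighter than img[r][c] + t or all
    darker than img[r][c] - t. Contiguity is checked on the wrapped ring."""
    p = img[r][c]
    ring = [img[r + dr][c + dc] for dr, dc in CIRCLE]
    for sign in (1, -1):                       # brighter arc, then darker arc
        flags = [sign * (q - p) > t for q in ring]
        run = best = 0
        for f in flags + flags:                # doubled so arcs may wrap index 0
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# Synthetic test image: a bright 8x8 block on a dark background. The block's
# corner pixel should fire; an interior pixel and an edge pixel should not.
img = [[0] * 16 for _ in range(16)]
for r in range(4, 12):
    for c in range(4, 12):
        img[r][c] = 100
```

The appeal for hardware is that the test is pure comparisons on a fixed pixel neighbourhood, which maps naturally onto parallel FPGA logic.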
(4) Modeling the overall framework:
These three key elements of our trajectory estimation system will be supplemented by other filters, snippets and SfM modules, namely
Simultaneous Localization and Mapping (SLAM), as depicted in Figure 5.
Figure 5: Processing pipeline of our proposed FPGA implementation
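The EKF sensor-fusion stage of this pipeline can be illustrated with a scalar Kalman filter sketch in Python: the IMU increment drives the prediction, and the vision-based pose acts as the measurement that corrects it. This is a one-dimensional stand-in for the real multi-state EKF, and the noise variances and demo measurements are hypothetical.

```python
def kf_fuse(x, P, u, z, q=0.01, r_meas=0.25):
    """One predict/update cycle of a scalar Kalman filter, sketching how
    the EKF block could fuse an IMU-propagated pose (increment u) with a
    vision-based pose measurement z. q and r_meas are hypothetical
    process and measurement noise variances."""
    # Predict: propagate the state with the IMU increment.
    x_pred = x + u
    P_pred = P + q
    # Update: blend in the vision measurement via the Kalman gain.
    K = P_pred / (P_pred + r_meas)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Repeated fusion drives the variance P toward a small steady-state value.
x, P = 0.0, 1.0
for _ in range(50):
    x, P = kf_fuse(x, P, u=0.1, z=x + 0.1)  # noiseless demo measurements
```

In the real design each of stereo depth, SfM and the IMU would contribute its own measurement model, but the gain computation keeps this same structure.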
The overall architecture will be synthesized on reconfigurable hardware, consisting of field programmable gate arrays (FPGAs) [23], [24]. These
platforms promise to be adequate building blocks for sophisticated devices at affordable cost. They offer heavy parallelism
capabilities and considerable gate counts, and come in low-power packages [25], [26], [27], [28].
Based on the limitations of the existing work, this project is concerned with an efficient implementation of trajectory estimation for the autonomous
navigation of unmanned vehicles, with a special interest in aerial ones. Figure 6 shows the anatomy of the projected overall system to be implemented. It is our
aspired target to realize such an architecture and put it into application in different fields of life, such as aerial imaging, shipping parcels, search &
reconnaissance missions and many more. Moreover, this project will provide us with a locally-built solution that will not be bound to foreign royalties or at
risk of patent infringement claims.
Figure 6: System implementation of the processing at the MAV physical level
(b) References
[1] S. Palmisano, B. Gillam, D. G. Govan, R. S. Allison, and J. M. Harris, "Stereoscopic perception of real depths at large distances," Journal
of Vision, vol. 10, no. 6, pp. 19–19, Jun. 2010.
[2] R. Klette, Concise Computer Vision, 2014th ed. London: Springer.
[3] D. C. Brown, "Decentering distortion of lenses," Photometric Engineering, vol. 32, no. 3, pp. 444–462, 1966.
[4] Itseez, "OpenCV," 2015. [Online]. Available: http://opencv.org. Accessed: Apr. 2, 2016.
[5] J. Y. Bouguet, "Camera Calibration Toolbox for MATLAB," 2013. [Online]. Available: http://vision.caltech.edu/. Accessed: Mar. 3, 2016.
[6] M. Achtelik, T. Zhang, K. Kuhnlenz, and M. Buss, "Visual tracking and control of a quadcopter using a stereo camera system and inertial
sensors," IEEE, 2012, pp. 2863–2869.
[7] D. Pebrianti, F. Kendoul, S. Azrad, W. Wang, And K. Nonami, "Autonomous hovering and landing of a Quad-rotor micro aerial vehicle by
means of on ground stereo vision system," Journal of System Design and Dynamics, vol. 4, no. 2, pp. 269–284, 2010.
(Displaced labels of Figure 5: Feature Detection → Stereo Matching → Local SLAM → EKF Sensor Fusion, split into a low-level and a high-level process for pose estimation. Displaced labels of Figure 6: the FPGA SoC is connected by a serial link to the microprocessor board of the PIXHAWK Cheetah, which runs the low-level flight control software with its PID controller and IMU for pose and attitude; the greyscale cameras (baseline = 11 cm) connect over the USB port, and the quadrotor's (4) motor controllers over the I2C bus.)
[8] L. R. García Carrillo, A. E. Dzul López, R. Lozano, and C. Pégard, "Combining stereo vision and inertial navigation system for a Quad-
Rotor UAV," Journal of Intelligent & Robotic Systems, vol. 65, no. 1-4, pp. 373–387, Aug. 2011.
[9] T. Tomic et al., "Toward a fully autonomous UAV: Research platform for indoor and outdoor urban search and rescue," IEEE Robotics &
Automation Magazine, vol. 19, no. 3, pp. 46–56, Sep. 2012.
[10] A. Harmat, I. Sharf, and M. Trentini, "Parallel tracking and mapping with multiple cameras on an unmanned aerial vehicle," in Intelligent
Robotics and Applications. Springer Science + Business Media, 2012, pp. 421–432.
[11] M. Nieuwenhuisen, D. Droeschel, J. Schneider, D. Holz, T. Läbe, and S. Behnke, "Multimodal obstacle detection and collision avoidance
for micro aerial vehicles," IEEE, pp. 12–7.
[12] S. Shen, Y. Mulgaonkar, N. Michael, and V. Kumar, "Vision-based state estimation for autonomous rotorcraft MAVs in complex
environments," IEEE, 2010, pp. 1758–1764.
[13] K. Schauwecker and A. Zell, "On-board dual-stereo-vision for the navigation of an autonomous MAV," Journal of Intelligent & Robotic
Systems, vol. 74, no. 1-2, pp. 1–16, Oct. 2013.
[14] K. Schauwecker and R. Klette, "A comparative study of two vertical road modelling techniques," in Computer Vision – ACCV 2010
Workshops. Springer Science + Business Media, 2011, pp. 174–183.
[15] K. Schauwecker, S. Morales, S. Hermann, and R. Klette, "A comparative study of stereo-matching algorithms for road-modeling in the
presence of windscreen wipers," IEEE, 2009, pp. 12–7.
[16] C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of interest point detectors," International Journal of Computer Vision, vol. 37, no. 2, pp. 151–172, 2000.
[17] S. Gauglitz, T. Höllerer, and M. Turk, "Evaluation of interest point detectors and feature descriptors for visual tracking," International
Journal of Computer Vision, vol. 94, no. 3, pp. 335–360, Mar. 2011.
[18] T. Tuytelaars and K. Mikolajczyk, "Local invariant feature detectors: A survey," Foundations and Trends® in Computer Graphics and
Vision, vol. 3, no. 3, pp. 177–280, 2007.
[19] S. M. Smith and J. M. Brady, "SUSAN - a new approach to low level image processing," International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
[20] D. G. Lowe, "Object recognition from local scale-invariant features," vol. 2, IEEE, 1999, pp. 1150–2.
[21] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," in Computer Vision – ECCV 2006, Lecture Notes in Computer
Science. Springer Science + Business Media, 2006, pp. 404–417.
[22] K. Mikolajczyk and C. Schmid, "Indexing based on scale invariant interest points," in Proc. IEEE International Conference on Computer Vision (ICCV), 2001.
[23] P. Dang, “VLSI architecture for real-time image and video processing systems," Journal of Real-Time Image Processing, vol. 1, pp. 57–
62, 2006.
[24] T. Todman, G. Constantinides, S. Wilton, O. Mencer, W. Luk, and P. Cheung, “Reconfigurable computing: architectures and design
methods," Computers and Digital Techniques, IEE Proceedings, Vol. 152, No. 2, pp. 193–207, 2005.
[25] A. Ahmad, B. Krill, A. Amira, and H. Rabah, “Efficient architectures for 3-D HWT using dynamic partial reconfiguration," Journal of
Systems Architecture, Vol. 56, No. 8, pp. 305–316, 2010.
[26] A. Ahmad, B. Krill, A. Amira, and H. Rabah, “3-D Haar wavelet transform with dynamic partial reconfiguration for 3-D medical image
compression," in Biomedical Circuits and Systems Conference, 2009. BioCAS 2009. IEEE, 2009, pp. 137–140.
[27] A. Ahmad and A. Amira, “Efficient reconfigurable architectures for 3-D medical image compression," in Field-Programmable Technology,
2009. FPT 2009. International Conference on, 2009, pp. 472–474.
[28] B. Krill, A. Ahmad, A. Amira, and H. Rabah, “An efficient FPGA-based dynamic partial reconfiguration design flow and environment for
image and signal processing IP cores," Signal Processing: Image Communication, Vol. 25, No. 5, pp. 377–387, 2010.
Metodologi
Methodology
1. Overview
In our research approach, we decided to make MATLAB our main research workhorse instead of adopting a direct Hardware Description Language (HDL)
prototyping strategy, which has proven laborious, unreliable and rather obsolete.
This choice of MATLAB remained evident after we examined our two computer vision techniques (i.e., Stereo Vision Depth
and SfM) and concluded that their bulk consists of heavy mathematical processes and algorithms.
The two processes can be intertwined very efficiently in Simulink, where we can graphically abstract, visualize and reconfigure both system chains,
then hand over the task to MATLAB for further research.
We intend to build our prototype SoC around a Xilinx mid-range FPGA. These are known for their better support of MATLAB integration and for
their advanced computer vision readiness, especially since the release of the new Xilinx design suite, renamed "Vivado", driven by the power of
the Xilinx development boards.
2. Project Flow Graph
In a step-wise outline, we are planning the following research methodology path, as shown in Figure 7.
Figure 7: Project flow graph
(Figure 7 steps:
(1) Identification of the major components of the Safe Trajectory Estimation block;
(2) Dissection: break the main algorithms into discrete fragments (to be compatible with RTL design);
(3) Virtualization: write each fragment in MATLAB (plain code) / Simulink (GUI);
(4) Validation: verify the execution of each algorithm using MATLAB / Simulink;
(5) Simulation: run a test bench for the critical situations encountered;
(6) Benchmarking: perform measurements and comparisons in order to gauge the efficiency (metrics) of our SW / HW solutions; if the benchmark does not PASS, return to the earlier steps, otherwise continue;
(7) Design & Synthesis: define the way to export into the FPGA fabric;
(8) Assemble & Deploy the resulting design into one single SoC deliverable prototype.)
Hasil Penyelidikan
Research Outcomes
1. Jangkaan Hasil Kajian
Expected Research Outcomes
The autonomous navigation of unmanned vehicles in general is the end result that this research aspires to achieve. Upon completion, this
research project is also expected to lead to the following results and deliverables:
(1) A full-fledged System-on-Chip for safe trajectory estimation for the autonomous driving of unmanned vehicles.
(2) An advanced on-board architecture to navigate unmanned vehicles without human intervention.
(3) An optimized execution of the Stereo Vision Depth and SfM processes on a hardware platform.
(4) A definition of the technical limitations of the Stereo Vision Depth and SfM algorithms on hardware platforms.
(5) The elaboration of novel techniques to identify surrounding objects using computer vision.
(6) A description of the system taxonomy, with a set of recommendations on best design practices for subsequent works.
2. Hasil Kajian Terkini
Latest Research Outcomes
As indicated in the methodology chart, the first key task in our research was the identification of the different components of the system and how they
correlate. This is one of the primordial works in any design flow, and it required drafting the block diagram that composes the backbone of
our overall system.
Based on the theory and literature experience accumulated during the first period of this research, we have come out with an all-encompassing block
diagram that is workable and which exhibits high coherence between the two main modules, Stereo Vision Depth and SfM, within its anatomy.
The configuration in Figure 8 intertwines both core modules with the peripherals of the unmanned vehicle and with the other essential mathematical and
control processes. Figure 8 shows how this block diagram has been designed.
Figure 8: Block diagram of the overall design
As discussed previously, we have set up two cameras in a stereo configuration with an 11 cm baseline. To begin using MATLAB in the process of
recovering depth from camera images, we elaborated the following code, which computes and compares two or more views of the same scene.
For experimentation purposes, we took two still pictures of the VASYD Lab at UTHM (Malaysia) using our camera. The pictures were of the
same scene, but taken from two points of view 11 cm apart along the horizontal, just as they would be if we had two cameras mounted in a stereo
configuration. The output of this experimentation is a 3-D point cloud, where each 3-D point corresponds to a pixel in one of the images.
Stereo image rectification projects images onto a common image plane in such a way that the corresponding points have the same row coordinates.
This process is useful for stereo vision, because the 2-D stereo correspondence problem reduces to a 1-D problem:
(Figure 8 labels: Onboard Controller, IMU, Camera 1, Camera 2, Stereo Vision Depth, Structure from Motion, Obstacle Avoidance, EKF Data Fusion, SLAM Planner, LQR Controller, FPGA, UAV.)
(Input images: VASYD_left.jpg and VASYD_right.jpg)
(1) Load the stereoParameters object, which is the result of calibrating the camera using either the stereoCameraCalibrator app or the
estimateCameraParameters function:
% Load the stereoParameters object.
load('VASYDStereoParams.mat');
% Visualize camera extrinsics.
showExtrinsics(stereoParams);
(2) Create Video File Readers and the Video Player:
Create System Objects for reading and displaying the video.
videoFileLeft = 'VASYD_left.avi';
videoFileRight = 'VASYD_right.avi';
readerLeft = vision.VideoFileReader(videoFileLeft, 'VideoOutputDataType', 'uint8');
readerRight = vision.VideoFileReader(videoFileRight, 'VideoOutputDataType', 'uint8');
player = vision.DeployableVideoPlayer('Location', [20, 400]);
(3) Read and Rectify Video Frames:
The frames from the left and the right cameras must be rectified in order to compute disparity and reconstruct the 3-D scene. Rectified images have
horizontal epipolar lines, and are row-aligned. This simplifies the computation of disparity by reducing the search space for matching points to one
dimension. Rectified images can also be combined into an anaglyph, which can be viewed using red-cyan stereo glasses to see the 3-D effect.
frameLeft = readerLeft.step();
frameRight = readerRight.step();
[frameLeftRect, frameRightRect] = rectifyStereoImages(frameLeft, frameRight, stereoParams);
figure;
imshow(stereoAnaglyph(frameLeftRect, frameRightRect));
title('Rectified Video Frames');
(4) Compute Disparity:
In rectified stereo images, any pair of corresponding points is located on the same pixel row. For each pixel in the left image, we compute the distance to
the corresponding pixel in the right image. This distance is called the disparity, and it is inversely proportional to the distance of the corresponding world
point from the camera.
frameLeftGray = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);
disparityMap = disparity(frameLeftGray, frameRightGray);
figure;
imshow(disparityMap, [0, 64]);
title('Disparity Map');
colormap jet
colorbar
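The disparity-to-depth relation used in the next step can be made explicit with a short Python sketch: for a rectified pair, Z = f · b / d, where f is the focal length in pixels, b the baseline and d the disparity. Only the 11 cm baseline comes from our rig; the 700 px focal length is a hypothetical stand-in for a calibrated value.

```python
def depth_from_disparity(d_pixels, focal_px, baseline_m):
    """Depth of a world point from its stereo disparity: Z = f * b / d.
    Disparity is inversely proportional to depth, which is why the
    disparity map doubles as a near-obstacle indicator."""
    if d_pixels <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / d_pixels

# Baseline of 11 cm as in our stereo rig; 700 px focal length is a
# hypothetical value for a VGA camera, not a calibrated one.
z_near = depth_from_disparity(64, focal_px=700, baseline_m=0.11)  # ~1.2 m
z_far = depth_from_disparity(4, focal_px=700, baseline_m=0.11)    # ~19 m
```

This is also why the short 11 cm baseline limits useful depth resolution at long range: far points all collapse into the smallest few disparity values.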
(5) Reconstruct the 3-D Scene:
Reconstruct the 3-D world coordinates of points corresponding to each pixel from the disparity map.
points3D = reconstructScene(disparityMap, stereoParams);
% Convert to meters and create a pointCloud object
points3D = points3D ./ 1000;
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
% Create a streaming point cloud viewer
player3D = pcplayer([-3, 3], [-3, 3], [0, 8], 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');
% Visualize the point cloud
view(player3D, ptCloud);
(6) Process the Rest of the Video:
The steps described above can be applied to every frame of the video once we use a frame grabber, which is not the case in this experimentation.
while ~isDone(readerLeft) && ~isDone(readerRight)
% Read the frames.
frameLeft = readerLeft.step();
frameRight = readerRight.step();
% Rectify the frames.
[frameLeftRect, frameRightRect] = rectifyStereoImages(frameLeft, frameRight, stereoParams);
% Convert to grayscale.
frameLeftGray = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);
% Compute disparity.
disparityMap = disparity(frameLeftGray, frameRightGray);
% Reconstruct 3-D scene.
points3D = reconstructScene(disparityMap, stereoParams);
points3D = points3D ./ 1000;
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
view(player3D, ptCloud);
% Display the rectified left frame.
step(player, frameLeftRect);
end
% Clean up.
reset(readerLeft);
reset(readerRight);
release(player);
Kemajuan Penyelidikan
Research Progress
BAB (CHAPTER) | PERATUS SIAP (PERCENTAGE OF COMPLETION) | CATATAN (REMARKS)
INTRODUCTION | 50 % | The final thesis introduction will take most of its source from this original introduction, plus some final editing.
LITERATURE REVIEW | 70 % | The literature review is an ongoing process, and we expect this chapter to be edited frequently with every advance around photogrammetry.
RESEARCH METHODOLOGY | 40 % | Our methodology flowchart has been set up, and the conception and virtualization of the algorithms have started, in parallel with their evaluation.
RESULTS & ANALYSIS | 20 % | The results & analysis progress so far is in the form of block diagram identification and a few MATLAB experiments.
CONCLUSION | 10 % |
Nota: Sila masukkan lampiran jika ruangan tidak mencukupi
Note: Please use an attachment if the provided space is not enough
Masalah Yang Memberi Kesan Kepada Kemajuan Penyelidikan Dan Langkah Yang Diambil Bagi Mengatasi Masalah Tersebut.
Problems That Affect Research Progress And Remedial Actions Taken To Resolve It
The research has just rolled out, but this part is by far the most decisive, because here we define our direction for the next year. The problem, in that
sense, was to come to a decision about the right approach to use for our safe trajectory estimation project. The whole project can fall apart if we don't
carefully pick the techniques and components to be used, and if we fall into over-ambition and wishful thinking.
To be sure we were on the right path, we reviewed dozens of similar projects done around the world, and determined where they stalled in the
design process and what errors they committed, so that we do not repeat them in ours.
Their recommendations regarding the importance of stereo image stability, offline localization and power consumption were decisive in our
choosing the Structure from Motion technique as a way to mitigate the irregularities that may be produced by the Stereo Vision Depth technique when
working alone. For the issue of power consumption, we were tempted to try a novel configuration of Compressed Sensing (CS) using Orthogonal
Matching Pursuit, which researchers have successfully integrated on FPGAs.
A more trivial problem concerned the technicalities of our MATLAB version, as we had to find and download a handful of missing libraries and a
particular toolbox that is essential for the Stereo Vision Depth technique. We also had to make sure there was no contentious matter regarding the
patent rights of the authors of those libraries, because using them in our FPGA prototype without the consent of their intellectual property owners
could entail royalty payments or compensation.
12. 12/10
The same care was taken when it came to the standard stereo images used to calibrate and benchmark our Stereo Vision Depth algorithm.
These images are recognized by the computer vision research community as ideal for testing, gauging and comparing Stereo Vision modules, but
their usage should be free and legal before we can include them in our future test bench.
The problem of finding the right baseline between the two cameras was also tackled. We had to resort to a nuts-and-bolts reasoning of the stereo
camera configuration, and tune the distance until we obtained an acceptable, close-to-accurate distance between the camera plane and the
objects being investigated in the scene.
Putting the MATLAB code together was another ordeal, because of the many issues pertaining to the multiplication of matrices versus vectors,
which goes back to whether a multiplication is 3-D or 2-D. It was primordial to know which multiplication is which, in order to obtain the valid result
and not be tricked by a wrong result that would nevertheless seem correct.
BAHAGIAN D
PART D
AKTIVITI PELAJAR
STUDENT ACTIVITIES
Pembentangan Kertas Kerja, Menghadiri Seminar, dll.
Papers presented, seminar attended, etc.
1. FKEE Hari Transformasi Minda:
- Poster Presentation: With a poster titled "Simulation & Analysis of Different DCT Techniques for Image Compression on MATLAB"
2. Publisher's Talk: Research Best Practices (Dr Wong Woei Fuh) @UTM Skudai
3. Chairman Lecture Series : "The Importance of Practical Engineer In The Industry" @Sultan Ibrahim Banquet Hall, DSI, UTHM
4. Malaysian Technical Universities Conference on Engineering & Technology 2015 (MUCET2015) @KSL JB (for attending the keynote speeches)
5. Short course : Health Monitoring of Civil Structure @UTM Skudai (Faculty of Biomedical)
6. Making HR Technology Relevant to Your Organization @Thistle Hotel JB
7. SolidWorks Innovation Day @Malacca (full-day training)
8. WIEF 2015 (11th World Islamic Economic Forum) - as a delegate representing Morocco @KLCC
9. 2nd IdeaPad (side event of WIEF 2015) - as a presenter of my PowerKasut non-credit project @KLCC
10. Impact & Insights Dialogue: Making an Impact on Education by Hong Leong Foundation @KL
11. 1 AES (ASEAN Entrepreneurship Summit 2015) - in conjunction with ASEAN Summit in Malaysia @KL
12. Social Entrepreneurship Bootcamp by MaGIC (a side event of 1AES) - a 3-day workshop where we transformed a social idea into a business model.
13. CEO Faculty Programme - A talk by Dato' Wei Chuan Beng, Founder of RedONE : The Journey to Entrepreneurship @UTHM
14. Week of LabVIEW Webcast Series (5 sessions of 30 mins each) @National Instruments ASEAN
15. Talk by Prof Simone Hochgrab (Cambridge Univ.): Advances in reacting flow measurements @UTM Skudai
16. Workshop: Characteristic of A Good Literature Review by Prof Abu Bakar bin Mohammad @UTM Skudai (FKE)
17. UTHM Chairman Lecture Series: Building Info Modelling in Facilities Management Perspectives (By Director of Microcorp Technology Sdn Bhd)
18. Transformasi Minda Mahasiswa: Course on Design Technique for 3-D Printing @UTHM (FKEE)
19. How to Finish your Master or PhD Without Correction @Seminar Room, ORICC, UTHM
20. 2016 Offshore Technology Conference Asia (OTC Asia) @KLCC (as a visitor)
21. Wacana Ilmu Siri 1: Understanding Scopus, Google Scholar And ISI Web Of Science @UTHM
22. Seminar Pemeriksa Luar FKEE: Technopreneurship - From Student Project to Startup to Public Listed Company (by Prof Ahmad Fadzli Hani)
23. 11th ITU Symposium on ICT, Environment and Climate Change @Renaissance Hotel Kuala Lumpur
24. Datacloud South East Asia Forum @Zon Regency Hotel, JB
25. Talk by Ir. Shaik Abdul Wahab bin Dato Hj Rahim Director of GEA Sdn Bhd: Site Investigation @UTHM
26. International Seminar on Power and Communication Engineering (iSPACE2016) by FKEE @UTHM
27. 1st FKEE PG Research Conference (1st FKEE PG ResConf) by FKEE @UTHM (presented an article, a poster, and an oral presentation)
Kegiatan Bukan Akademik :
Non-Academic Activities
1. Youth Trailblazers Challenge 2015 @UTM Skudai
2. ALIC (Arabic Language Intensive Course) - taught Arabic language basics in a 6-hour day class @UTM Skudai (Faculty of Islamic Civilization)
3. Malaysians United Run 2015 - organized by Institut Onn Jaafar (IOJ) @KL
4. Kolej Kediaman Perwira's Festival Keamanan:
- Bengkel Bahasa Perancis (French Language Workshop) - taught French basics in a 2-hour night class
5. ICMF 2015 (International Cultural Mega Festival) - as an organizing committee member @UTM Skudai
6. Kolej Kediaman Perwira's "Gembur Kasih 5.0" activity @KK Perwira Taman Simbiosis
7. UTHM International Cultural Evening @Sultan Ibrahim Banquet Hall, DSI, UTHM
8. FKEE Jalinan Muafakat 2.0 @Padang B (Padang Ragbi), UTHM
9. Temasya Sukan Perwira @UTHM Stadium
10. UTHM Radio: invited three times to talk (3-hour slots each):
- Shared tips on how to improve English, and my experience of being abroad.
Penganugerahan / Penghargaan :
Recognitions / Awards
1. FKEE Hari Transformasi Minda:
- 2 Minutes Idea: Winner of both 1st & 2nd Place.
2. Kolej Kediaman Perwira's Festival Keamanan:
- Larian Keamanan - finished 4th in this cross-country run around the KK Perwira vicinity.
3. 3 Minutes Thesis Competition 2016:
- Winner of 1st Place (Master Students Category)
4. Pidato Antarabangsa Bahasa Melayu Piala Perdana Menteri (PABM) in Putrajaya:
- Top 15 in Malaysia (International Students Category)
5. 2nd Kazan OIC Entrepreneurship Forum (International Competition) in Kazan, Republic of Tatarstan, Russia:
- Selected to pitch in front of the president of the Republic of Tatarstan.