Unmanned aerial vehicles (UAVs) have great potential for monitoring, detecting, and fighting forest fires due to their maneuverability, operational range, and improved safety over manned aircraft. This paper reviews technologies for UAV-based forest fire monitoring, detection, and fighting, which have progressed over the last decade. Section 1 discusses the advantages of UAVs for this application. Section 2 provides an overview of existing UAV systems for forest fire monitoring, detection, and fighting. Section 3 focuses on key technologies such as fire detection and image processing, as well as challenges. The conclusion suggests future directions for computer vision-based forest firefighting using UAVs.
Analysis of Fall Detection Systems: A Review (ijtsrd)
Since falls are a major public health problem among older people, the number of systems aimed at detecting them has increased dramatically over recent years. This work presents an extensive literature review of fall detection systems, including comparisons among various kinds of studies. It aims to serve as a reference for both clinicians and biomedical engineers planning or conducting field investigations. Challenges, issues, and trends in fall detection were identified during the review. The number of studies using context-aware techniques is still increasing, but there is a new trend towards the integration of fall detection into smartphones, as well as the use of machine learning methods in the detection algorithm. We have also identified challenges regarding performance under real-life conditions, usability, and user acceptance, as well as issues related to power consumption, real-time operation, sensing limitations, privacy, and the recording of real-life falls. Nikita Vidua | Prof. Avinash Sharma, "Analysis of Fall Detection Systems: A Review", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 4, Issue 1, December 2019. URL: https://www.ijtsrd.com/papers/ijtsrd29467.pdf | Paper URL: https://www.ijtsrd.com/computer-science/other/29467/analysis-of-fall-detection-systems-a-review/nikita-vidua
REMOTE SENSING AND GEOGRAPHIC INFORMATION SYSTEMS (AM Publications)
Remote sensing technology's increasing accessibility helps us observe, research, and learn about our globe in ways we could only imagine a generation ago. This paper guides the reader toward a deeper knowledge of the historical, conceptual, and practical uses of remote sensing, which increasingly complements GIS technology. It briefly covers the benefits, history, and technology of remote sensing, the integration of GIS and remote sensing, and their applications. Remote sensing (RS) is also used to map predicted and actual species distributions and the dominant ecosystem canopy.
White Spot Syndrome Virus Detection in Shrimp Images using Image Segmentation... (Muni Sankar)
This document describes a method for detecting white spot syndrome virus (WSSV) in shrimp using image segmentation techniques. Images of shrimp are captured with a camera system and preprocessed with techniques such as histogram adjustment and wavelet denoising. The images are then segmented using k-means clustering to group pixels into clusters representing healthy and infected shrimp. Edge detection is also used to extract structural features for identification. The method aims to provide accurate and fast WSSV detection compared to other techniques such as biosensors. Results of the image segmentation and clustering steps are presented for sample shrimp images, identifying clusters corresponding to different conditions.
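The k-means step described above can be sketched in a few lines. This is a minimal illustration of clustering pixel intensities into "dark" and "bright" groups, not the paper's implementation; the toy image values and the function name are assumptions.

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=20, seed=0):
    """Cluster 1-D pixel intensities with plain k-means.

    `pixels` is a flat array of grayscale values; returns (labels, centers).
    """
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Recompute each center as the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, centers

# Toy "image": dark background (~30) and bright spots (~200).
img = np.array([30, 32, 28, 200, 198, 31, 205, 29], dtype=float)
labels, centers = kmeans_pixels(img, k=2)
```

On a real shrimp image the same routine would run over color or texture features per pixel, and the cluster assignments would form the segmentation mask.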
- Drone data and big spatial data can be used as input for species distribution modelling, but this requires additional processing to extract useful species and environmental data from images.
- Digital image processing techniques can be used to obtain information on vegetation types and indices from drone images.
- Running a species distribution model in the Biodiversity and Climate Change Virtual Laboratory involves selecting species occurrence data, environmental layers, a modelling algorithm, and evaluating the results. Climate change projections can then be run to predict impacts on suitable habitat into the future.
To Establish Evacuation Decision-Making Selection Modes of Aboriginal Tribes ... (IJERA Editor)
In this study I utilize the concepts of "environmental vulnerability" and "evacuation behaviors among minority groups," apply the evacuation selection mode generated from public hazard perception within a geographic information system, and analyze residents' movement paths during and after a disaster using composite technology. This allows me to revise the suggested service scope and capacity of evacuation sites in the regions investigated in this study and to provide minority groups with an optimal selection mode.
SkinCure: An Innovative Smart Phone Based Application to Assist in Melanoma E... (sipij)
Melanoma spreads through metastasis, and it has therefore proven to be highly fatal. Statistical evidence reveals that the majority of deaths resulting from skin cancer are caused by melanoma. Further investigations have shown that patient survival rates depend on the stage of the disease; early detection and intervention of melanoma imply higher chances of cure. Clinical diagnosis and prognosis of melanoma are challenging, since the processes are prone to misdiagnosis and inaccuracy due to doctors' subjectivity. This paper proposes an innovative and fully functional smartphone-based application to assist in early melanoma detection and prevention. The application has two major components: the first is a real-time alert to help users prevent skin burn caused by sunlight, for which a novel equation to compute the time for skin to burn is introduced; the second is an automated image analysis module comprising image acquisition, hair detection and exclusion, lesion segmentation, feature extraction, and classification. The proposed system uses the PH2 dermoscopy image database from Pedro Hispano Hospital for development and testing. The database contains a total of 200 dermoscopy images of lesions, including normal, atypical, and melanoma cases. Experimental results show that the proposed system is efficient, classifying the normal, atypical, and melanoma images with accuracies of 96.3%, 95.7%, and 97.5%, respectively.
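The lesion segmentation stage in a pipeline like the one above is often bootstrapped with a global intensity threshold, since lesions are typically darker than surrounding skin. The sketch below uses Otsu's method; this is a standard technique chosen for illustration, not the paper's actual segmentation algorithm, and the toy patch values are invented.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the intensity threshold maximizing between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1]          # pixels below the threshold
        w1 = total - w0          # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / w0
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy grayscale patch: dark lesion (~40) on lighter skin (~180).
patch = np.array([[40, 42, 180], [45, 41, 178], [182, 179, 181]], dtype=float)
t = otsu_threshold(patch)
lesion_mask = patch < t   # lesion pixels are darker than the threshold
```

The resulting binary mask would then feed the feature-extraction stage (border, color, and texture features).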
The document lists Shamik Tiwari's research publications and academic activities. It includes 5 published journal articles, 1 book chapter, and 2 accepted journal articles. It also lists that he coordinates academic monitoring, curriculum development, guides Ph.D. students, delivered lectures, and serves as a reviewer for several journals. He has also completed many online courses and achieved high student feedback for his online teaching.
Mark R. Yoder's research focuses on developing science and technology for disaster risk reduction, including hazard forecasting, remote sensing for damage assessment, and understanding natural hazards through computational models and data analysis. His work applies these approaches especially to earthquake hazards through developing the Virtual Quake simulator and using machine learning on simulated and remote sensing data. The overarching goal is to advance fundamental science while creating practical tools to improve public safety during natural disasters.
FFO: Forest Fire Ontology and Reasoning System for Enhanced Alert and Managem... (dannyijwest)
Forest fires or wildfires pose a serious threat to property, lives, and the environment. Early detection and mitigation of such emergencies, therefore, play an important role in reducing the severity of the impact caused by wildfire. Unfortunately, there is often an improper or delayed mechanism for forest fire detection which leads to destruction and losses. These anomalies in detection can be due to defects in sensors or a lack of proper information interoperability among the sensors deployed in forests. This paper presents a lightweight ontological framework to address these challenges. Interoperability issues are caused due to heterogeneity in technologies used and heterogeneous data created by different sensors. Therefore, through the proposed Forest Fire Detection and Management Ontology (FFO), we introduce a standardized model to share and reuse knowledge and data across different sensors. The proposed ontology is validated using semantic reasoning and query processing. The reasoning and querying processes are performed on real-time data gathered from experiments conducted in a forest and stored as RDF triples based on the design of the ontology. The outcomes of queries and inferences from reasoning demonstrate that FFO is feasible for the early detection of wildfire and facilitates efficient process management subsequent to detection.
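The triple-and-rule pattern described above can be sketched without any ontology tooling. The predicate names (`ffo:hasTemperature`, `ffo:locatedIn`, `ffo:alertLevel`), sensor IDs, and the 60-degree threshold below are all hypothetical stand-ins; a real deployment would use an RDF library such as rdflib with SPARQL queries and an OWL reasoner rather than plain tuples.

```python
# Sensor readings stored as (subject, predicate, object) triples.
triples = [
    ("sensor:12", "ffo:locatedIn", "zone:north"),
    ("sensor:12", "ffo:hasTemperature", 78.5),
    ("sensor:31", "ffo:locatedIn", "zone:south"),
    ("sensor:31", "ffo:hasTemperature", 24.0),
]

def query(store, pred, test):
    """Return subjects whose value for `pred` satisfies `test`, a stand-in
    for a SPARQL FILTER over the ontology's sensor readings."""
    return [s for (s, p, o) in store if p == pred and test(o)]

def infer_alerts(store, threshold=60.0):
    """Toy reasoning rule: any sensor above `threshold` raises a fire alert,
    materialized as new triples."""
    hot = query(store, "ffo:hasTemperature", lambda v: v > threshold)
    return [(s, "ffo:alertLevel", "high") for s in hot]

alerts = infer_alerts(triples)
```

The point of the shared vocabulary is that heterogeneous sensors all emit triples against the same predicates, so one rule covers them all.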
This document discusses drone technology and its various environmental applications. It begins with an overview of different types of drones based on size and function. It then outlines many potential uses of drones for environmental monitoring, such as assessing forest health, tracking wildlife, monitoring fires and pollution, and precision agriculture. The document also notes that drones can serve important early warning functions and be deployed quickly in disaster situations. While regulatory concerns exist around privacy and misuse, the document argues that drones can be effective environmental management tools when used for public benefit.
IJRET-V1I1P1 - Forest Fire Detection Based on Wireless Image Processing (ISAR Publications)
This document proposes an intelligent forest fire detection system using image processing and artificial intelligence techniques. It discusses how traditional fire detection methods are inadequate and outlines the proposed system. The system would use temperature sensors, a GPS module, and small satellites to continuously monitor forests for fires, detect fires rapidly, and transmit location alerts to aid fire station response before fires spread severely. If a temperature threshold is exceeded, the system would transmit an alert along with the fire's GPS location to orbiting small satellites and ultimately to fire stations. The goal is early and accurate fire detection and localization to minimize ecological and economic damage from forest fires.
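The threshold-and-alert loop summarized above reduces to a small check per reading. This is a hedged sketch only: the sensor read and satellite uplink are stubbed out, and the 60 °C threshold, coordinates, and payload fields are illustrative assumptions rather than values from the paper.

```python
FIRE_THRESHOLD_C = 60.0   # illustrative threshold, not the paper's value

def check_reading(temp_c, lat, lon):
    """Return an alert payload when the temperature exceeds the threshold,
    otherwise None. The payload is what would be uplinked to the satellite."""
    if temp_c > FIRE_THRESHOLD_C:
        return {"alert": "fire", "temp_c": temp_c, "lat": lat, "lon": lon}
    return None

# One hot reading and one normal reading from stubbed field sensors.
payload = check_reading(85.2, lat=45.1234, lon=-121.5678)
quiet = check_reading(21.0, lat=45.1300, lon=-121.5600)
```

In the proposed system the non-None payload, including the GPS fix, is what gets relayed via small satellites to the fire station.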
Dear Writer, Please do a PowerPoint presentation on my .docx (randyburney60861)
Dear Writer,
Please do a PowerPoint presentation on my paper that was completed. I am attaching my paper so can have an idea what to do.
Please include speaker notes in the presentation.
Using PowerPoint develop a 22-slide presentation that demonstrates your mastery of the Program Outcomes by discussing the results of your Individual Project. The following slides should be presented:
1) Title Slide
2) Overview
3) Research Problem
4) Research Intent
5) Research Question(s)/Hypothesis
6) Literature Review Key Points
7) Methodology
8) Results
9) Conclusions
10) Recommendations
11) Summary
Chapter 1
Introduction
Preparation for and response to natural disasters is a serious logistical challenge. Significant resources are used by intergovernmental, governmental, and non-governmental organizations to prepare for and respond to the effects of natural disasters. When a natural disaster occurs, such organizations mobilize resources to respond. Recently, technological advancements in autonomous, semiautonomous, and unmanned vehicles have increased the utility of such vehicles while reducing costs. The increased use of UAVs has created a new dimension to search and rescue (SAR) operations. In real life, the use of UAVs can be beneficial in cases where rapid decisions are required or the use of manpower is limited (Boehm et al., 2017).
Natural disasters have significantly damaged transportation infrastructure, including railways and roads. Additionally, barrier lakes and landslides pose a serious threat to property and life in affected areas. When infrastructure is interrupted, rescue teams with heavy rescue equipment, rescue vehicles, and supplies face challenges in reaching disaster-hit areas. As a result, efforts to provide humanitarian aid are hampered (Tatsidou et al., 2019). Traditional approaches to responding to natural disasters are unable to meet the requirements of disaster decision making. UAVs are well equipped to navigate areas affected by natural disasters and provide humanitarian aid.
This study aims to explore the viability of using the MQ-8B Fire Scout in providing humanitarian aid in areas affected by natural disasters. The document provides a literature review on the use of UAVs in providing humanitarian aid when natural disasters have occurred. The study compares the viability of using MQ-8B to MH-60 in conducting rescue operations in areas affected by disasters.
Significance of the Study
This study was conducted to determine the effectiveness of using UAVs in providing humanitarian aid in areas affected by natural disasters. The study assists in developing general knowledge and bridging the existing gap in providing humanitarian aid using UAVs. The findings of this study increase knowledge of the effectiveness of UAVs in responding to natural disasters and provide more insights on useful methods to respond to affected areas. Ultimately, these insights cou.
This document discusses how wireless sensor networks (WSNs) can help manage disasters. WSNs consist of low-cost, low-power sensor nodes that cooperate to sense the environment and communicate wirelessly. They are proposed as an alternative to satellite monitoring for disaster management. WSNs can provide early warnings of disasters and help search and rescue operations by locating victims. The document outlines key design considerations for using WSNs for disaster detection, monitoring and response, including how to deploy the sensors and which sensing modalities to use based on the type of disaster.
The document describes the development of an autonomous drone integrated with artificial intelligence and object detection capabilities. The drone is designed to follow a person or vehicle and respond during emergencies. It uses a Raspberry Pi computer along with cameras, sensors and a flight controller. The methodology involves building the drone, programming AI algorithms for object detection and tracking, and testing the autonomous functions. The goal is to create a user-friendly drone that can assist during emergencies or be used for surveillance through features like face recognition and response to detected signals.
Radar and optical remote sensing data evaluation and fusion; a case study for...rsmahabir
The recent increase in the availability of spaceborne radar in different wavelengths with multiple polarisations provides new opportunities for land surface analysis. This research effort explored how different radar data, and derived texture values, independently and in combination with optical imagery influence land cover/use classification accuracies for a study site in Washington, DC, USA. Two spaceborne radar images, Radarsat-2 C-band and PALSAR L-band quad-polarised radar, were registered with ASTER optical data for this study. Traditional methods of classification were applied to various components and combinations of this data set, and overall and class-specific thematic accuracies obtained for comparison. The results for the two despeckled radar data sets were quite different, with Radarsat-2 obtaining an overall accuracy of 59% and PALSAR 77%, while that of the optical ASTER was 90%. Combining the original radar and a variance texture measure increased the accuracy of Radarsat-2 to 71% but that of PALSAR only to 78%. One of the sensor fusions of optical and radar obtained an accuracy of 93%. For this location, radar by itself does not obtain classification accuracies as high as optical data, but fusion with optical imagery provides better overall thematic accuracy than the optical independently, and results in some useful improvements on a class-by-class basis. For those regions with high cloud cover, quad-polarisation radar can independently provide viable results, but it may be wavelength-dependent.
Unmanned Aerial Vehicle (System Engineering) (Omkar Rane)
This document is a report submitted by three students - Jayesh Suryawanshi, Omkar Rane, and Chaitanya Deshpande - at MIT Academy of Engineering Alandi, under the guidance of their professor Vinayak Balkrishna Kulkarni. The report discusses unmanned aerial vehicles (UAVs) and focuses on the NETRA drone system developed in India. It provides classifications of UAVs based on size and range/endurance. The report also describes the applications of UAVs, characteristics of the NETRA drone such as its dimensions, payload capacity, and performance specifications, and concludes with references.
Use of drone for field observation - Presentation of applications in emergenc... (mapali)
Presentation organized at the Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand, as part of our GeoHealth Thai Platform project.
This document outlines a proposed design for an SMS-based mobile health emergency geo-location notification system. It begins with an introduction discussing how mHealth applications can help address healthcare challenges such as response time and access. It then reviews existing emergency notification systems like basic 911, enhanced 911, and OnStar, noting limitations such as an inability to handle silent emergency calls. The document proposes designing a system that uses SMS to transmit location data from any mobile phone to ambulances, addressing issues with existing approaches. It presents research questions on creating a system that assists those with speech/hearing impairments and scales across technologies.
The document discusses natural disasters in India and Asia, and how remote sensing and GIS technologies can help mitigate disasters. Space technology, including communication and earth observation satellites, provides disaster warning, relief coordination, and data for pre- and post-disaster planning. GIS integrates spatial and non-spatial data and allows analysis for various stages of disaster management. Remote sensing provides historical data on past hazards to assess risk and aid mitigation planning.
Energy Efficient Animal Sound Recognition Scheme in Wireless Acoustic Sensors...ijwmn
Wireless sensor network (WSN) has proliferated rapidly as a cost-effective solution for data aggregation and measurements under challenging environments. Sensors in WSNs are cheap, powerful, and consume limited energy. The energy consumption is considered to be the dominant concern because it has a direct and significant influence on the application’s lifetime. Recently, the availability of small and inexpensive components such as microphones has promoted the development of wireless acoustic sensor networks (WASNs). Examples of WASN applications are hearing aids, acoustic monitoring, and ambient intelligence. Monitoring animals, especially those that are becoming endangered, can assist with biology researchers’ preservation efforts. In this work, we first focus on exploring the existing methods used to monitor the animal by recognizing their sounds. Then we propose a new energy-efficient approach for identifying animal sounds based on the frequency features extracted from acoustic sensed data. This approach represents a suitable solution that can be implemented and used in various applications. However, the proposed system considers the balance between application efficiency and the sensor’s energy capabilities. The energy savings will be achieved through processing the recognition tasks in each sensor, and the recognition results will be sent to the base station.
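An on-node frequency feature of the kind the scheme relies on can be as cheap as one FFT and a peak pick. The sketch below is illustrative only: the synthetic 440 Hz "call", the sample rate, and the species frequency bands are invented, and the actual scheme's features and classifier may differ.

```python
import numpy as np

def dominant_freq(signal, sample_rate):
    """Return the dominant frequency (Hz) of a mono signal via the FFT,
    the kind of cheap feature a sensor node can compute locally."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

def classify_call(signal, sample_rate, bands):
    """Map the dominant frequency to a (hypothetical) species band."""
    f = dominant_freq(signal, sample_rate)
    for name, (lo, hi) in bands.items():
        if lo <= f < hi:
            return name
    return "unknown"

# Synthetic 440 Hz "call" sampled at 8 kHz; the species bands are made up.
sr = 8000
t = np.arange(sr) / sr
call = np.sin(2 * np.pi * 440 * t)
bands = {"songbird": (300, 600), "owl": (100, 300)}
label = classify_call(call, sr, bands)
```

Only the short `label` result would be radioed to the base station, which is where the energy saving over streaming raw audio comes from.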
On 12 May 2017 we held a session at the Fundación Ramón Areces with ISGlobal and Unitaid on vector-borne diseases, such as malaria, among others.
A BRIEF OVERVIEW ON DIFFERENT ANIMAL DETECTION METHODS (sipij)
Research on animal detection plays a vital role in many real-life applications. Important applications include preventing animal-vehicle collisions on roads, preventing dangerous-animal intrusion into residential areas, studying the locomotive behaviour of a targeted animal, and many more. There are limited areas of research related to animal detection. In this paper we discuss some of these areas for the detection of animals.
2021 Top Ten Cited Article - International Journal of Wireless & Mobile Netwo... (ijwmn)
This document summarizes a research article that compares the performance of three routing protocols (DSDV, AODV, and DSR) in mobile ad hoc networks. The article uses a network simulator (NS-2) to evaluate the protocols based on metrics like throughput, packet delivery ratio, and average end-to-end delay. The results show that reactive protocols (AODV and DSR) generally outperform the proactive protocol (DSDV) due to lower control overhead and better adaptation to high mobility. DSR achieves the best performance overall by minimizing the number of required floods.
EXPLORING THE EFFECTIVENESS OF THE MQ-8B FIRE SCOUT TO PROVISION H.docx (lmelaine)
This document is a graduate capstone project submitted by Henry Vascones to Embry-Riddle Aeronautical University in partial fulfillment of a Master of Science in Aeronautics degree in 2020. The project explores the effectiveness of using the MQ-8B Fire Scout unmanned aerial vehicle to provide humanitarian aid in areas affected by natural disasters. The literature review discusses previous research on the origins and applications of UAVs, their use for cargo delivery, and the impacts of weather on UAV operations. The study aims to determine if UAVs like the MQ-8B provide a more expedient and cost-effective means of delivering humanitarian aid compared to traditional manned aircraft like the MH-60 Sea Hawk. It
EXPLORING THE EFFECTIVENESS OF THE MQ-8B FIRE SCOUT TO PROVISION H.docx (mecklenburgstrelitzh)
EXPLORING THE EFFECTIVENESS OF THE MQ-8B FIRE SCOUT TO PROVISION HUMANITARIAN EFFORTS POST NATURAL DISASTERS
by
Henry Vascones
A Graduate Capstone Project Submitted to the College of Aeronautics,
Department of Graduate Studies, in Partial Fulfillment
of the Requirements for the Degree of
Master of Science in Aeronautics
Embry-Riddle Aeronautical University
Worldwide Campus
March 2020
EXPLORING THE EFFECTIVENESS OF THE MQ-8B FIRE SCOUT TO PROVISION HUMANITARIAN EFFORTS POST NATURAL DISASTERS
by
Henry Vascones
This Graduate Capstone Project was prepared under the direction of the candidate’s
Graduate Capstone Project Chair, Dr. Jeremy Hodges,
Worldwide Campus, and has been approved. It was submitted to the
Department of Graduate Studies in partial fulfillment
of the requirements for the degree of
Master of Science in Aeronautics
Graduate Capstone Project:
_________________________________________
Jeremy Hodges, PhD.
Graduate Capstone Project Chair
March 2020
Acknowledgements
I would like to thank those who assisted and guided me throughout my time in the master’s program at Embry-Riddle Aeronautical University Worldwide.
Abstract
Scholar: Henry Vascones
Title: Exploring the Effectiveness of the MQ-8B Fire Scout to provision Humanitarian Efforts Post Natural Disasters
Institution: Embry-Riddle Aeronautical University
Degree: Master of Science in Aeronautics
Year: 2020
Unmanned Aerial Vehicles (UAVs) have been increasingly used to provide humanitarian aid during natural disasters. This study will evaluate the effectiveness of the Northrop Grumman MQ-8B Fire Scout in providing humanitarian aid after natural disasters have occurred. The ability to utilize the MQ-8B will be analyzed by determining its ability to conduct humanitarian operations in areas that are affected by natural disasters and largely inaccessible using existing traditional methods. The viability of using UAVs in such operations, in terms of capabilities and costs, will be compared to using response utility trucks. The study will determine the viability of using UAVs in responding to natural disasters while at the same time providing economic benefits. The use of UAVs will be compared to existing response approaches such as the use of emergency response utility vehicles and manned flight. The study will also develop a model to show the costs and benefits of utilizing the MQ-8B in responding to natural disasters. A quantitative approach will be used to collect data from existing literature. Information on UAVs and manned systems will be obtained from various sources, including the Insurance Information Institute, the Federal Aviation Administration (FAA), the National Center for Biotechnology Information (NCBI), the Occupational Safety and Health Administration (OSHA), and the Transportation Research Board, to help develop a solution to these problems.
IV
Table of Contents
Page
Graduate Capstone Project Com.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
REVIEW

A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques

Chi Yuan, Youmin Zhang, and Zhixiang Liu
Abstract: Because of their rapid maneuverability, extended operational range, and improved personnel safety, unmanned aerial vehicles (UAVs) with vision-based systems have great potential for monitoring, detecting, and fighting forest fires. Over the last decade, UAV-based forest fire fighting technology has shown increasing promise. This paper presents a systematic overview of current progress in this field. First, a brief review of the development and system architecture of UAV systems for forest fire monitoring, detection, and fighting is provided. Next, technologies related to UAV forest fire monitoring, detection, and fighting are briefly reviewed, including those associated with fire detection, diagnosis, and prognosis, image vibration elimination, and cooperative control of UAVs. The final section outlines existing challenges and potential solutions in the application of UAVs to forest firefighting.

Key words: forest fire, fire monitoring, detection, and fighting, image processing, remote sensing, unmanned aerial vehicles.
1. Introduction

Forests play a number of important roles in nature. They purify water, stabilize soil, cycle nutrients, moderate climate, and store carbon. They also provide habitats for wildlife and nurture environments rich in biological diversity. Economically, forests sustain the forest products industry, which supports hundreds of thousands of jobs and contributes billions of dollars to a country's economic wealth.

Unfortunately, every year, millions of hectares of forest are destroyed by forest fires, and hundreds of millions of dollars are spent to extinguish these fires (McAlpine and Wotton 1993; Martínez-de Dios et al. 2008). Although wildfires do help to form new forests, it is difficult to ensure that uncontrolled fires do not spread into places where they may threaten sensitive ecological systems or human infrastructure and lives (Hirsch and Fuglem 2006). Forest fires have become a serious natural hazard (Mandallaz and Ye 1997; Kolarić et al. 2008); therefore, fighting forest fires is considered one of the most important roles in the protection and preservation of natural resources (Gonzalez et al. 2006; Kolarić et al. 2008).

Early detection and suppression of forest fires are crucial to minimizing the destruction they may cause, owing to their rapid convective propagation and long combustion cycle (Lin et al. 2014). Massive efforts have been devoted to monitoring, detecting, and rapidly extinguishing forest fires before they become too large. Traditional forest fire monitoring and detection methods employ either mechanical devices or human observers to survey the surroundings, but these methods can be both dangerous and costly in terms of the required human resources (Vipin 2012).
Remote sensing has become one of the most frequently utilized tools for effective forest survey and management (Leckie 1990; Chisholm et al. 2013). Rapid advances in electronics, computer science, and digital camera technologies have allowed computer vision based remote sensing systems to provide a promising substitute for conventional forest fire monitoring and detection systems (Vipin 2012). Current remote sensing approaches to forest fire monitoring and detection can be grouped into three categories: ground-based systems, manned aerial vehicle based systems, and satellite-based systems (Den Breejen et al. 1998). However, each of these systems presents different technological and practical problems. Ground measurement equipment may suffer from
Received 13 August 2014. Accepted 4 March 2015.
C. Yuan, Y.M. Zhang, and Z.X. Liu. Department of Mechanical and Industrial Engineering, Concordia University, 1455 de Maisonneuve Blvd. West,
Montreal, QC H3G 1M8, Canada.
Corresponding author: Youmin Zhang (e-mail: youmin.zhang@concordia.ca).
783
Can. J. For. Res. 45: 783–792 (2015) dx.doi.org/10.1139/cjfr-2014-0347 Published at www.nrcresearchpress.com/cjfr on 12 March 2015.
Can. J. For. Res. Downloaded from www.nrcresearchpress.com by YALE UNIV on 07/19/15. For personal use only.
limited surveillance ranges. Satellite systems are less flexible in
their path planning and technology updates, and their temporal
and spatial resolution may be too low for detailed data capture
and operational forest fire fighting (Olsson et al. 2005). Manned
aerial vehicles are typically large and expensive. Moreover, the life
of the pilot can be potentially threatened by hazardous environ-
ments and operator fatigue (Beard et al. 2006).
Unmanned aerial vehicles (UAVs) with computer vision based
remote sensing systems are an increasingly realistic option, pro-
viding rapid, mobile, and low-cost alternatives for monitoring,
detecting, and even fighting forest fires. The integration of UAVs
with remote sensing techniques is also able to meet the critical
spatial, spectral, and temporal resolution requirements (Everaerts
2008; Berni et al. 2009), offering the potential to serve as a power-
ful supplement to existing methods (Olsson et al. 2005; Hunt et al.
2008). In addition, UAVs allow the execution of long-term, monot-
onous, and repeated tasks beyond human capabilities. This has
led to increased worldwide attention to UAV forest fire applica-
tions in recent years (Ambrosia and Zajkowski 2015; Merino et al.
2015; Shahbazi et al. 2014; Sharifi et al. 2014; Bosch et al. 2013;
Merino et al. 2012, etc.).
This paper reviews existing UAV-based forest fire monitoring,
detection, and fighting technologies, with an emphasis on moni-
toring and detection techniques as forest fire fighting using UAVs
has not yet received much attention in the literature. Section 1
assesses the advantages of applying UAVs in forest fire fighting.
Section 2 reviews several monitoring, detection, and fighting sys-
tems and outlines a general description of these kinds of systems.
Section 3 focuses on key technologies that can be applied to auto-
matic forest fire monitoring, detection, and fighting, as well as
summarizing some of the more challenging research and devel-
opment issues in this field. Finally, section 4 concludes with sug-
gestions of possible future directions for computer vision based
forest fire fighting through the use of UAVs.
2. General description of UAV-based forest fire
fighting systems
2.1. Review of the development of UAV-based forest fire
monitoring, detection, and fighting systems
Recent decades have seen tremendous progress in the field
of automatic forest fire fighting technologies (Ambrosia and
Zajkowski 2015; Merino et al. 2015). Despite this, relatively few research papers to date have considered the application of UAVs in this field. Most research has been carried out in the United States and
Europe. Table 1 provides a brief overview of existing UAV-based
forest fire fighting systems.
The earliest application of UAVs for gathering information on
forest fires can be traced back to 1961 by the United States Forest
Services (USFS) Forest Fire Laboratory (Wilson and Davis 1988). In
1996, a “Firebird 2001” UAV with a visual camera and an onboard
imaging system was used for forest fire imaging in Missoula, Mon-
tana (Ambrosia and Zajkowski 2015). Later, between 2003 and
2010, a Wildfire Research and Applications Partnership (WRAP)
project was conducted by the USFS and National Aeronautics
and Space Administration (NASA), which was intended to aug-
ment underserved forest fire applications (Ambrosia et al. 2011;
Tranchitella et al. 2007). In 2006, the NASA “Altair” and the
“Ikhana” (Predator-B) UAVs demonstrated their capabilities in
supporting near-real-time forest fire imaging missions in the
western United States (Ambrosia et al. 2011). In 2011, with the
support of the West Virginia Department of Forestry (WVDF) and
NASA, a research team from the University of Cincinnati used the
Marcus “Zephyr” UAV system to test the capability of its forest fire
detection systems (Charvat et al. 2012). Moreover, in the First Re-
sponse Experiment (FiRE) project (Ambrosia 2002), wildfire detec-
tion was demonstrated by an unmanned aerial system (UAS),
verifying the efficacy of using UAVs for real-time disaster data
collection. Overall data collection, telemetry, geoprocessing, and
delivery were achieved within 15 min using this system.
While the FiRE project used a single, powerful UAV with sophis-
ticated sensors, another project undertaken in Europe used a
team of low-cost UAVs with onboard visual and infrared cameras
instead. In this project, UAVs worked as local sensors supplying
images and data at close ranges. Experiments using multiple
UAVs for surveillance, detection, localization, confirmation, and
measurement of forest fires have also been carried out (Ollero
et al. 2005; Martínez-de Dios et al. 2007; Merino et al. 2012, 2015).
The first regulated use of a UAV in fire service was done in Hun-
gary in 2004, testing the system's ability to detect forest fires
(Restas 2006). Additionally, Pastor et al. (2011) validated the use of
a Sky-eye UAV system to detect forest fires in Spain. In 2011, two
UAVs with visual and infrared cameras were deployed to evaluate
their ability to detect and localize two real fire scenarios in the
Netherlands (Van Persie et al. 2011).
In addition to the practical testing of UAVs, there has also been
a significant amount of simulated research on their fire monitor-
ing and detection abilities. Casbeer et al. (2006) investigated the
practicality of using a team of low-altitude, low-endurance UAVs
Table 1. Characteristics of reviewed UAV-based forest fire monitoring, detection, and fighting systems.
Field test type | References | UAV class | Onboard cameras (resolution) | Engine power | Payload capacity (kg)
Near operational | Ambrosia et al. 2003 | 1 fixed-wing | 1 thermal (720×640) | Fuel | 340
Operational | Ambrosia et al. 2011 | 1 fixed-wing | 4 mid-IR (720×640) | Fuel | >1088
Near operational | Martínez-de Dios et al. 2005 | 2 rotary-wing, 1 airship | 1 visual (320×240), 1 IR (160×120) | Fuel, Electric | 3.5
Operational | Van Persie et al. 2011 | 1 fixed-wing, 1 rotary-wing | 1 visual, 1 IR | Fuel | —
Near operational | Tranchitella et al. 2007 | 1 fixed-wing | 1 visual, 1 IR | Fuel | <34
Near operational | Ambrosia and Zajkowski 2015 | 2 fixed-wing | 1 visual, 1 IR | Fuel | 25; 250
Near operational | Esposito et al. 2007 | 2 fixed-wing | 1 visual (1920×1080), 1 thermal (160×120), 1 NIR (752×582), 1 VNIR (128×128) | Fuel, Electric | 250; <2.6
Near operational | Jones 2003; Jones et al. 2006 | 1 fixed-wing | 1 visual (720×480) | Gas | 0.68
Near operational | Pastor et al. 2011 | 1 rotary-wing | 2 visual (4000×2656; 2048×1536), 1 thermal (320×240) | Fuel | 907
Near operational | Restas 2006 | 1 fixed-wing | 1 visual | Electric | —
Near operational | Charvat et al. 2012 | 1 fixed-wing | 1 visual (656×492) | Electric | 5.5
Note: IR, infrared; NIR, near-IR; VNIR, visible–NIR; —, not mentioned.
for cooperative surveillance and tracking of the propagation of
large forest fires. A numerical propagation model for forest fire
monitoring and detection was verified in simulation using a dy-
namic UAV model with six degrees of freedom. Despite this, ver-
ification of UAV effectiveness in actual forest fire fighting
activities is still required in future investigations. One recent test
saw a team of Lockheed Martin and Kaman UAVs successfully
demonstrate their ability to aid in fire extinguishing operations
(UAS VISION 2014), although technical details on this operation
have not yet been made available.
Although the aforementioned research demonstrates the pos-
sibility of using UAVs to detect and even extinguish forest fires,
development of such systems, including related hardware, soft-
ware, and application strategies, is still minimal. Further investi-
gation is needed on all aspects of their use, including suitable
system platforms, remote sensing payloads and sensors, and algo-
rithms for autonomous guidance, navigation, and control (GNC),
as well as on using UAVs in combination with other remote sens-
ing techniques. This urgent need motivates the present survey, which aims to encourage further research and development in this important field.
2.2. General system design architecture and requirements
of UAV-based automatic forest fire monitoring, detection,
and fighting systems
Based on the above review of the existing literature and re-
search, the basic elements of a general UAV-based forest fire sur-
veillance and fighting system can be illustrated in Fig. 1, which
covers the functions of monitoring (finding a potential fire), de-
tection (triggering an alarm to inform firefighting operators or
initialize further diagnosis and prognosis), diagnosis (determin-
ing the fire's location and extent and tracking its evolution), and
prognosis (predicting the future evolution of the fire based on
real-time wind and firefighting conditions). These functions are
conducted using either a single UAV or a team of UAVs (with
different kinds of sensors) along with a central ground station.
The objectives are to use UAVs to track fires, predict their evolu-
tion, and provide real-time information to human firefighters and
(or) to execute firefighting with UAVs.
Terminology for forest fire monitoring and detection is not yet standardized, and definitions differ across the literature. To avoid confusion, definitions for forest fire monitoring,
detection, diagnosis, and prognosis are provided in this review by
following established traditions in the more general field of con-
dition monitoring, fault detection, and diagnosis (Zhang and Jiang
2008).
Forest fire monitoring is defined as monitoring for the possible
occurrence of fire before it has occurred, whereas fire detection is
the detection of an actual fire in progress. Because smaller fires
are easier to control and extinguish, detection must be done as
fast and as early as possible. Fire diagnosis aims to find detailed
information about the fire such as its location and extent. Fire
prognosis aims to track and predict the evolution of a fire in real
time using information provided by the onboard remote monitor-
ing sensors installed on UAVs.
To achieve these goals, UAV-based forest fire monitoring,
detection, diagnosis, and prognosis systems typically include
the following: (i) various frames and sensors, including global
positioning system (GPS) receivers, inertial measurement units
(IMUs), and cameras, all of which aid in firefighting; (ii) specific
algorithms and strategies for fire monitoring, detection, diagno-
sis, and prognosis; (iii) GNC systems for both single and multiple
UAVs; (iv) cooperative localization, deployment, and control sys-
tems for UAV fleets to optimally cover fire areas (such systems are
based on the real-time information provided by the onboard vi-
sual (for daytime) and infrared (for both nighttime and daytime)
monitoring sensors and their associated image and (or) signal
processing algorithms); and (v) a dedicated ground station that
includes equipment for communication, ground computation, vi-
Fig. 1. Conceptual UAV-based forest fire monitoring, detection, and fighting system.
sualization of fire detection, tracking, and prediction with auto-
matic fire warning or alarm, as well as all equipment necessary for
the safe and efficient operation of UAVs.
UAV forest fire fighting missions can generally be broken down
into three stages: fire search, fire confirmation, and fire observa-
tion (Merino et al. 2010). In the fire search stage, the ground con-
trol station divides the mission for each UAV according to the
characteristics of terrain and the capabilities of individual UAVs,
including their onboard sensors. Following this, either a single
UAV or a fleet of homogeneous and (or) heterogeneous (fixed
wing and rotary wing) search UAVs (Sharifi et al. 2014, 2015b) are
deployed to patrol the surveillance region along respective pre-
planned paths. Meanwhile, fire segmentation methods are ap-
plied on each UAV to automatically identify fire by employing the
fire detection sensors, including visual and infrared cameras. The
fire confirmation stage begins after a fire is detected. The ground
control station commands the search UAV(s) to hover at a safe
distance, while other UAVs are also sent to the detected fire loca-
tion to make confirmation if needed. The fire observation stage
starts if the fire is confirmed to be real; otherwise, the fire search
stage is resumed. In the fire observation stage, UAVs are com-
manded to continuously obtain information about the fire. This
requires multiple synchronous images be obtained by the UAVs
from different points of view. These are then delivered to ground
operators and (or) firefighting managers or service UAVs to better
guide firefighting efforts.
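As a sketch, the three-stage mission flow above (search, confirmation, observation, falling back to search when a detection is not confirmed) can be expressed as a small state machine. The stage names and transition arguments are illustrative, not taken from the cited systems.

```python
from enum import Enum, auto

class Stage(Enum):
    SEARCH = auto()        # UAVs patrol pre-planned paths over the region
    CONFIRMATION = auto()  # additional UAVs verify a detected fire
    OBSERVATION = auto()   # UAVs continuously observe a confirmed fire

def next_stage(stage, fire_detected=False, fire_confirmed=None):
    """Transition rules for the three mission stages described above."""
    if stage is Stage.SEARCH:
        return Stage.CONFIRMATION if fire_detected else Stage.SEARCH
    if stage is Stage.CONFIRMATION:
        if fire_confirmed is None:          # confirmation still in progress
            return Stage.CONFIRMATION
        # A confirmed fire is observed; a false alarm resumes the search.
        return Stage.OBSERVATION if fire_confirmed else Stage.SEARCH
    return Stage.OBSERVATION                # observation continues
```

In a fleet, the ground control station would run this logic per detected fire event while individual UAVs are tasked accordingly.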
3. Technologies used in automatic forest fire monitoring,
detection, and fighting UAVs
The advantages of vision-based techniques, including the capture of intuitive, informative, and reliable real-time data, a large
detection range, and convenient verification and recording abili-
ties, have made them a major research topic in the field of forest
fire monitoring and detection (Li et al. 2013). As shown in Table 2,
the past decade has seen a series of research studies conducted
using vision-based UAV systems for forest fire monitoring and
detection in near-operational field tests, though actual firefight-
ing tests remain scarce.
In addition, a number of studies have used other platforms and offline videos to monitor and detect fires, as illustrated in Tables 3 and 4. Although these methodologies were not originally intended for UAVs and do not consider UAV-specific problems, e.g., image vibrations induced by flight and cooperative control of multiple UAVs, they nevertheless offer transferable insights into UAV-based forest fire fighting applications because they address the same core issues in vision-based fire detection. To reduce the cost of devices and personnel and to save experimental time, the effectiveness of a fire detection approach is normally tested and verified in advance on forest fire videos; only afterwards are the practical issues (image vibration, etc.) affecting onboard images considered and solved. These detection methodologies are therefore likely to be applicable to UAVs conducting operational and (or) near-operational forest fire detection missions.
3.1. Vision-based technologies for automatic forest fire
detection
Over the last decade, image processing techniques have become
widely used for forest fire detection. Based on the spectral range of
the camera used, vision-based fire detection technologies can gen-
erally be classified as either visual or infrared fire detection sys-
tems (Çetin et al. 2013). Fire detection can be divided into either
flame detection or smoke detection in terms of the actual object
being detected (Li et al. 2013). Most importantly, the color, motion,
and geometry of the fire constitute the three dominant character-
istic features of fire detection (Çelik et al. 2007b).
3.1.1. Fire detection with visual images
As shown in Tables 3 and 4, the color, motion, and geometry of the detected fire are commonly used features in existing investigations, with color used mostly for segmenting fire areas (Rudz et al. 2009; Mahdipour and Dadkhah 2012). As outlined in Table 3, consider-
able effort has gone into the development of offline video-based
fire detection. Chen et al. (2004) use color and motion features
based on an RGB (red, green, blue) model to extract real fire
(flame) and smoke in video sequences. Töreyin et al. (2005) pro-
pose a real-time algorithm combining motion and color clues
with fire flicker analysis on wavelet domain to detect fire in video
sequences. Töreyin et al. (2006a) combine a generic color model
based on RGB color space, motion information, and Markov pro-
cess enhanced fire flicker analysis to create an overall fire detec-
tion system. Later, the same fire detection strategy is employed to
detect possible smoke samples, which is used as an early alarm for
fire detection (Töreyin et al. 2006b). In Çelik and Demirel (2009), a
rule-based generic color model for flame pixel classification is
Table 2. Vision-based techniques for forest fire monitoring, detection, and fighting using UAVs in near-operational field tests.
Detection method | Spectral bands (resolution) | Used features | Flame | Smoke | Geolocation | Propagation prediction | Image stabilization | References
Georeferenced uncertainty mosaic | IR (320×240) | Color | ✓ | × | ✓ | ✓ | ✓ | Bradley and Taylor 2011
Statistical data fusion | Visual (752×582), Mid-IR (256×256) | Color | ✓ | × | ✓ | ✓ | ✓ | Martínez-de Dios et al. 2011
Training method | IR (160×120) | Color | ✓ | × | × | × | ✓ | Martínez-de Dios and Ollero 2004
Training method | Visual (320×240), Far-IR (—) | Color | ✓ | × | ✓ | × | ✓ | Merino et al. 2005, 2006
Training method | Visual (320×240), Far-IR (—) | Color | ✓ | × | ✓ | ✓ | ✓ | Merino et al. 2010, 2012
— | Visual (720×640), IR (—) | Color | ✓ | × | ✓ | × | — | Ambrosia 2002; Ambrosia et al. 2003
Genetic algorithm | IR (320×240) | Color | ✓ | × | × | × | × | Kontitsis et al. 2004
Training method | Visual (752×582), IR (160×120) | Color | ✓ | × | ✓ | × | — | Martínez-de Dios et al. 2005, 2006
— | Visual (—), IR (—) | Color | ✓ | × | ✓ | × | — | Ollero et al. 2005; Ollero and Merino 2006; Maza et al. 2010, 2011
Note: IR, infrared; ✓, considered; ×, not considered; —, not mentioned.
presented, with experimental results showing significant im-
provement in detection performance. In Günay et al. (2009), an
approach based on four subalgorithms for wildfire detection at
night is addressed and an adaptive active fusion method is ad-
opted to linearly combine decisions from subalgorithms. Finally,
Günay et al. (2012) develop an entropy-functional-based online adaptive decision fusion framework and apply it to detecting the presence of wildfires in video.
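Most of the visual detectors reviewed above start from a chromatic rule. A minimal flame-pixel rule in the spirit of the RGB models above (e.g., Chen et al. 2004) can be sketched as follows; the thresholds are assumed values for illustration, not parameters from the cited work.

```python
def is_flame_pixel(r, g, b, r_min=190, diff_min=20):
    """Red-dominant chromatic rule: flame pixels satisfy R > G > B,
    are bright in the red channel, and show a clear red-blue gap.
    r_min and diff_min are illustrative thresholds (assumptions)."""
    return r > g > b and r >= r_min and (r - b) >= diff_min

def flame_mask(image):
    """image: list of rows of (R, G, B) tuples -> binary mask (1 = flame)."""
    return [[1 if is_flame_pixel(*px) else 0 for px in row] for px_row in [0] for row in image] if False else \
           [[1 if is_flame_pixel(*px) else 0 for px in row] for row in image]
```

In practice such a color mask is only a first stage; the motion and flicker analyses cited above are applied on top of it to reject fire-colored but static objects.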
Most of the above-mentioned research focuses on the detection
of forest fires by flame, whereas smoke is also an important fea-
ture in the early and precise detection of forest fires. Tables 3
and 4 show some studies that focus on smoke detection. Chen
et al. (2006) continue their work from Chen et al. (2004), proposing
a chromaticity-based static decision rule and a diffusion-based
dynamic characteristic decision rule for smoke pixel judgment.
Experimental results indicate that this approach can provide an
authentic and cost-effective solution for smoke detection. In Yu
et al. (2009), a real-time smoke classification method using texture
analysis is developed, and a back-propagation neural network is
adopted as a discriminating model. Experiments prove that the
proposed method is capable of differentiating smoke and non-
smoke videos with both a quick fire alarm and low false alarm
rate. Yuan (2008) designs a smoke detection system that uses an accumulative motion model based on integral images to quickly estimate the smoke motion orientation; because estimation accuracy affects subsequent critical decisions, the orientation is accumulated over time to compensate for inaccuracy and reduce the rate of false alarms. A fuzzy logic method is em-
ployed to detect smoke in a real-time alarm system in Ho and Kuo
(2009). Spectral, spatial, and temporal features are used for ex-
tracting smoke and for helping with the validation of smoke.
Experimental results show that smoke can be successfully de-
tected in different circumstances (indoor, outdoor, simple or com-
plex background image, etc.). Although the results are promising,
further development is still needed to integrate such findings
with existing surveillance systems and implement them in actual
operations. An approach using static and dynamic characteristic
analysis for forest fire smoke detection is proposed by Surit and
Chatwiriya (2011). Zhang et al. (2007) present an Otsu-based
method to detect fire and smoke while segmenting fire and smoke
together from the background. In Yu et al. (2010), a color-based
decision rule and an optical flow algorithm are adopted for ex-
tracting the color and motion features of smoke, with experimen-
tal results showing significantly improved accuracy of video
smoke detection.
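The motion cue exploited by these smoke detectors can be illustrated with a crude frame-differencing sketch that accumulates the vertical drift of moving pixels across a sequence (smoke rises, so the accumulated drift is negative in image coordinates, where y grows downward). This is a simplified stand-in for the accumulative motion model of Yuan (2008); the change threshold is an assumed value.

```python
def moving_pixels(prev, curr, thresh=25):
    """Pixels whose gray value changed by more than `thresh` between frames."""
    return [(x, y)
            for y, (row_p, row_c) in enumerate(zip(prev, curr))
            for x, (p, c) in enumerate(zip(row_p, row_c))
            if abs(c - p) > thresh]

def upward_drift(frames, thresh=25):
    """Accumulate the vertical drift of the motion centroid across a
    grayscale frame sequence; consistently negative drift (rising motion)
    is evidence of smoke rather than, e.g., swaying vegetation."""
    drift = 0.0
    last_cy = None
    for prev, curr in zip(frames, frames[1:]):
        pts = moving_pixels(prev, curr, thresh)
        if not pts:
            continue
        cy = sum(y for _, y in pts) / len(pts)
        if last_cy is not None:
            drift += cy - last_cy
        last_cy = cy
    return drift
```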
Although various forest fire and smoke detection methods have
been developed experimentally, at present, only a few have been
carried out in near-operational environments (see Table 2). Most
of these have been conducted by a research team from the Uni-
versity of Seville in Spain. These experiments use multiple UAVs,
with color information chosen as the key fire detection feature.
Many researchers in recent years have adopted intelligent
methods to reduce false alarm rates and the cost of sensors. As
illustrated in Table 3, the adopted algorithms (Çelik et al. 2007b;
Hou et al. 2009; Yu et al. 2010; Ham et al. 2010, etc.) include artifi-
cial neural networks, fuzzy logic, and fuzzy neural networks. Ex-
perimental results show that although these approaches can
effectively detect fires, the majority of them have not been tested
on UAVs or in real forest fire scenarios.
3.1.2. Fire detection with infrared images
As infrared images can be obtained in conditions of either weak
or no light and smoke is transparent in infrared images, it is
therefore practical to employ infrared cameras for monitoring
and detecting fires in both daytime and nighttime conditions.
Tables 2 and 4 summarize research studies done using infrared
cameras for fire detection. Merino et al. (2005) and Martínez-de
Dios et al. (2005) make use of infrared images to generate binary
images containing fire regions while reducing false alarm rates by
using a training-based threshold selection method (Martínez-de
Dios and Ollero 2004) as the appearance of a fire is a high intensity
area in infrared images. In Bosch et al. (2013), decision fusion is
applied in infrared imaging for judging the occurrence of forest
fires, allowing a variety of anticipated features of the fire to be
obtained, including short-term persistence and long-term in-
crease. Pastor et al. (2006) develop an infrared image processing
Table 3. Offline video forest fire monitoring and detection methodologies using visual cameras.
Detection method | Resolution | Color | Motion | Geometry | Flame | Smoke | References(a)
Statistic method | 320×240; 400×255 | ✓ | × | × | ✓ | × | Cho et al. 2008
Fuzzy logic | 256×256 | ✓ | × | × | ✓ | × | Çelik et al. 2007b
Support vector machine | — | ✓ | ✓ | ✓ | ✓ | × | Zhao et al. 2011
Fuzzy logic | 320×240 | ✓ | ✓ | ✓ | × | ✓ | Ho and Kuo 2009
Wavelet analysis | 320×240 | ✓ | ✓ | ✓ | ✓ | × | Töreyin et al. 2006a
Computer vision | 320×240 | ✓ | ✓ | × | ✓ | × | Qi and Ebert 2009
Wavelet analysis | 320×240 | ✓ | ✓ | × | ✓ | × | Töreyin et al. 2005
Rule-based video processing | — | ✓ | ✓ | × | ✓ | ✓ | Chen et al. 2004
Fourier transform | — | ✓ | ✓ | × | ✓ | × | Jin and Zhang 2009
Bayes and fuzzy C-means | — | ✓ | ✓ | × | ✓ | × | Duong and Tuan 2009
Adaptable updating target extraction | — | ✓ | ✓ | × | ✓ | × | Hou et al. 2010
Histogram-based method | — | ✓ | ✓ | × | ✓ | × | Philips et al. 2002
Fuzzy-neural network | — | ✓ | ✓ | × | ✓ | × | Hou et al. 2009
Statistical method | 176×144 | ✓ | × | × | ✓ | × | Çelik et al. 2007a
Fuzzy finite automata | — | ✓ | ✓ | × | ✓ | × | Ham et al. 2010
Gaussian mixture model | 320×240 | ✓ | ✓ | × | ✓ | × | Chen et al. 2010
Histogram back projection | — | ✓ | × | × | ✓ | × | Wirth and Zaremba 2010
Wavelet analysis | — | ✓ | ✓ | × | × | ✓ | Töreyin et al. 2006b
Adaptive decision fusion | — | ✓ | ✓ | × | × | ✓ | Günay et al. 2012
Accumulative motion model | — | × | ✓ | × | × | ✓ | Yuan 2008
Image processing method | — | ✓ | ✓ | × | × | ✓ | Surit and Chatwiriya 2011
Neural network | 320×240 | ✓ | ✓ | × | × | ✓ | Yu et al. 2010
Note: ✓, considered; ×, not considered; —, not mentioned.
(a) None of these referenced studies addressed issues of fire propagation prediction, geolocation, or image vibration elimination.
method based on linear transformations for precisely calculating
the rate of spread (ROS) of forest fires, while a threshold value
searching criterion is utilized to identify the flame front position.
Ononye et al. (2007) depict a multispectral infrared image process-
ing method. Based on a sequence of image processing tools and
the dynamic data-driven application system (DDDAS) concept, the
proposed method is capable of automatically attaining the forest
fire perimeter, active fire line, and fire propagation tendency. A
multiple artificial neural networks (ANNs) model for an infrared
flame detection system is devised in Huseynov et al. (2007). The
experimental results demonstrate that the proposed method can
contribute to faster training and improvement of the classifica-
tion success rate.
One problem associated with processing images from infrared
cameras is that miniaturized cameras still have low sensitivity
(Martínez-de Dios et al. 2005). This requires an increase in detector
exposure periods to produce higher quality images. In addition,
the high-frequency vibration of UAVs can cause blurring in the
images, which remains a major challenge in their development.
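Since fire appears as a high-intensity region in infrared imagery, the simplest detectors threshold the image and keep the bright pixels. As a sketch, an automatically selected Otsu threshold can stand in for the training-based threshold selection of Martínez-de Dios and Ollero (2004); this assumes 8-bit grayscale input given as nested lists, and is an illustration rather than the cited method.

```python
def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance
    of the 8-bit intensity histogram."""
    hist = [0] * 256
    n = 0
    for row in gray:
        for v in row:
            hist[v] += 1
            n += 1
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0        # pixel count of the dark class
    sum0 = 0.0    # intensity sum of the dark class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = n - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def fire_mask_ir(gray):
    """Binary mask of candidate fire pixels: IR intensities above threshold."""
    t = otsu_threshold(gray)
    return [[1 if v > t else 0 for v in row] for row in gray]
```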
3.1.3. Fusion of visual and infrared images
To improve the accuracy, reliability, and robustness of fire de-
tection algorithms and to reduce the rate of false alarms, visual
and infrared images can be fused together, generally through the
use of fuzzy logic, intelligent, probabilistic, and statistical methods
(as illustrated in Tables 2 and 4).
Arrue et al. (2000) designed a false alarm reduction system composed of infrared image processing, ANNs, and fuzzy logic; in this system, the redundant information shared by visual and infrared images is matched to confirm forest fire alarms. Martínez-de Dios et al. (2006) integrate infrared and visual cameras for fire front parameter measurement by taking advantage of both visual and infrared image processing techniques, although experimental validation was conducted only in a laboratory. In
addition, Martínez-de Dios et al. (2008) describe a forest fire per-
ception system built on computer vision techniques. Visual and
infrared images are combined to calculate a three-dimensional
(3D) fire perception model, allowing the fire evolution to be visu-
alized by remote computer systems.
Although different approaches to image information fusion
have been addressed in the existing research, the issue of how to
optimize the number of features that are utilized in fire detection
is still a challenge. Solving this issue would not only reduce the
computation burden of onboard computers, but also decrease
both the cost of the hardware and the rate of false alarms.
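At its simplest, decision-level fusion of the two channels reduces to agreement between binary detections: raise an alarm only where both the visual mask and the infrared mask flag fire. The cited systems use richer fuzzy-logic, neural, and statistical combinations; the pixel-count alarm rule below is an assumption for illustration.

```python
def fuse_masks(visual_mask, ir_mask):
    """Pixel-wise AND fusion of two binary masks: a pixel counts as fire
    only when both detectors agree, which suppresses single-channel
    false alarms (e.g., fire-colored objects, hot but non-burning areas)."""
    return [[v & i for v, i in zip(vr, ir)]
            for vr, ir in zip(visual_mask, ir_mask)]

def fire_alarm(fused, min_pixels=2):
    """Trigger an alarm when enough fused pixels are flagged (assumed rule)."""
    return sum(map(sum, fused)) >= min_pixels
```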
3.2. Vision-based technologies for automatic forest fire
diagnosis and prognosis
The most common mission in UAV-based forest fire diagnosis is to produce geolocated fire alarms, whereas the most common mission in fire prognosis is to predict the fire's propagation; together, these meet the requirements of operational forest firefighting.
3.2.1. Geolocation of fire — fire diagnosis
After the fire is detected and its information is extracted from
images, the segmented fire area requires geolocation information
for measuring its geometrical features such as fire front location,
fire site width and perimeter, flame length and height, inclina-
tion angle, coordinates of burnt areas, and location of hotspots
(Martínez-de Dios et al. 2011; Casbeer et al. 2005, 2006). Such mea-
surements, together with the projection of gathered images onto
a terrain map, are essential for planning effective and efficient
firefighting strategies (Shahbazi et al. 2014).
UAVs are currently being employed in the automatic geoloca-
tion of forest fires due to their wide coverage, spectral resolution,
and safety (Rossi et al. 2010). Existing automatic geolocation stud-
ies have mainly been conducted by adopting the multisensory
Table 4. Fire monitoring, detection, and fighting methodologies using visual and infrared cameras.
Detection method | Spectral bands (resolution) | Validations (outdoor/indoor/offline) | Adopted features (color/motion/geometry) | Detection objects (flame/smoke) | Propagation prediction | Geolocation | References(a)
Training method | Visual (752×582), Mid-IR (256×256) | ×/✓/× | ✓/×/× | ✓/× | ✓ | ✓ | Martínez-de Dios et al. 2006
Training method | Visual (—), Mid-IR (—) | ✓/×/× | ✓/×/× | ✓/× | ✓ | ✓ | Martínez-de Dios et al. 2008
Images matching | Visual (—), IR (—) | ✓/×/× | ✓/✓/✓ | ×/✓ | × | ✓ | Ollero et al. 1999; Arrue et al. 2000
Data fusion | Visual (—), IR (—) | ×/✓/× | ✓/—/— | ✓/✓ | × | × | Bosch et al. 2013
Neural networks | IR (—) | ×/✓/× | ✓/✓/× | ✓/× | × | × | Huseynov et al. 2007
Dynamic data driven | Multispectral IR (—) | ×/×/✓ | ✓/×/✓ | ✓/× | ✓ | × | Ononye et al. 2007
Training method | Visual (—), IR (—) | ✓/×/× | ✓/×/✓ | ✓/× | ✓ | ✓ | Martínez-de Dios et al. 2008
Note: IR, infrared; ✓, considered; ×, not considered; —, not mentioned.
(a) None of these referenced studies addressed issues of image vibration elimination.
fusion technique for providing the information before, during,
and after wildfires, using onboard vision and telemetry sensors
and navigation units (GPS, IMUs) (Shahbazi et al. 2014). Mean-
while, the location of the UAVs themselves is known through the
use of GPS. The orientation of the UAV's camera is computed by
composing the orientation angles of the pan and tilt system with
the orientation angles of the UAV airframe, which is estimated by
IMUs and compasses. If a camera is calibrated and a digital eleva-
tion map is available, it is possible to obtain the georeferenced
location of an object in the common global coordinate frame from
its position on the image plane (Merino et al. 2006).
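Under strong simplifying assumptions (a calibrated nadir-pointing camera, flat terrain, and exact GPS/IMU pose), the back-projection just described reduces to a few lines; operational systems replace the flat-terrain step with a digital elevation map and compose the full pan-tilt and airframe orientation. All names and parameter values below are illustrative.

```python
import math

def geolocate_pixel(uav_x, uav_y, alt, heading_deg, u, v, focal_px):
    """Project an image pixel to ground coordinates.
    (u, v) are pixel offsets from the principal point; alt is height above
    ground (m); heading rotates the camera frame into the world frame.
    Flat-terrain assumption: ground offset scales with alt / focal length."""
    dx_cam = alt * u / focal_px
    dy_cam = alt * v / focal_px
    h = math.radians(heading_deg)
    dx = dx_cam * math.cos(h) - dy_cam * math.sin(h)
    dy = dx_cam * math.sin(h) + dy_cam * math.cos(h)
    return uav_x + dx, uav_y + dy
```

For example, at 100 m altitude with a 1000-pixel focal length, a 50-pixel offset maps to a 5 m ground offset in the heading direction.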
The past decade has seen considerable effort in the field of
geolocation. Direct geolocation via navigation information is the
most time-effective way to automatically conduct the mission
(Shahbazi et al. 2014). Besides this, generating real-time orthorec-
tification and a geocoded mosaic is of great significance for im-
proving the accuracy of automatic geolocation in UAV systems
(Li et al. 2011; Chou et al. 2010; Wu and Zhou 2006; Qian et al. 2012).
Orthorectified mosaics of the fire area can be combined with the
geolocated UAV images to produce geospatial data for quick ac-
tion on time-critical fire events (Zhou et al. 2005; Wu and Zhou
2006; Zhou 2009; Van Persie et al. 2011; Shahbazi et al. 2014). The
position, ROS, and height of the fire front can also be measured by
adopting a 3D vision-based instrumentation with visual cameras
and navigation sensors (Rossi et al. 2010).
In practice, numerous potential sources of error may result in
inaccurate 3D information. For example, smoke may occlude the
fire scene, while the accuracy of navigation sensors and platforms
may affect the measurement of forest fires. As proposed in
Mostafa and Schwarz (2001) and Habib et al. (2006), proper calibra-
tion of the platform and camera orientation is of great signifi-
cance for improving the accuracy of geolocation. Digital elevation
models (DEMs), ground control points (GCPs), and planimetric
and topographic maps are all potential choices for this applica-
tion (Shahbazi et al. 2014). Computing the errors of object local-
ization by considering the terrain map and errors in the position
and orientation of cameras is described in Martínez-de Dios et al.
(2011). In addition, to decrease the effect of smoke, multiple het-
erogeneous UAVs mounted with different kinds of sensors (visual
and infrared cameras) are deployed for geolocating in the field
experiment (Merino et al. 2006; Martínez-de Dios et al. 2011).
Unfortunately, existing research studies to date have only been
carried out in either laboratory or near-operational environments
(Rossi et al. 2010; Merino et al. 2006; Martínez-de Dios et al. 2011).
Research and testing in operational firefighting conditions is still
needed for further investigation.
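One standard way to combine such heterogeneous estimates, sketched here as an illustration rather than as the method of the cited works, is inverse-variance weighting, in which a sensor degraded by smoke or poor navigation accuracy is simply assigned a larger variance:

```python
import numpy as np

def fuse_estimates(positions, variances):
    """Inverse-variance (weighted least squares) fusion of several
    geolocation estimates of the same fire point.

    positions : list of (x, y) estimates, one per sensor/UAV
    variances : list of scalar variances reflecting each sensor's
                expected error (smoke occlusion, navigation accuracy)
    """
    w = 1.0 / np.asarray(variances)              # per-sensor weights
    p = np.asarray(positions, dtype=float)
    fused = (w[:, None] * p).sum(axis=0) / w.sum()
    fused_var = 1.0 / w.sum()                    # variance of fused estimate
    return fused, fused_var

# A visual-camera fix (less reliable in smoke) and an infrared fix:
# the fused position is pulled toward the more trusted infrared estimate.
fused, var = fuse_estimates([(102.0, 55.0), (98.0, 53.0)], [25.0, 5.0])
```

Note that the fused variance is always smaller than that of the best single sensor, which is the formal motivation for deploying heterogeneous UAV teams.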
3.2.2. Propagation prediction — fire prognosis
Predicting the behavior of fire propagation is essential for de-
veloping quick, effective, and advanced forest fire fighting strate-
gies. Hence, it is important to know the evolution of the fire front,
as well as other properties of the fire such as ROS, fire front
location, flame height, flame inclination angle, fire site width, etc.
ROS, in particular, is one of the most significant parameters in
the characterization of forest fire behavior, as it is directly associ-
ated with fire intensity (Byram 1959) and flame front geometry
(Anderson 1968), two key indicators of the danger levels of fire
propagation.
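Byram's fireline intensity makes this association explicit: I = H w R, where H is the heat yield of the fuel, w the fuel consumed per unit area, and R the ROS, so intensity is linear in the rate of spread. A minimal sketch (the default value of H is an assumption, roughly typical for forest fuels, not a figure from this paper):

```python
def byram_intensity(ros_m_s, fuel_kg_m2, heat_kj_kg=18000.0):
    """Byram's (1959) fireline intensity I = H * w * R, in kW/m.

    ros_m_s    : rate of spread R of the fire front (m/s)
    fuel_kg_m2 : fuel consumed per unit area w (kg/m^2)
    heat_kj_kg : heat yield H of the fuel (kJ/kg); 18000 is an
                 assumed, roughly typical value for forest fuels
    """
    return heat_kj_kg * fuel_kg_m2 * ros_m_s

# Intensity scales linearly with ROS, hence the direct association.
print(byram_intensity(0.05, 1.0))  # 900.0 kW/m
```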
Numerous new methodologies have been developed in recent
decades based on discrete or continuous fire monitoring with the
aid of visual and infrared cameras (Pastor et al. 2006). Infrared
image processing, visual and infrared image fusion, and stereovi-
sion are all applied in the literature to better provide fire propa-
gation information (Pastor et al. 2006; Martínez-de Dios et al.
2006, 2008; Ononye et al. 2007; Rossi et al. 2010).
In addition, many intelligent algorithms have also been adopted
(Jaber et al. 2001; Alonso-Betanzos et al. 2003; HomChaudhuri and
Kumar 2010), including genetic algorithms, expert-, knowledge-,
and rule-based systems, and ANNs. These approaches are exten-
sively applied in forest fire prognosis because of their high accu-
racy, multifunctionality, and reliability in different surroundings.
Karafyllidis (2004) adopts cellular automata for predicting the
spread of forest fires. A dedicated parallel processor that can work
as part of a decision support system is developed by a genetic
algorithm. Olivas (2003) presents a knowledge-based system
(KBS) for forest fire prediction and firefighting decision support.
To estimate the extent and spread of wildfires, a fuzzy segmenta-
tion algorithm that can map fire extent, active fire front, and hot
burn scar is proposed in Li et al. (2007). In Fowler et al. (2009), a
suitable fuzzy rule based system is employed to fulfill forest
fire size prediction, with both these fuzzy rules and membership
functions automatically evolved through genetic algorithms. In
Wendt (2008), a robust data storage and retrieval system is devised
to calibrate input variables for data-driven forest fire prediction.
To improve the effectiveness and efficiency of forest fire spread
predictions, an enhanced prediction scheme that uses recent fire
history and optimization techniques is described in Abdalhaq
et al. (2005), while Denham et al. (2008) address a dynamic data
driven genetic algorithm.
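The cellular-automata idea behind Karafyllidis (2004) can be illustrated with a deliberately simplified grid model (uniform fuel and deterministic nearest-neighbor spread; the cited work additionally encodes terrain and uses genetic algorithms to design the processor):

```python
def step(grid):
    """One update of a minimal forest-fire cellular automaton.
    Cell states: 0 = unburned fuel, 1 = burning, 2 = burned out.
    A fuel cell ignites if any 4-neighbor is burning; a burning
    cell burns out after one step.
    """
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == 1:
                new[i][j] = 2                      # burning -> burned out
            elif grid[i][j] == 0:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and grid[ni][nj] == 1:
                        new[i][j] = 1              # fuel ignites
                        break
    return new

# Ignite the center of a 5x5 fuel bed and advance the front two steps.
g = [[0] * 5 for _ in range(5)]
g[2][2] = 1
g = step(step(g))
```

In practice, each cell's ignition rule is weighted by fuel type, slope, and wind, which is where data-driven calibration such as the genetic algorithms cited above enters.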
3.3. Image vibration elimination
One practical issue in image capture and processing is that
unavoidable turbulence and vibration of UAVs during flight may
change the camera's position, resulting in frequent image motion
and blurred images that degrade detection results and can even
lead to detection failure. To decrease the false alarm rate, image
antivibration should be considered. Electromechanical systems are
helpful in eliminating such vibrations, but these systems are usu-
ally heavy and expensive and produce a residual vibration of their
own (Merino et al. 2010).
Although smaller and less expensive gimbal systems have re-
cently been developed, a simple and low-cost image processing
approach that can be used for image motion calculation and elim-
ination is still in high demand. Existing research on this topic is
still scarce, with Ferruz and Ollero (2000), Ollero et al. (2004), and
Merino et al. (2010) constituting the few identified investigations
of this problem.
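As one example of such a low-cost, software-only approach (a generic technique, not the feature-matching method of the cited works), phase correlation estimates the global translation between consecutive frames so that it can be compensated before fire detection is run:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the global translation between two frames by phase
    correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum gives the (dy, dx) offset.
    """
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12           # keep phase, drop magnitude
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (FFT wrap-around).
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return int(dy), int(dx)

# Synthetic frame and a copy shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -5), axis=(0, 1))
print(estimate_shift(shifted, frame))  # (3, -5)
```

Phase correlation only recovers a rigid translation; rotations and the perspective changes of a maneuvering UAV require the richer feature-based motion models of the works cited above.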
3.4. Cooperative control of UAVs in forest fire monitoring,
detection, and fighting
Because forest fires are highly complex and nonstructured,
it is crucial to use multiple sources of information at different
locations. Furthermore, the rate at which this information is up-
dated may be unsatisfactory if only a single UAV is deployed for
either a single, large-scale forest fire or for multiple forest fires.
More efficient forest fire monitoring, detection, and even fight-
ing can be achieved when a fleet of multiple UAVs is deployed
instead of a single UAV (Chao and Chen 2012; Sharifi et al. 2014,
2015a). However, this also requires that more practical algorithms
for task assignment and cooperative control for multiple UAVs be
developed. These algorithms are intended to achieve efficient co-
operation between vehicles for optimal coverage, as well as de-
ployment for the most efficient fire detection, tracking, and
prediction in the cases of large, multiple, and simultaneous fire
events.
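A toy version of the task-assignment problem (purely illustrative, not an algorithm from the cited works) greedily sends each UAV to the nearest still-uncovered fire:

```python
import math

def assign_uavs(uavs, fires):
    """Greedy nearest-first task assignment: repeatedly commit the
    globally closest (UAV, fire) pair among those still unassigned.
    A stand-in for the cooperative allocation algorithms developed
    in the literature; an optimal assignment would use, e.g., the
    Hungarian method.
    """
    pairs = sorted(
        ((math.dist(u, f), i, j)
         for i, u in enumerate(uavs)
         for j, f in enumerate(fires)),
        key=lambda t: t[0],
    )
    assignment, used_u, used_f = {}, set(), set()
    for _, i, j in pairs:
        if i not in used_u and j not in used_f:
            assignment[i] = j
            used_u.add(i)
            used_f.add(j)
    return assignment

# Two UAVs, two fires: each UAV is matched to its closer fire.
uavs = [(0.0, 0.0), (10.0, 0.0)]
fires = [(9.0, 1.0), (1.0, 1.0)]
print(assign_uavs(uavs, fires))  # {0: 1, 1: 0}
```

Real deployments must additionally handle endurance limits, dynamic fire fronts, and communication constraints, which is precisely why the practical algorithms called for above remain an open problem.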
Several simulations have demonstrated the effectiveness of dif-
ferent methods for UAV coordination (Alexis et al. 2009; Casbeer
et al. 2005, 2006; Phan and Liu 2008; Kumar et al. 2011). Despite
this, experimental or near-operational works with teams of UAVs
are still underdeveloped (Beard et al. 2006; Bradley and Taylor
2011; Merino et al. 2005, Merino et al. 2006; Sujit et al. 2007), and
no operational implementation can be identified in the literature
at present. Numerous technical challenges must be solved prior to
any practical implementation and application of forest fire mon-
itoring, detection, and fighting strategies using multiple UAVs.
Yuan et al. 789
Published by NRC Research Press, Can. J. For. Res. Vol. 45, 2015
4. Conclusions and future work
Through a comprehensive review of the existing literature on
the development of UAVs with computer-vision systems, it is ap-
parent that these systems can be used to avoid the drawbacks of
other land- and space-based approaches. Their potential benefits
for forest fire detection, diagnosis, and prognosis are also demon-
strated.
Because forests are highly complex and nonstructured environ-
ments, the use of multiple sources of information at different
locations is critical. The related issue of using vision sensors and
GPS systems to determine fire location is complicated, with very
few researchers addressing this problem. To implement practical,
useful applications, UAVs with longer endurance are also
required. In addition, fire detection remains difficult, given the
chance of smoke occluding images of the fire, or of flame
analogues such as sunlight, vegetation, and animals causing
either false alarms or alarm failure. Existing
research has demonstrated that the combination of infrared and
visual images can contribute to robust forest fire detection, in-
cluding high detection probability, low false alarm rates, and
enhanced adaptive capabilities in various environmental con-
ditions. However, there is still the challenge of optimizing an
appropriate number of features in image information fusion.
Meanwhile, miniaturized infrared cameras usually have very low
sensitivities, which can also lead to fire alarm failure; thus longer
detector exposure periods are also needed to generate higher
quality images. Finally, the vibration and motion of UAVs often
blur images and cause image capture failure. Therefore, both
hardware and software techniques for image stabilization and
vibration reduction are worthy of further investigation.
Compared with their use in forest fire monitoring and detec-
tion, research and development on UAV-based fire extinguish-
ment is still scarce and lacking in detailed techniques. However,
there is a trend and demand towards more research and develop-
ment of complete solutions for forest fire monitoring, detection,
and fighting. Thus, we can expect to see the further development
of this area in the future.
Acknowledgement
This work is partially supported by the Natural Sciences and
Engineering Research Council of Canada (NSERC). The authors
also thank reviewers, the Associate Editor, and the Editors for
their helpful and valuable comments and suggestions that helped
to improve the quality of the paper significantly.
References
Abdalhaq, B., Cortés, A., Margalef, T., and Luque, E. 2005. Enhancing wild land
fire prediction on cluster systems applying evolutionary optimization tech-
niques. Future Generation Comp. Syst. 21(1): 61–67. doi:10.1016/j.future.2004.
09.013.
Alexis, K., Nikolakopoulos, G., Tzes, A., and Dritsas, L. 2009. Coordination of
helicopter UAVs for aerial forest-fire surveillance. In Applications of intelli-
gent control to engineering systems. Springer, Netherlands. pp. 169–193.
doi:10.1007/978-90-481-3018-4_7.
Alonso-Betanzos, A., Fontenla-Romero, O., Guijarro-Berdiñas, B., Hernández-Pereira, E.,
Andrade, M.I.P., Jiménez, E., Soto, J.L.L., and Carballas, T. 2003. An intelligent sys-
tem for forest fire risk prediction and firefighting management in Galicia.
Expert Syst. Appl. 25(4): 545–554. doi:10.1016/S0957-4174(03)00095-2.
Ambrosia, V. 2002. Remotely piloted vehicles as fire imaging platforms: the
future is here [online]. Available from http://geo.arc.nasa.gov/sge/UAVFiRE/
completeddemos.html [accessed 28 February 2015].
Ambrosia, V.G., and Zajkowski, T. 2015. Selection of appropriate class UAS/
sensors to support fire monitoring: experiences in the United States. In
Handbook of unmanned aerial vehicles. Edited by K.P. Valavanis and
G.J. Vachtsevanos. Springer, Netherlands. pp. 2723–2754. doi:10.1007/978-90-
481-9707-1_73.
Ambrosia, V.G., Wegener, S.S., Sullivan, D.V., Buechel, S.W., Dunagan, S.E.,
Brass, J.A., Stoneburner, J., and Schoenung, S.M. 2003. Demonstrating UAV-
acquired real-time thermal data over fires. Photogramm. Eng. Remote Sens.
69(4): 391–402. doi:10.14358/PERS.69.4.391.
Ambrosia, V.G., Wegener, S., Zajkowski, T., Sullivan, D.V., Buechel, S.,
Enomoto, F., Lobitz, B., Johan, S., Brass, J., and Hinkley, E. 2011. The Ikhana
unmanned airborne system (UAS) western states fire imaging missions: from
concept to reality (2006–2010). Geocarto Int. 26(2): 85–101. doi:10.1080/10106049.
2010.539302.
Anderson, H.E. 1968. Fire spread and flame shape. Fire Technol. 4(1): 51–58.
doi:10.1007/BF02588606.
Arrue, B.C., Ollero, A., and Martinez-de Dios, J.R. 2000. An intelligent system for
false alarm reduction in infrared forest-fire detection. IEEE Intell. Syst. 15(3):
64–73. doi:10.1109/5254.846287.
Beard, R.W., McLain, T.W., Nelson, D.B., Kingston, D., and Johanson, D. 2006.
Decentralized cooperative aerial surveillance using fixed-wing miniature
UAVs. Proc. IEEE, 94(7): 1306–1324. doi:10.1109/JPROC.2006.876930.
Berni, J.A.J., Zarco-Tejada, P.J., Suárez, L., and Fereres, E. 2009. Thermal and nar-
rowband multispectral remote sensing for vegetation monitoring from an
unmanned aerial vehicle. IEEE Trans. Geosci. Remote Sens. 47(3): 722–738.
doi:10.1109/TGRS.2008.2010457.
Bosch, I., Serrano, A., and Vergara, L. 2013. Multisensor network system for
wildfire detection using infrared image processing. Sci. World J., Article
ID 402196. doi:10.1155/2013/402196.
Bradley, J.M., and Taylor, C.N. 2011. Georeferenced mosaics for tracking fires
using unmanned miniature air vehicles. J. Aerosp. Comput. Inf. Commun.
8(10): 295–309. doi:10.2514/1.45342.
Byram, G.M. 1959. Combustion of forest fuels. In Forest fire: control and use.
Edited by K.P. Davis. McGraw-Hill, New York. pp. 61–89.
Casbeer, D.W., Beard, R.W., McLain, T.W., Li, S.M., and Mehra, R.K. 2005. Forest
fire monitoring with multiple small UAVs. In Proceedings of American Con-
trol Conference, Portland, Oregon, 8–10 June 2005. pp. 3530–3535. doi:10.
1109/ACC.2005.1470520.
Casbeer, D.W., Kingston, D.B., Beard, R.W., McLain, T.W., Li, S., and Mehra, R.
2006. Cooperative forest fire surveillance using a team of small unmanned
air vehicles. Int. J. Syst. Sci. 37(6): 351–360. doi:10.1080/00207720500438480.
Çelik, T., and Demirel, H. 2009. Fire detection in video sequences using a generic
color model. Fire Saf. J. 44(2): 147–158. doi:10.1016/j.firesaf.2008.05.005.
Çelik, T., Demirel, H., Ozkaramanli, H., and Uyguroglu, M. 2007a. Fire detection
using statistical color model in video sequences. J. Vis. Commun. Image
Represent. 18(2): 176–185. doi:10.1016/j.jvcir.2006.12.003.
Çelik, T., Ozkaramanli, H., and Demirel, H. 2007b. Fire pixel classification
using fuzzy logic and statistical color model. In Proceedings of the Inter-
national Conference on Acoustic Speech Signal Processing (ICASSP),
Hawaii, 15–20 April 2007. pp. I-1205–I-1208. doi:10.1109/ICASSP.2007.366130.
Çetin, A.E., Dimitropoulos, K., Gouverneur, B., Grammalidis, N., Günay, O.,
Habiboǧlu, Y.H., Töreyin, B.U., and Verstockt, S. 2013. Video fire
detection — review. Digit. Signal Process. 23(6): 1827–1843. doi:10.1016/j.dsp.
2013.07.003.
Chao, H., and Chen, Y.Q. 2012. Cooperative remote sensing using multiple un-
manned vehicles. In Remote sensing and actuation using unmanned vehi-
cles. Edited by H. Chao and Y.Q. Chen. Wiley-IEEE Press. pp. 121–142. doi:10.
1002/9781118377178.ch6.
Charvat, R., Ozburn, R., Bushong, J., and Cohen, K. 2012. SIERRA team flight of
Zephyr UAV at West Virginia wild land fire burn. In AIAA Infotech@Aerospace
Conference, 19–21 June 2012, Garden Grove, California. pp. 19–21. doi:10.2514/
6.2007-2749.
Chen, J., He, Y., and Wang, J. 2010. Multi-feature fusion based fast video flame
detection. Build. Environ. 45(5): 1113–1122. doi:10.1016/j.buildenv.2009.10.017.
Chen, T., Wu, P., and Chiou, Y. 2004. An early fire-detection method based on
image processing. In Proceedings of the International Conference on Image
Processing, 24–27 October 2004. pp. 1707–1710. doi:10.1109/ICIP.2004.1421401.
Chen, T.H., Yin, Y.H., Huang, S.F., and Ye, Y.T. 2006. The smoke detection for
early fire-alarming system based on video processing. In Proceedings of the
International Conference on Intelligent Information Hiding and Multimedia
Signal Processing, Pasadena, California, 18–20 December 2006. pp. 427–430.
doi:10.1109/IIH-MSP.2006.164.
Chisholm, R.A., Cui, J., Lum, S.K., and Chen, B.M. 2013. UAV LiDAR for below-
canopy forest surveys. J. Unmanned Veh. Syst. 1(1): 61–68. doi:10.1139/juvs-
2013-0017.
Cho, B.H., Bae, J.W., and Jung, S.H. 2008. Image processing-based fire detection
system using statistic color model. In Proceedings of the International Con-
ference on Advanced Language Processing and Web Information Technol-
ogy, Dalian, China, 23–25 July 2008. pp. 245–250. doi:10.1109/ALPIT.2008.49.
Chou, T.Y., Yeh, M.L., Chen, Y.C., and Chen, Y.H. 2010. Disaster monitoring and
management by the unmanned aerial vehicle technology. In Proceedings of
ISPRS Technical Commission VII Symposium. ISPRS, Vienna, Austria.
pp. 137–142.
Den Breejen, E., Breuers, M., Cremer, F., Kemp, R., Roos, M., Schutte, K., and De
Vries, J.S. 1998. Autonomous forest fire detection. In Proceedings of the
3rd International Conference on Forest Fire Research, Luso, Portugal,
16–20 November 1998. pp. 2003–2012.
Denham, M., Cortés, A., Margalef, T., and Luque, E. 2008. Applying a dynamic
data driven genetic algorithm to improve forest fire spread prediction. In
Computational Science — ICCS 2008. Edited by M. Bubak, G.D. van Albada,
J. Dongarra, and P.M.A. Sloot. Springer-Verlag, Berlin, Heidelberg. pp. 36–45.
doi:10.1007/978-3-540-69389-5_6.
Duong, H.D., and Tuan, N.A. 2009. Using Bayes method and fuzzy C-mean algo-
rithm for fire detection in video. In Proceedings of the International Confer-
ence on Advanced Technologies for Communications, Hai Phong, Vietnam,
12–14 October 2009. pp. 141–144. doi:10.1109/ATC.2009.5349391.
Esposito, F., Rufino, G., Moccia, A., Donnarumma, P., Esposito, M., and Magliulo, V.
2007. An integrated electro-optical payload system for forest fires monitoring
from airborne platform. In Proceedings of IEEE Aerospace Conference.
pp. 1–13. doi:10.1109/AERO.2007.353054.
Everaerts, J. 2008. The use of unmanned aerial vehicles (UAVs) for remote sens-
ing and mapping. The International Archives of the Photogrammetry, Re-
mote Sensing and Spatial Information Sciences, 37: 1187–1192.
Ferruz, J., and Ollero, A. 2000. Real-time feature matching in image sequences
for nonstructured environments: applications to vehicle guidance. J. Intell.
Robot. Syst. 28(1–2): 85–123. doi:10.1023/A:1008163332131.
Fowler, A., Teredesai, A.M., and De Cock, M. 2009. An evolved fuzzy logic system
for fire size prediction. In Proceedings of the 28th North American Fuzzy
Information Processing Society Annual Conference, Cincinnati, U.S.A., 14–17
June 2009. pp. 1–6. doi:10.1109/NAFIPS.2009.5156419.
Gonzalez, J.R., Palahi, M., Trasobares, A., and Pukkala, T. 2006. A fire probability
model for forest stands in Catalonia (north-east Spain). Ann. For. Sci. 63(2):
169–176. doi:10.1051/forest:2005109.
Günay, O., Taşdemir, K., Uğur Töreyin, B., and Çetin, A.E. 2009. Video based
wildfire detection at night. Fire Saf. J. 44(6): 860–868. doi:10.1016/j.firesaf.
2009.04.003.
Günay, O., Toreyin, B.U., Kose, K., and Cetin, A.E. 2012. Entropy-functional-based
online adaptive decision fusion framework with application to wildfire de-
tection in video. IEEE Trans. Image Process. 21(5): 2853–2865. doi:10.1109/TIP.
2012.2183141.
Habib, A., Pullivelli, A., Mitishita, E., Ghanma, M., and Kim, E.M. 2006. Stability
analysis of low-cost digital cameras for aerial mapping using different geo-
referencing techniques. The Photogrammetric Record, 21(113): 29–43. doi:10.
1111/j.1477-9730.2006.00352.x.
Ham, S.J., Ko, B.C., and Nam, J.Y. 2010. Fire-flame detection based on fuzzy finite
automation. In Proceedings of the 20th International Conference on Pattern
Recognition (ICPR), IEEE, Istanbul, Turkey, 23–26 August 2010.
pp. 3919–3922. doi:10.1109/ICPR.2010.953.
Hirsch, K.G., and Fuglem, P. 2006. Canadian Wildland Fire Strategy: background
syntheses, analyses, and perspectives. Canadian Council for Forest Minis-
tries, Natural Resources Canada, Canadian Forest Service, Northern Forestry
Centre, Edmonton, Alberta.
Ho, C.C., and Kuo, T.H. 2009. Real-time video-based fire smoke detection system.
In Proceedings of the IEEE/ASME International Conference on Advanced In-
telligent Mechatronics, Singapore, 14–17 July 2009. pp. 1845–1850. doi:10.1109/
AIM.2009.5229791.
HomChaudhuri, B., and Kumar, M. 2010. Optimal fireline generation for wildfire
fighting in uncertain and heterogeneous environment. In Proceedings of the
American Control Conference, Baltimore, U.S.A., 30 June – 2 July 2010.
pp. 5638–5643.
Hou, J., Qian, J., Zhao, Z., Pan, P., and Zhang, W. 2009. Fire detection algorithms
in video images for high and large-span space structures. In Proceedings of
the IEEE 2nd International Congress on Image and Signal Processing, Tianjin,
China, 17–19 October 2009. pp. 1–5. doi:10.1109/CISP.2009.5304997.
Hou, J., Qian, J., Zhang, W., Zhao, Z., and Pan, P. 2010. Fire detection algorithms
for video images of large space structures. Multimedia Tools and Applica-
tions, 52(1): 45–63. doi:10.1007/s11042-009-0451-0.
Hunt, E.R., Jr., Hively, W.D., Daughtry, C., McCarty, G.W., Fujikawa, S.J., Ng, T.L.,
Tranchitella, M., Linden, D.S., and Yoel, D.W. 2008. Remote sensing of crop
leaf area index using unmanned airborne vehicles. In Proceedings of the
Pecora 17 Symposium, Denver, Colorado, U.S.A., March 2008.
Huseynov, J.J., Baliga, S., Widmer, A., and Boger, Z. 2007. Infrared flame detec-
tion system using multiple neural networks. In Proceedings of the Inter-
national Joint Conference on Neural Networks, Orlando, Florida, U.S.A.,
12–17 August 2007. pp. 608–612. doi:10.1109/IJCNN.2007.4371026.
Jaber, A., Guarnieri, F., and Wybo, J.L. 2001. Intelligent software agents for forest
fire prevention and fighting. Safety Sci. 39(1): 3–17. doi:10.1016/S0925-7535
(01)00021-2.
Jin, H., and Zhang, R.B. 2009. A fire and flame detecting method based on video.
In Proceedings of the International Conference on Machine Learning and
Cybernetics, Baoding, China, 12–15 July 2009. pp. 2347–2352. doi:10.1109/
ICMLC.2009.5212165.
Jones, G.P., IV. 2003. The feasibility of using small unmanned aerial vehicles for
wildlife research. Doctoral dissertation, University of Florida.
Jones, G.P., IV, Pearlstine, L.G., and Percival, H.F. 2006. An assessment of small
unmanned aerial vehicles for wildlife research. Wildl. Soc. Bull. 34(3): 750–
758. doi:10.2193/0091-7648(2006)34[750:AAOSUA]2.0.CO;2.
Karafyllidis, I. 2004. Design of a dedicated parallel processor for the prediction
of forest fire spreading using cellular automata and genetic algorithms. Eng.
Appl. Artif. Intell. 17(1): 19–36. doi:10.1016/j.engappai.2003.12.001.
Kolarić, D., Skala, K., and Dubravić, A. 2008. Integrated system for forest fire
early detection and management. Period. Biol. 110(2): 205–211.
Kontitsis, M., Valavanis, K.P., and Tsourveloudis, N. 2004. A UAV vision system
for airborne surveillance. In Proceedings IEEE International Conference on
Robotics and Automation, 2004. Vol. 1. pp. 77–83. doi:10.1109/ROBOT.2004.
1307132.
Kumar, M., Cohen, K., and HomChaudhuri, B. 2011. Cooperative control of mul-
tiple uninhabited aerial vehicles for monitoring and fighting wildfires.
J. Aerosp. Comput. Inf. Commun. 8(1): 1–16. doi:10.2514/1.48403.
Leckie, D.G. 1990. Advances in remote sensing technologies for forest surveys
and management. Can. J. For. Res. 20(4): 464–483. doi:10.1139/x90-063.
Li, C., Zhang, G., Lei, T., and Gong, A. 2011. Quick image-processing method of
UAV without control points data in earthquake disaster area. Trans. Nonfer-
rous Met. Soc. China, 21: s523–s528. doi:10.1016/S1003-6326(12)61635-5.
Li, M., Xu, W., Xu, K., Fan, J., and Hou, D. 2013. Review of fire detection technol-
ogies based on video images. J. Theor. Appl. Inf. Technol. 49(2): 700–707.
Li, Y., Vodacek, A., and Zhu, Y. 2007. An automatic statistical segmentation
algorithm for extraction of fire and smoke regions. Remote Sens. Environ.
108(2): 171–178. doi:10.1016/j.rse.2006.10.023.
Lin, H., Liu, Z., Zhao, T., and Zhang, Y. 2014. Early warning system of forest fire
detection based on video technology. In Proceedings of the 9th International
Symposium on Linear Drives for Industry Applications. Edited by X.Z. Liu
and Y.Y. Ye. Springer, Heidelberg, Berlin. pp. 751–758. doi:10.1007/978-3-642-
40633-1_93.
Mahdipour, E., and Dadkhah, C. 2012. Automatic fire detection based on soft
computing techniques: review from 2000 to 2010. Artif. Intell. Rev. pp. 1–40.
doi:10.1007/s10462-012-9345-z.
Mandallaz, D., and Ye, R. 1997. Prediction of forest fires with Poisson models.
Can. J. For. Res. 27(10): 1685–1694. doi:10.1139/x97-103.
Martínez-de Dios, J.R., and Ollero, A. 2004. A new training based approach for
robust thresholding. In Proceedings of the World Automation Congress, Sev-
ille, Spain, 28 June – 1 July 2004. pp. 121–126.
Martínez-de Dios, J.R., Ramiro, J., Merino, L., and Ollero, A. 2005. Fire detection
using autonomous aerial vehicles with infrared and visual cameras. In Pro-
ceedings of the 16th IFAC World Congress. Czech Republic.
Martínez-de Dios, J.R., André, J., Gonçalves, J.C., Arrue, B.C., Ollero, A., and
Viegas, D.X. 2006. Laboratory fire spread analysis using visual and infrared
images. Int. J. Wildland Fire, 15(2): 179–186. doi:10.1071/WF05004.
Martínez-de Dios, J.R., Merino, L., Ollero, A., Ribeiro, L.M., and Viegas, X. 2007.
Multi-UAV experiments: application to forest fires. In Multiple heteroge-
neous unmanned aerial vehicles. Edited by A. Ollero and I. Maza. Springer,
Heidelberg, Berlin. pp. 207–228. doi:10.1007/978-3-540-73958-6_8.
Martínez-de Dios, J.R., Arrue, B.C., Ollero, A., Merino, L., and Gómez-Rodríguez, F. 2008.
Computer vision techniques for forest fire perception. Image Vision Comput.
26(4): 550–562. doi:10.1016/j.imavis.2007.07.002.
Martínez-de Dios, J.R., Merino, L., Caballero, F., and Ollero, A. 2011. Automatic
forest-fire measuring using ground stations and unmanned aerial systems.
Sensors, 11(6): 6328–6353. doi:10.3390/s110606328.
Maza, I., Caballero, F., Capitán, J., Martínez-de-Dios, J.R., and Ollero, A. 2010.
Experimental results in multi-UAV coordination for disaster management
and civil security applications. J. Intell. Robot. Syst. 61(1–4): 563–585. doi:10.
1007/s10846-010-9497-5.
Maza, I., Caballero, F., Capitan, J., Martinez-de-Dios, J.R., and Ollero, A. 2011. A
distributed architecture for a robotic platform with aerial sensor transpor-
tation and self-deployment capabilities. J. Field Robot. 28(3): 303–328. doi:10.
1002/rob.20383.
McAlpine, R.S., and Wotton, B.M. 1993. The use of fractal dimension to improve
wildland fire perimeter predictions. Can. J. For. Res. 23(6): 1073–1077. doi:10.
1139/x93-137.
Merino, L., Caballero, F., Martinez-de Dios, J.R., and Ollero, A. 2005. Cooperative
fire detection using unmanned aerial vehicles. In Proceedings of the 2005
IEEE International Conference on Robotics and Automation (ICRA), Busan,
South Korea, 18–22 April 2005. IEEE. pp. 1884–1889. doi:10.1109/ROBOT.2005.
1570388.
Merino, L., Caballero, F., Martínez-de Dios, J.R., Ferruz, J., and Ollero, A. 2006. A
cooperative perception system for multiple UAVs: application to automatic
detection of forest fires. J. Field Robot. 23(3–4): 165–184. doi:10.1002/rob.
20108.
Merino, L., Caballero, F., Martinez-de-Dios, J.R., Maza, I., and Ollero, A. 2010.
Automatic forest fire monitoring and measurement using unmanned aerial
vehicles. In Proceedings of the 6th International Congress on Forest Fire
Research. Edited by D.X. Viegas. Coimbra, Portugal, 2010.
Merino, L., Caballero, F., Martínez-de-Dios, J.R., Maza, I., and Ollero, A. 2012. An
unmanned aircraft system for automatic forest fire monitoring and measure-
ment. J. Intell. Robot. Syst. 65(1–4): 533–548. doi:10.1007/s10846-011-9560-x.
Merino, L., Martínez-de Dios, J.R. and Ollero, A. 2015. Cooperative unmanned
aerial systems for fire detection, monitoring, and extinguishing. In Hand-
book of unmanned aerial vehicles. Edited by K.P. Valavanis and G.J. Vachtseva-
nos. Springer, Netherlands. pp. 2693–2722. doi:10.1007/978-90-481-9707-1_74.
Mostafa, M.M., and Schwarz, K.P. 2001. Digital image georeferencing from a
multiple camera system by GPS/INS. ISPRS J. Photogram. Remote Sens. 56(1):
1–12. doi:10.1016/S0924-2716(01)00030-2.
Olivas, J.A. 2003. Forest fire prediction and management using soft computing.
In Proceedings of the International Conference on Industrial informatics,
21–24 August 2003. IEEE. pp. 338–344. doi:10.1109/INDIN.2003.1300349.
Ollero, A., and Merino, L. 2006. Unmanned aerial vehicles as tools for forest-fire
fighting. For. Ecol. Manage. 234(1): 263–274. doi:10.1016/j.foreco.2006.08.292.
Ollero, A., Arrue, B.C., Martínez-de Dios, J.R., and Murillo, J.J. 1999. Techniques
for reducing false alarms in infrared forest-fire automatic detection systems.
Control Eng. Practice, 7(1): 123–131. doi:10.1016/S0967-0661(98)00141-5.
Ollero, A., Ferruz, J., Caballero, F., Hurtado, S., and Merino, L. 2004. Motion
compensation and object detection for autonomous helicopter visual navi-
gation in the COMETS system. In Proceedings of the IEEE International Con-
ference on Robotics and Automation, 26 April – 1 May 2004. IEEE. pp. 19–24.
doi:10.1109/ROBOT.2004.1307123.
Ollero, A., Lacroix, S., Merino, L., Gancet, J., Wiklund, J., Remuss, V.,
Gutierrez, L.G., Viegas, D.X., Gonzalez, M.A., Mallet, A., Alami, R., Chatila, R.,
Hommel, G., Colmenero, F.J., Veiga, I., Arrue, B., Ferruz, J.,
Martínez-de Dios, J.R., and Caballero, F. 2005. Multiple eyes in the skies:
architecture and perception issues in the COMETS unmanned air vehicles
project. IEEE Robot. Automat. 12(2): 46–57. doi:10.1109/MRA.2005.1458323.
Olsson, H., Egberth, M., Engberg, J., Fransson, J.E.S., Pahlén, T.G., Hagner, O.,
Holmgren, J., Joyce, S., Magnusson, M., Nilsson, B., Nilsson, M., Olofsson, K.,
Reese, H., and Wallerman, J. 2005. Current and emerging operational uses of
remote sensing in Swedish forestry. In Proceedings of the 5th Annual Forest
Inventory and Analysis Symposium. USDA Forest Service, Portland, Oregon.
pp. 39–46.
Ononye, A.E., Vodacek, A., and Saber, E. 2007. Automated extraction of fire line
parameters from multispectral infrared images. Remote Sens. Environ.
108(2): 179–188. doi:10.1016/j.rse.2006.09.029.
Pastor, E., Águeda, A., Andrade-Cetto, J., Muñoz, M., Pérez, Y., and Planas, E.
2006. Computing the rate of spread of linear flame fronts by thermal image
processing. Fire Saf. J. 41(8): 569–579. doi:10.1016/j.firesaf.2006.05.009.
Pastor, E., Barrado, C., Royo, P., Santamaria, E., Lopez, J., and Salami, E. 2011.
Architecture for a helicopter-based unmanned aerial systems wildfire sur-
veillance system. Geocarto Int. 26(2): 113–131. doi:10.1080/10106049.2010.
531769.
Phan, C., and Liu, H.H. 2008. A cooperative UAV/UGV platform for wildfire de-
tection and fighting. In Asia Simulation Conference — 7th International
Conference on System Simulation and Scientific Computing, October 2008.
pp. 494–498. doi:10.1109/ASC-ICSC.2008.4675411.
Philips, W., Shah, M., and da Vitoria Lobo, N. 2002. Flame recognition in video.
Pattern Recogn. Lett. 23(1–3): 319–327. doi:10.1016/S0167-8655(01)00135-0.
Qi, X., and Ebert, J. 2009. A computer vision based method for fire detection in
color videos. Int. J. Imaging, 2(S09): 22–34.
Qian, Y., Chen, S., Lu, P., Cui, T., Ma, M., Liu, Y., Zhou, C., and Zhao, L. 2012.
Application of low-altitude remote sensing image by unmanned airship in
geological hazards investigation. In Proceedings of 5th International Con-
gress on Image and Signal Processing, Chongqing, 16–18 October 2012.
pp. 1015–1018. doi:10.1109/CISP.2012.6469641.
Restas, A. 2006. Wildfire management supported by UAV based air
reconnaissance: experiments and results at the Szendro fire department,
Hungary. In 1st International Workshop on Fire Management.
Rossi, L., Molinier, T., Akhloufi, M., Tison, Y., and Pieri, A. 2010. A 3D vision
system for the measurement of the rate of spread and the height of fire
fronts. Meas. Sci. Technol. 21(10): 1–12. doi:10.1088/0957-0233/21/10/105501.
Rudz, S., Chetehouna, K., Hafiane, A., Sero-Guillaume, O., and Laurent, H. 2009.
On the evaluation of segmentation methods for wildland fire. In Advanced
concepts for intelligent vision systems. Edited by J. Blanc-Talon, W. Philips, D.
Popescu, and P. Scheunders. Springer, Heidelberg, Berlin. pp. 12–23. doi:10.
1007/978-3-642-04697-1_2.
Shahbazi, M., Théau, J., and Ménard, P. 2014. Recent applications of unmanned
aerial imagery in natural resource management. GIScience & Remote Sens-
ing, 51(4): 339–365. doi:10.1080/15481603.2014.926650.
Sharifi, F., Zhang, Y.M., and Aghdam, A.G. 2014. Forest fire detection and moni-
toring using a network of autonomous vehicles. In The 10th International
Conference on Intelligent Unmanned Systems (ICIUS 2014), 29 September –
1 October 2014, Montreal, Quebec, Canada.
Sharifi, F., Chamseddine, A., Mahboubi, H., Zhang, Y.M., and Aghdam, A.G.
2015a. A distributed deployment strategy for a network of cooperative auton-
omous vehicles. IEEE Trans. Contr. Syst. Technol. 23(2): 737–745. doi:10.1109/
TCST.2014.2341658.
Sharifi, F., Mirzaei, M., Zhang, Y.M., and Gordon, B.W. 2015b. Cooperative multi-
vehicle search and coverage problem in an uncertain environment. Un-
manned Systems, 3(1): 35–47. doi:10.1142/S230138501550003X.
Sujit, P.B., Kingston, D., and Beard, R. 2007. Cooperative forest fire monitoring
using multiple UAVs. In 46th IEEE Conference on Decision and Control,
December 2007. pp. 4875–4880. doi:10.1109/CDC.2007.4434345.
Surit, S., and Chatwiriya, W. 2011. Forest fire smoke detection in video based on
digital image processing approach with static and dynamic characteristic
analysis. In Proceedings of the 1st ACIS/JNU International Conference on
Computers, Networks, Systems, and Industrial Engineering, Jeju Island,
South Korea, 23–25 May 2011. IEEE. pp. 35–39. doi:10.1109/CNSI.2011.47.
Töreyin, B.U., Dedeoğlu, Y., and Çetin, A.E. 2005. Flame detection in video using
hidden Markov models. In Proceedings of the IEEE International Conference
on Image Processing, 11–14 September 2005. IEEE. pp. 1230–1233. doi:10.1109/
ICIP.2005.1530284.
Töreyin, B.U., Dedeoğlu, Y., Güdükbay, U., and Çetin, A.E. 2006a. Computer
vision based method for real-time fire and flame detection. Pattern Recogn.
Lett. 27(1): 49–58. doi:10.1016/j.patrec.2005.06.015.
Töreyin, B.U., Dedeoğlu, Y., and Çetin, A.E. 2006b. Contour based smoke detec-
tion in video using wavelets. In Proceedings of the 14th European Signal
Processing Conference, Florence, Italy, 4–8 September 2006. pp. 123–128.
Tranchitella, M., Fujikawa, S., Ng, T.L., Yoel, D., Tatum, D., Roy, P., Mazel, C.,
Herwitz, S., and Hinkley, E. 2007. Using tactical unmanned aerial systems to
monitor and map wildfires. In Proceedings of the AIAA Infotech@Aerospace
2007 Conference, California, U.S.A. doi:10.2514/6.2007-2749.
UAS VISION. 2014. K-Max takes on firefighting mission [online]. Available from
http://www.uasvision.com/2014/11/20/k-max-takes-on-firefighting-mission/
?utm_source=Newsletter&utm_medium=email&utm_campaign=d718fc2559-
RSS_EMAIL_CAMPAIGN&utm_term=0_799756aeb7-d718fc2559-297548553
[accessed 28 February 2015].
Van Persie, M., Oostdijk, A., Fix, A., van Sijl, M.C., and Edgardh, L. 2011. Real-time
UAV based geospatial video integrated into the fire brigades crisis manage-
ment GIS system. In International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, 14–16 September 2011, Zurich,
Switzerland. Vol. 38. pp. 173–175.
Vipin, V. 2012. Image processing based forest fire detection. International
Journal of Emerging Technology and Advanced Engineering, 2(2): 87–95.
Wendt, K. 2008. Efficient knowledge retrieval to calibrate input variables in
forest fire prediction. M.Sc. thesis, Computational Science and Engineering,
Universidad Autónoma de Barcelona, Barcelona, Spain.
Wilson, C., and Davis, J.B. 1988. Forest fire laboratory at Riverside and fire re-
search in California: past, present, and future. USDA Forest Service, Pacific
Southwest Forest and Range Experiment Station, Berkeley, California, Gen-
eral Technical Report PSW-105.
Wirth, M., and Zaremba, R. 2010. Flame region detection based on histogram
back projection. In Proceedings of the Canadian Conference Computer and
Robot Vision, Ottawa, Canada, 31 May – 2 June 2010. pp. 167–174. doi:10.1109/
CRV.2010.29.
Wu, J., and Zhou, G. 2006. Real-time UAV video processing for quick-response to
natural disaster. In Proceedings of IEEE International Geoscience and Remote
Sensing Symposium, Denver, Colorado, U.S.A., 31 July – 4 August 2006.
pp. 976–979. doi:10.1109/IGARSS.2006.251.
Yu, C.Y., Zhang, Y.M., Fang, J., and Wang, J.J. 2009. Texture analysis of smoke for
real-time fire detection. In The 2nd International Workshop on Computer
Science and Engineering, Qingdao, China, 28–30 October 2009. IEEE.
pp. 511–515. doi:10.1109/WCSE.2009.864.
Yu, C.Y., Fang, J., Wang, J.J., and Zhang, Y.M. 2010. Video fire smoke detection
using motion and color features. Fire Technol. 46(3): 651–663. doi:10.1007/
s10694-009-0110-z.
Yuan, F. 2008. A fast accumulative motion orientation model based on integral
image for video smoke detection. Pattern Recogn. Lett. 29(7): 925–932. doi:
10.1016/j.patrec.2008.01.013.
Zhang, D., Hu, A., Rao, Y., Zhao, J., and Zhao, J. 2007. Forest fire and smoke
detection based on video image segmentation. In Proceedings of the Interna-
tional Symposium on Multispectral Image Processing and Pattern Recogni-
tion, Wuhan, China, 15 November 2007. Edited by S.J. Maybank, M. Ding,
F. Wahl, and Y. Zhu. doi:10.1117/12.751611.
Zhang, Y.M., and Jiang, J. 2008. Bibliographical review on reconfigurable fault-
tolerant control systems. Annu. Rev. Control, 32(2): 229–252. doi:10.1016/j.
arcontrol.2008.03.008.
Zhao, J., Zhang, Z., Han, S., Qu, C., Yuan, Z., and Zhang, D. 2011. SVM based forest
fire detection using static and dynamic features. Comput. Sci. Inf. Syst. 8(3):
821–841. doi:10.2298/CSIS101012030Z.
Zhou, G. 2009. Near real-time orthorectification and mosaic of small UAV video
flow for time-critical event response. IEEE Trans. Geosci. Remote Sens. 47(3):
739–747. doi:10.1109/TGRS.2008.2006505.
Zhou, G., Li, C., and Cheng, P. 2005. Unmanned aerial vehicle (UAV) real-time
video registration for forest fire monitoring. In Proceedings of Geoscience
and Remote Sensing Symposium, 25–29 July 2005. IEEE. pp. 1803–1806. doi:
10.1109/IGARSS.2005.1526355.
Can. J. For. Res. Vol. 45, 2015. Published by NRC Research Press.