This document describes a hybrid GWR-based method for building detection using sparse LiDAR data and aerial photos. It begins by classifying the LiDAR data and performing unsupervised classification of the aerial photo, which cannot fully separate buildings from other surfaces. It then uses geographically weighted regression (GWR) to estimate height values for all pixels by integrating characteristics from the LiDAR and aerial photo data: GWR models relate height values from LiDAR to the spectral values of surrounding pixels, weighted by distance. The predicted height raster is then used to extract clusters of building pixels above a height threshold. The method was tested on Florida data and found to be more accurate than classification from either data source alone.
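Although the paper's exact model specification is not given here, the core GWR idea, a local weighted regression at each prediction pixel with weights decaying by distance to the LiDAR samples, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian kernel, the fixed bandwidth, and the single spectral predictor are all assumptions, and the function name is invented.

```python
import numpy as np

def gwr_predict(sample_xy, sample_spec, sample_h, query_xy, query_spec, bandwidth=50.0):
    """Predict height at query pixels via geographically weighted regression.

    For each query pixel, fit a local linear model height ~ spectral value,
    weighting the LiDAR samples by a Gaussian kernel of their distance to
    the query location, then evaluate that model at the query pixel.
    """
    # Design matrix: intercept + one spectral predictor (a simplification).
    X = np.column_stack([np.ones_like(sample_spec), sample_spec])
    preds = np.empty(len(query_xy))
    for i in range(len(query_xy)):
        d = np.linalg.norm(sample_xy - query_xy[i], axis=1)
        w = np.exp(-(d / bandwidth) ** 2)        # Gaussian distance decay
        Xw = X * w[:, None]                      # apply weights to rows
        # Solve the weighted normal equations  (X^T W X) beta = X^T W h.
        beta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ sample_h, rcond=None)
        preds[i] = beta[0] + beta[1] * query_spec[i]
    return preds
```

For real data the bandwidth would be calibrated (e.g. by cross-validation) and more spectral bands would typically enter the local design matrix.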
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...CSCJournals
Extraction of geospatial data from photogrammetric sensing images becomes more and more important with advances in technology. Today, Geographic Information Systems are used in a wide variety of applications in engineering, city planning and the social sciences. Geospatial data such as roads, buildings and rivers are the most critical feeds of a GIS database. However, extracting buildings is one of the most complex and challenging tasks, as there is considerable inhomogeneity arising from varying building hierarchies. Building types and rooftop shapes vary widely, and in some areas buildings are placed irregularly or too close to each other. For these reasons, even with high resolution IKONOS and QuickBird satellite imagery, the quality of building extraction remains low. This paper proposes a solution to the problem of automatic, unsupervised extraction of building features in multispectral satellite images, irrespective of rooftop structure. Instead of detecting the region of interest, the algorithm eliminates areas other than the region of interest, which extracts the rooftops completely irrespective of their shapes. Extensive tests indicate that the methodology performs well at extracting buildings in complex environments.
3D BUILDING Modeling with MULTI-SOURCE DATA: A STUDY OF HIGH-DENSITY URBAN ...DeWolf Xue
Chen, K., Xue, F., and Lu, W. (2017). “Development of 3D building models using multi-source data: A study of high-density urban area in Hong Kong.” In: LC3 2017: Volume I – Proceedings of the Joint Conference on Computing in Construction (JC3), July 4-7, 2017, Heraklion, Greece, pp. 611-618. DOI: https://doi.org/10.24928/JC3-2017/0252.
Generating three dimensional photo-realistic model of archaeological monument...eSAT Journals
Abstract
Stimulated by the growing demand for three-dimensional (3D) photo-realistic models of archaeological monuments, people are looking for efficient and more precise methods to generate them. However, no single method is yet available to give adequate results in all situations, particularly with respect to high geometric accuracy, automation, photorealism, flexibility and efficiency. This paper describes a method for generating a 3D photo-realistic model of the Bukit Batu Pahat shrine by means of multi-sensor data integration. A Global Positioning System (GPS), a total station and a terrestrial laser scanner (Leica ScanStation C10) were used to record spatial data of the shrine. However, due to the low quality of the coloured point cloud captured by the scanner, a digital camera (Nikon D300s) was used to capture photos of the shrine for surface texturing. A photo-realistic 3D model of the shrine with high geometric accuracy was generated through spatial and image data integration, and feature mapping and geometric mapping accuracy assessments were conducted to analyse the quality of the resulting model. Based on the results obtained, the integration of ScanStation C10 and image data is capable of providing a 3D photo-realistic model of the Bukit Batu Pahat shrine: feature mapping accuracy showed that the model was 90.56 percent similar to the real object, and the geometric accuracy of the model generated from ScanStation C10 data was a very convincing ±4 millimetres. In summary, the goal of this research was successfully achieved: a 3D photo-realistic model of the Bukit Batu Pahat shrine with good geometry was generated through multi-sensor data integration.
Keywords: archaeological monument, terrestrial laser scanner, digital photo, 3D photo-realistic model
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Digital Heritage Documentation Via TLS And Photogrammetry Case Studytheijes
In the last decade, several traditional manual measurement techniques have been used to document heritage buildings around the world; however, some of these techniques take a long time, often lack completeness, and may sometimes give unreliable information. In contrast, terrestrial laser scanning (TLS) surveys and photogrammetry have already been undertaken at several heritage sites in the United Kingdom and other European countries as a new method of documenting heritage sites. This paper focuses on using TLS and photogrammetry to document one of the important houses in Historic Jeddah, Saudi Arabia, the Nasif Historical House, as an example of Digital Heritage Documentation (DHD).
16 9252 eeem gis based satellite image denoising (edit ari)IAESIJEECS
Generally, satellite images contain very significant information about geographical features of the earth such as rivers, roads, buildings and bridges. Geographic Information Systems (GIS) require these features for automatic detection, but the images are often corrupted by various types of noise. The proposed system uses the Curvelet Transform (CT) for denoising; it provides the advantages of multi-resolution analysis, such as line representation, compatibility with the human visual system and edge detection. After preprocessing, K-Means clustering is used for segmentation: first the K-Means algorithm segments background and water, then bridges are extracted based on pixel intensity differences.
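As a rough illustration of the segmentation step, K-Means on pixel intensities can be sketched as below. This is a generic 1D Lloyd's-algorithm sketch, not the paper's implementation; the even-spread initialisation and function name are assumptions.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar pixel intensities into k groups (Lloyd's algorithm).

    Centres are initialised evenly across the intensity range, then
    alternately (1) each value is assigned to its nearest centre and
    (2) each centre is moved to the mean of its assigned values.
    """
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):              # avoid emptying a cluster
                centers[j] = values[labels == j].mean()
    return labels, centers
```

Applied to a denoised image flattened to a 1D intensity array with k=2, the two clusters would separate darker pixels (e.g. water) from brighter ones, after which bridge pixels can be sought via intensity differences as the abstract describes.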
Classification of Satellite broadcasting Image and Validation Exhausting Geom...IJSRD
Classification of Land Use/Land Cover (LULC) data from satellite images is extremely valuable for designing thematic maps for the analysis of natural resources such as forest, agriculture, water bodies and urban areas. Satellite image classification involves grouping pixel values into meaningful categories and estimating areas by counting the pixels in each category. Manual classification by visual interpretation is accurate but time consuming and requires field experts. To overcome these difficulties, the present research work investigated efficient and effective automation of satellite image classification. Automated classification approaches are broadly classified into i) supervised classification, ii) unsupervised classification and iii) object-based classification. This paper presents the classification capabilities of K-Means, Parallelepiped and Maximum Likelihood classifiers on multispectral spatial data (LISS-4). Using statistical inference, classified results are validated against reference data collected from field experts. Among the three, the Maximum Likelihood classifier (MLC) achieved the highest overall accuracy and Kappa factor.
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment…. And many more.
Automatic Road Extraction from Airborne LiDAR : A ReviewIJERA Editor
LiDAR is a powerful remote sensing technology for the acquisition of 3D information from the terrain surface. This paper surveys the state of the art in automated road feature extraction from airborne Light Detection and Ranging (LiDAR) data, presenting a bibliography of nearly 50 references on the topic. This includes work on the main approaches used for extracting roads from LiDAR data, and on feature extraction based on classification and filtering.
The urban impervious surface has been identified as an important measurable indicator of urbanization and its environmental implications. This study presents a deep learning strategy based on three-dimensional convolutional neural networks (3D CNNs) for extracting urban impervious surfaces from LiDAR datasets, and tests various 3D CNN parameters to see how they affect the extraction. For urban impervious surface delineation, the study investigates the synergistic integration of multiple remote sensing datasets of Azad Kashmir, State of Pakistan, to alleviate the restrictions imposed by single-sensor data. Overall accuracy was greater than 95% and the overall kappa value was greater than 90% with the suggested 3D CNN approach, which shows tremendous promise for impervious surface extraction. Because it uses multiscale convolutional processes to combine spatial and spectral information with texture and feature maps, the proposed 3D CNN approach characterizes urban impervious surfaces better than the commonly used pixel-based support vector machine classifier. In the fast-growing big data era, image analysis presents significant obstacles, yet the proposed 3D CNNs can effectively extract more urban impervious surfaces.
Laser Scanninganjaniar7gallery
Laser Scanning
Laser scanning is an emerging data acquisition technology that has remarkably broadened its application field and has become a serious competitor to other surveying techniques. Due to rapid technological development, the increased accuracy of global positioning systems and the growing demand for ever more accurate digital surface models, airborne laser scanning developed significantly in the 1990s.
Somewhat later, terrestrial laser scanning became a reasonable alternative in many kinds of applications that were previously handled by ground-based surveying or close-range photogrammetry.
1. Airborne laser scanning
Airborne laser scanning is an active remote sensing technology that can rapidly collect data over huge areas. The resulting dataset can serve as the basis of digital surface and elevation models. Airborne laser scanning is often coupled with airborne imagery, so the point clouds and images can be fused into an enhanced-quality 3D product.
The basic principle is as follows: the sensor emits a laser pulse toward the terrain in a predefined direction and receives the reflected laser beam. Knowing the speed of light, the distance to the object can be calculated, see Figure 1.
Figure 1.: Time of flight laser range measurement [2]
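The time-of-flight relation described above is simply distance = c · t / 2, since the measured interval covers the pulse's trip out and back. A minimal sketch (the function name is invented for illustration):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_time_s):
    """Range from a pulse's round-trip time.

    The pulse travels to the target and back, so the one-way
    distance is half of speed * time.
    """
    return C * round_trip_time_s / 2.0
```

For example, a round-trip time of 2 microseconds corresponds to a range of about 300 m; nanosecond-level timing is what makes centimetre-level ranging possible.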
Airborne LiDAR systems are composed of the following subsystems:
The components are shown in Figure 2
Figure 2.: Principle of airborne LiDAR [2]
2. Sensors, equipment
Sensors can be distinguished based on the scanning method, i.e. how the laser beam is directed across the surface. The four most widely used sensor types are shown in Figure 3.
Figure 3.: Scanning mechanisms [1]
As is clearly seen in Figure 3, the different types of sensors apply different kinds of mechanisms; each has its advantages and shortcomings (e.g. number of moving parts, type of rotation, etc.) that lead to different kinds of error sources.
The capabilities (repetition rate, scan frequency, scan angle, point density) of the above scanners are very similar; the main difference lies in the scanning pattern, as seen in Figure 4. The most widely used oscillating-mirror scanners produce a zigzag pattern. Spacing along the scan line depends on the pulse rate and scanning frequency, while spacing along the flight direction depends on the flying speed. To avoid too wide a spacing of points along the flight direction, LiDAR flights are usually slower (e.g. 60-80 m/s) than photogrammetric flights (even 120-160 m/s). Careful planning of the measurement results in rather homogeneous density; however, for technical and microelectronic reasons (regarding the operating mechanism of the mirror, especially in the case of oscillating mirrors), higher point density can be observed at the edges of the scan swath. Previously, criticism was directed at fixed-optic scanners because their parallel scan lines along the flight direction can miss sizeable objects, but vendors successfully responded and modified the mechanism.
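The spacing relationships described above can be made concrete with a simplified flat-terrain model of a zigzag scanner: across-track spacing follows from how many pulses fall on each scan line over the swath width, and along-track spacing from the distance flown per scan line. The function below is a back-of-the-envelope sketch under those assumptions, not a vendor flight-planning tool.

```python
import math

def point_spacing(pulse_rate_hz, scan_freq_hz, speed_ms, altitude_m, half_angle_deg):
    """Rough point spacing for an oscillating-mirror (zigzag) scanner.

    Simplified flat-terrain model:
      - swath width from flying height and half scan angle,
      - across-track spacing = swath width / pulses per scan line,
      - along-track spacing = distance flown during one scan line.
    """
    swath = 2.0 * altitude_m * math.tan(math.radians(half_angle_deg))
    pulses_per_line = pulse_rate_hz / scan_freq_hz
    across = swath / pulses_per_line
    along = speed_ms / scan_freq_hz
    return across, along
```

At a 100 kHz pulse rate, 50 Hz scan frequency and 70 m/s flying speed from 1000 m with a ±20° scan angle, this gives roughly 0.36 m across-track and 1.4 m along-track spacing; it also shows directly why the slower flights noted above (60-80 m/s) tighten the along-track spacing.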
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
LiDAR, short for light detection and ranging, is a remote sensing technology that uses light in the form of a pulsed laser to measure ranges (distances) to a target. A LiDAR sensor fires off beams of laser light and then measures how long it takes for the light to return to the sensor.
The costs of building and using LiDAR systems are coming down as it becomes easier and cheaper to collect more and more data, and LiDAR is being used in more applications.
Seminar of U.V. Spectroscopy by SAMIR PANDASAMIR PANDA
Spectroscopy is a branch of science dealing the study of interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and people want natural and preventative health solutions more and more, this industry is increasing quickly. Further driving market expansion are product formulation innovations and the use of cutting-edge technology for customized nutrition. With its worldwide reach, the nutraceutical industry is expected to keep growing and provide significant chances for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io's surface using adaptive optics at visible wavelengths.
Richard's adventures in two entangled wonderlandsRichard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing by which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993 Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that are causing the silencing by RNA-RNA interactions.
Types of RNAi (non-coding RNA)
miRNA
Length: 23-25 nt
Trans-acting
Binds the target mRNA with mismatches
Causes translation inhibition
siRNA
Length: 21 nt
Cis-acting
Binds the target mRNA at a perfectly complementary sequence
piRNA (Piwi-interacting RNA)
Length: 25 to 36 nt
Expressed in germ cells
Regulates transposon activity
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
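The guide-target base-pairing step above can be sketched in a few lines of Python. This is a toy illustration only, not from the slides: the sequences and the helper names (`reverse_complement`, `find_target_site`) are invented, and real target recognition involves thermodynamics and seed-region rules rather than exact string matching (exact complementarity is the siRNA case).

```python
# Toy sketch of siRNA target-site location by perfect reverse-complement
# matching, mimicking how a RISC-loaded guide strand pairs with its mRNA.
# Sequences and function names are hypothetical, for illustration only.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def find_target_site(guide: str, mrna: str) -> int:
    """Return the start index of the mRNA site the guide would base-pair
    with (perfect complementarity, as for siRNA), or -1 if none exists."""
    site = reverse_complement(guide)
    return mrna.find(site)

# Invented 30-nt mRNA; the 21-nt guide is built to be perfectly
# complementary to positions 5-25.
mrna = "AUGGCUUACGGAUCCGUAAGCUAGCGAUAA"
guide = reverse_complement(mrna[5:26])
print(find_target_site(guide, mrna))  # -> 5
```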
THE RISC COMPLEX:
RISC is a large (>500 kDa) RNA-multiprotein complex that triggers mRNA degradation in response to dsRNA.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: endonuclease (RNase Family III)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN:
1. PAZ (Piwi/Argonaute/Zwille): recognizes the target mRNA.
2. PIWI (P-element induced wimpy testis): breaks the phosphodiester bond of the mRNA (RNase H activity).
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they play a key role in regulating gene expression.
A HYBRID GWR-BASED HEIGHT ESTIMATION METHOD FOR BUILDING DETECTION IN URBAN ENVIRONMENTS
Xuebin Wei*, Xiaobai Yao
Department of Geography, University of Georgia, 210 Field Street, Athens, Georgia, USA 30602 – (xbwei, xyao)@uga.edu
Technical Commission II
KEY WORDS: Building Detection, GWR, Height Prediction, Aerial Photo, Sparse LiDAR Point, Urban Area
ABSTRACT:
LiDAR has become an important data source in urban modelling. Traditional methods of LiDAR data processing for building detection require high-spatial-resolution data and sophisticated methods. Aerial photos, on the other hand, provide continuous spectral information about buildings, but segmentation of aerial photos cannot distinguish between road surfaces and building roofs.
This paper develops a geographically weighted regression (GWR)-based method to identify buildings. The method integrates
characteristics derived from the sparse LiDAR data and from aerial photos. In the GWR model, LiDAR data provide the height
information of spatial objects which is the dependent variable, while the brightness values from multiple bands of the aerial photo serve
as the independent variables. The proposed method can thus estimate the height at each pixel from values of its surrounding pixels
with consideration of the distances between the pixels and similarities between their brightness values. Clusters of contiguous pixels
with higher estimated height values distinguish themselves from surrounding roads or other surfaces. A case study is conducted to
evaluate the performance of the proposed method. It is found that the accuracy of the proposed hybrid method is better than that of image classification of aerial photos alone or height extraction from LiDAR data alone. We argue that this simple and effective method can be very useful for automatic detection of buildings in urban areas.
1. INTRODUCTION
Human activity patterns result in increased complexity and spatial extent of built environments (You and Zhang, 2006).
Rapid urbanization leads to city expansions and sprawls, causing
land use intensification. Consequently, high density buildings
have been constructed in many city centers to accommodate the
demands for human activities. The heights and densities of
buildings also affect physical urban environmental conditions
such as wind, access of sunlight, interior temperatures, etc. (Yu
et al., 2010). Therefore, monitoring urban expansion and measuring building heights have been gaining research interest in urban planning and urban studies.
Theoretically, ground survey can provide the most accurate
measurements of built environments, but it is labor and resource
intensive, as well as time consuming. Therefore, high-resolution
satellite images and aerial photos are widely adopted in urban
studies (Wu and Zhang, 2012). However, urban areas are spatially
heterogeneous, and thus image classification is complicated due
to variations in shapes and materials of human-made structures
(Hoalst-Pullen and Patterson, 2011). Different types of data are
hence integrated with aerial photos and satellite images to
overcome this problem. For instance, high resolution Synthetic
Aperture Radar (SAR) data have been fused with LiDAR data
and GIS data to generate 3D city models (Soergel et al., 2005).
SPOT images and InSAR data were integrated using a
classification and regression tree (CART) algorithm to map an
urban impervious surface (Jiang et al., 2009).
Although LiDAR data has become increasingly available, the
quality and quantity of the data are not always guaranteed. This
has constrained its utilization for building detection. However,
there are some unique characteristics of LiDAR data in an urban
environment from which we can glean some important
information to be integrated with other types of data for building
classification. For instance, the heights of a building's rooftop show relatively regular patterns, and they are highly locally autocorrelated while globally heterogeneous. This study
proposes an effective method that integrates the sparse height
information of LiDAR data and spectral information from aerial
photos (or other remotely sensed images) in geographically
weighted regression (GWR) models. The established GWR
models can be used to automatically estimate building heights for
the entire study area. A case study is conducted to illustrate the
process and to evaluate the effectiveness of the approach. The
LiDAR data in this paper were collected at 2-meter point spacing
in Florida, USA. Such point density is insufficient for building
extraction. Therefore, an aerial photo was obtained for
assistance. The GWR models integrate characteristic variables from both data sources to estimate the height at each pixel from the height values of its nearby pixels, based on their distances and the similarity of their spectral characteristics. Clusters of contiguous
pixels with higher estimated height values distinguish themselves
from surrounding roads or other surfaces. It is found that the hybrid GWR-based method produces better results than either the traditional image classification of aerial photos or the classification of LiDAR data alone.
2. LITERATURE REVIEW
Light Detection and Ranging (LiDAR) data have become increasingly popular in urban studies due to their immediate generation of 3D data, high accuracy and acquisition flexibility (Stojanova et al., 2010). Mobile laser scanning data have been utilized to model trees for 3D city model building (Rutzinger et al., 2011). The Support Vector Machine classification algorithm has been applied for building extraction (Malpica et al., 2013). Building contour
lines are generated based on slope parameters from TIN polygons
which are generated from LiDAR points (Alexander et al., 2009).
However, because acquisition of LiDAR data is expensive, such
data are only available for limited spatial coverage with data gaps
for some targeted areas or spatial features. Satellite images and
aerial photos are of relatively lower cost and broader spatial
coverage (Stojanova et al., 2010), and are thus often combined
with LiDAR data for building extraction. For instance, in You
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume II-2, 2014
ISPRS Technical Commission II Symposium, 6 – 8 October 2014, Toronto, Canada
This contribution has been peer-reviewed. The double-blind peer-review was conducted on the basis of the full paper.
doi:10.5194/isprsannals-II-2-23-2014 23
and Zhang's research (You and Zhang, 2006), the building edge is first identified by applying a Laplacian sharpening operator to the CCD image. The building contour is then extracted based on a bidirectional projection histogram and further calibrated by a line-matching algorithm. LiDAR points and the building contours are
finally intersected to generate 3D building models. Kabolizade et al. (2010) improved the Gradient Vector Flow (GVF) snake model to extract building boundaries from an aerial photo and LiDAR data. A recursive
connected-component identification and indexing algorithm is
applied in Yu’s study to delineate the building boundary (Yu et
al., 2010). Other methods of building extraction have also been
used to extract buildings footprints and heights from LiDAR data
and aerial photos. Examples include regression analysis, image classification, linear spectral mixture analysis (LSMA), artificial neural networks (ANN) and object-based image analysis (OBIA) (Weng, 2012). Methods for evaluating results of building
extraction from the remotely sensed data have been developed
and discussed (Awrangjeb, Ravanbakhsh, and Fraser, 2010). In
addition to traditional pixel-based image classification, an object-oriented (OO) classification method following a fractal net evolution approach has been introduced to discriminate buildings,
vegetation and soil based on the spectral and shape heterogeneity
(Zhou and Troy, 2008).
3. METHODOLOGY
In this section, the proposed hybrid height estimation method is
introduced, illustrated with a case study. The method first
performs the traditional LiDAR data classification and the image
classification of air photos for building detection to serve as the
benchmark results. A GWR-based height prediction method is
then introduced to estimate the building heights. Characteristics
extracted from the sparse LiDAR points and the aerial photos are
used as explanatory variables in the GWR models. The resulting
heights provide the basis for building detection.
3.1 Data Description
The LiDAR dataset used in this paper covers Sarasota County, Florida. It consists of 2 lifts of flight lines acquired in 2 sorties
using the Leica ALS40 sensor. The data were captured at a flying height of 6,000 feet AMT with a scan rate of 13 Hz and a 25-degree field of view. LiDAR points were collected at
approximately 2-meter point spacing (Figure 1). An aerial photo
of the same study area with a 0.3 meter spatial resolution is
downloaded from the USGS website (Figure 2).
Figure 1. Sparse LiDAR Points in 3D View
Figure 2. Aerial Photo of the Study Area
3.2 LiDAR Data Classification
LiDAR points can be classified into bare earth, trees, buildings,
and other land cover types based on their height information and
number of returns. However, such classification normally
requires high quality LiDAR data, and employs sophisticated
methods demanding human interventions. For example, LiDAR
Analyst software provides good classification solutions
(Overwatch Systems, Ltd., 2013) which can produce efficient
results for building extraction from LiDAR data. But when the density of data points is relatively low, the results can be disappointing. Figure 3 demonstrates how the LiDAR Analyst tool mistakenly classifies the buildings, trees and forests from the LiDAR points with 2-meter point spacing. Because some small apartment buildings reflected only a few LiDAR signals, they were classified as trees or forests in the classification process. This example suggests that low point density is insufficient for building detection from LiDAR data alone.
3.3 Image Classification
Image classification groups image pixels with similar brightness values into the same classes. An unsupervised classification of the aerial photo is performed to extract the buildings based on image data only. The classification algorithm assigns a class value to each pixel depending on the similarity of the brightness values from the red, green and blue bands. As shown in Figure 4, the trees, parts of the roads and the
shadows of buildings are classified from class 1 to class 6. The
building surfaces and the remaining roads are all classified from
class 7 to class 10 in the unsupervised classification. Such segmentation cannot completely separate the roads from the building surfaces because they have nearly identical brightness values in the visible bands. Although the road segments and the buildings might be further distinguished using hyperspectral images, acquiring hyperspectral images at such high spatial resolution is extremely expensive, and the image classification parameters need calibration depending on the building types and road conditions.
Figure 3. LiDAR Analyst Classification Result from the Sparse LiDAR Points
3.4 The Proposed Hybrid GWR-Based Building Detection
Method
The Geographically Weighted Regression (GWR) is a family of
regression models in which the coefficients are allowed to vary
spatially. It can be considered a localized regression. The GWR
uses locations of sample points for a form of spatially weighted
least squares regression, in which the coefficients are determined
by examining the set of points within a well-defined
neighbourhood of each of the sample points (Smith, Longley, and
Goodchild, 2011). In the proposed method for building detection, the underlying idea is that nearby locations with similar spectral characteristics belong to the same spatial object, and thus may share the same height value.
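The distance-decay weighting at the heart of GWR can be illustrated with a common kernel. A minimal sketch, assuming the widely used bi-square kernel (the paper does not state which kernel its GWR software applies): observations within the bandwidth receive a weight that falls from 1 at the regression point to 0 at the bandwidth.

```python
# Bi-square distance-decay kernel, a standard choice in GWR weighting.
# This is an illustrative assumption, not the authors' stated kernel.

import numpy as np

def bisquare_weights(distances: np.ndarray, bandwidth: float) -> np.ndarray:
    """Bi-square kernel: w = (1 - (d/b)^2)^2 for d < b, else 0."""
    ratio = distances / bandwidth
    w = (1.0 - ratio**2) ** 2
    w[distances >= bandwidth] = 0.0   # no influence beyond the bandwidth
    return w

d = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # distances in meters
print(bisquare_weights(d, 30.0))  # weight 1 at the point itself, 0 beyond 30 m
```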
The major advantage of the method lies in its utilization of the unique characteristics of each type of data.
spatial heterogeneity of the LiDAR data and the aerial photos are
considered. The proposed method utilizes both height
information from the LiDAR points and spectral characteristics
from the aerial photos. By integrating the LiDAR data and the
aerial photos, the final GWR model can predict height for each
pixel of the aerial photo data. The height values help to separate
the buildings from the road segments, and the spectral differences
help to distinguish buildings from trees and shadows. Detailed
steps are explained below.
Figure 4. Unsupervised Image Classification for Building
Extraction
3.4.1 Normalized DSM: A digital surface model (DSM) is
the foundation for the computation of building height from
LiDAR data. A Normalized DSM (nDSM), which contains
relative height information of objects above the ground (Yu et al., 2010), is constructed from the LiDAR points. The nDSM values from the LiDAR points are considered the heights of spatial features, e.g., heights of trees or buildings. The calculation is performed in LAStools, a free LiDAR processing tool. The majority of object heights in the study area range from 0 to 39 meters. Extreme height values, which are usually caused by tree canopies (shown as red points in Figure 5), are filtered out in further calculations. The other height values located
within the building pixels are considered as building heights. The
building height values will serve as the dependent variable in the
following step of GWR-based modeling.
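The nDSM computation above amounts to a per-cell subtraction of bare-earth elevation from surface elevation, followed by filtering of extreme values. A tiny sketch with invented elevation grids (the paper performs this step with a dedicated LiDAR processing tool):

```python
# Illustrative nDSM = DSM - DTM, then masking of extreme heights
# (e.g. canopy spikes), as described in the text. All values invented.

import numpy as np

dsm = np.array([[12.0, 14.0, 55.0],
                [11.0,  9.0,  8.0]])   # surface elevations (m), hypothetical
dtm = np.array([[10.0, 10.0, 10.0],
                [ 9.0,  9.0,  8.0]])   # bare-earth elevations (m)

ndsm = dsm - dtm                        # object heights above ground
ndsm[ndsm > 39.0] = np.nan              # drop extreme values (canopy spikes)
print(ndsm)
```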
3.4.2 Data Conversion: In order to be further integrated with
the LiDAR point data, the aerial photo pixel values of the red,
blue and green band are assigned to the nearest LiDAR point
using the Extraction function in ArcGIS. Starting from the image
classification results of the traditional pixel-based method, the
pixels of selected classes (classes 7 and above in the case study)
are extracted as the building mask to exclude non-building pixels.
This masking and filtering process is crude and not strictly necessary, but it reduces the model's computational load and thus improves efficiency. The remaining pixels, belonging to either road segments or buildings, are further separated based on the heights from the GWR prediction.
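The "assign photo values to the nearest LiDAR point" step can be sketched as a nearest-neighbour lookup. The paper uses the Extraction function in ArcGIS, so this is only an equivalent stand-in; coordinates and values are invented, and a brute-force search is used for clarity (a k-d tree would be preferable for real data volumes).

```python
# Attach the RGB of the nearest aerial-photo pixel to each LiDAR point.
# Hypothetical coordinates/values; stand-in for the ArcGIS Extraction step.

import numpy as np

pixel_xy = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [2.0, 2.0]])
pixel_rgb = np.array([[200, 180, 170], [198, 181, 169], [60, 120, 40], [90, 90, 90]])

lidar_xy = np.array([[0.1, 0.1], [2.1, 1.9]])

# Squared distances from every LiDAR point to every pixel centre.
d2 = ((lidar_xy[:, None, :] - pixel_xy[None, :, :]) ** 2).sum(axis=-1)
nearest = d2.argmin(axis=1)            # index of the nearest pixel per point
rgb_at_points = pixel_rgb[nearest]     # RGB values attached to LiDAR points
print(rgb_at_points)
```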
Figure 5. Normalized DSM
3.4.3 The GWR Height Estimation Model: Since the LiDAR points are far sparser than the pixels, some pixels obtain height values from the LiDAR points they contain, while the majority of pixels contain no LiDAR points at all.
However, as a building occupies a cluster of contiguous pixels
with similar brightness values, the GWR model is able to predict
the building height of pixels based on the brightness values of
their surrounding pixels.
The GWR model (Figure 6) establishes a locally varying
relationship between height values and spectral variables. Note
that all locations have values of spectral characteristics. The
height of each pixel is thus estimated from the brightness values of its surrounding points, depending on their distances and the similarity of their brightness values. The threshold distance is
controlled by the bandwidth parameter in the GWR model. The
greater the bandwidth is, the further the pixel will be influenced
by its neighbors. The brightness values in the red, green and blue
bands are set as the independent variables. The classification type
in the mask layer, i.e., the class labels from the image
classification, is also tested as a possible predictive variable.
When used, it is treated as an ordinal variable rather than a nominal variable in this study, assuming that pixels with a greater class label are more likely to belong to buildings. Values from the mask thus also participate in the GWR regression. The coefficients of the GWR model are locally determined. Because most buildings in the study area have widths of less than 30 meters, the maximal bandwidth in the GWR model is set to 30 meters. As each pixel has its own set of coefficients for the GWR model, we call the entire spatial coverage of coefficients the coefficient raster. The coefficient
raster is calibrated with the original photos in the Raster Algebra
function to produce the predicted height raster. A height filter is
then applied to extract the pixels belonging to the buildings. The
building raster can also be converted to polygons for specific
purposes. Height is the dependent variable in the GWR-based modeling process; Table 1 lists the independent variables and the parameter choices. A number of GWR model structures are tested with different choices of predictive variables and parameter settings.
Model      Independent Variables    Bandwidth
Predict 1  Mask                     30 m
Predict 2  Red                      30 m
Predict 3  Red, Green, Blue         30 m
Predict 4  Red, Mask                30 m
Predict 5  Red, Green, Blue, Mask   30 m
Predict 6  Red, Green, Blue, Mask   25 m
Predict 7  Red, Green, Blue, Mask   20 m
Predict 8  Red, Green, Blue, Mask   15 m
Predict 9  Red, Green, Blue, Mask   10 m
Table 1. Parameters in GWR Model
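The locally weighted regression underlying these model variants can be sketched compactly. This is not the authors' implementation: the data are synthetic, a Gaussian kernel is assumed (the paper does not specify one), and the mask variable is omitted for brevity. For a prediction pixel, a weighted least-squares fit of height on the RGB values of nearby LiDAR-sampled pixels gives local coefficients, which are then applied to the pixel's own RGB values.

```python
# Minimal GWR-style local prediction sketch with synthetic data.
# Heights are generated as a linear function of R and G plus noise,
# so a good local fit should recover a prediction near 0.05*R + 0.02*G.

import numpy as np

rng = np.random.default_rng(0)

# Sampled locations with known heights (as if from LiDAR) and RGB values.
xy = rng.uniform(0, 100, size=(200, 2))
rgb = rng.uniform(0, 255, size=(200, 3))
height = 0.05 * rgb[:, 0] + 0.02 * rgb[:, 1] + rng.normal(0, 0.1, 200)

def gwr_predict(px_xy, px_rgb, bandwidth=30.0):
    """Predict height at one pixel by locally weighted least squares."""
    d = np.linalg.norm(xy - px_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian distance decay
    X = np.column_stack([np.ones(len(rgb)), rgb])  # intercept + R, G, B
    # Solve the weighted normal equations X'WX beta = X'Wy.
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ height)  # local coefficients
    return float(beta @ np.concatenate([[1.0], px_rgb]))

print(round(gwr_predict(np.array([50.0, 50.0]), np.array([100.0, 50.0, 20.0])), 2))
```

Repeating the fit at every pixel yields the "coefficient raster" described above; multiplying it with the photo's band values (raster algebra) gives the predicted height raster.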
Figure 6. Flowchart of the Proposed Method. (The flowchart shows aerial photos contributing pixel values of the visible bands and mask values from the image classification, and LiDAR points contributing heights via extraction; these enter the GWR model as independent and dependent variables together with the bandwidth and prediction locations. The predicted heights of points and the coefficient raster are combined by raster algebra into a predicted pixel-height raster, to which a height filter is applied to obtain the detected buildings, optionally converted from raster to polygons.)
4. RESULTS AND EVALUATION
The model performance statistics are reported in Table 2. They show that with more independent variables and shorter bandwidths, the GWR model can explain more of the variation in the heights and thus achieves a better fit. In Table 2, the residual is reduced and the Akaike Information Criterion (AIC) improves when more variables are employed and a shorter bandwidth is applied. The prediction with the mask value only (Predict 1) produces results similar to those from a single band value (Predict 2) when the same bandwidth distance is specified.
The R-squared of Predict 9 is the highest; there, the height of each pixel is predicted from all four independent variables with a neighborhood bandwidth of only 10 meters. This is because pixels located at shorter distances have higher similarity in brightness values and a higher probability of belonging to the same building surface, and thus have similar height values.
However, as the neighborhood bandwidth is reduced, the number of effective neighbors with known building heights decreases and more extreme height values are predicted by the model. A
sensitivity analysis was carried out to examine the role of
bandwidth size and the choice of independent variables. Results
are shown in Figure 7. For example, in the Predict 1 model, which has only one explanatory variable and a bandwidth of 30 meters, the predicted heights range from 0 to 39 meters, and no extreme height value exceeds 50 meters. As shown in Figure 7(a), adding more explanatory variables leads to more extreme predicted heights; the highest value, from the Predict 5 model, is greater than 100 meters. Similarly, shorter bandwidths also cause more extreme values to appear, as shown in Figure 7(b); some extreme height values approach 400 meters. These extreme height values become noise pixels and should be filtered out in the subsequent building classification and 3D model construction.
Due to the page limit, only the result from Predict 5 is presented in this paper. Since a building is normally higher than 3 meters, pixels whose predicted height is less than 3 meters are filtered out (Figure 8). A 3D model of the buildings is constructed (Figure 9). Compared with the unsupervised classification results (Figure 4), most buildings are successfully distinguished from the road segments by the GWR prediction (Figure 8). In addition, the different structures of the building roofs can be distinguished in the 3D model (Figure 9).
(a) Predicted heights by models with an increasing number of independent variables (from left to right) and the same bandwidth size
(b) Predicted heights by models with decreasing bandwidth size (from left to right) and constant independent variables
Figure 7. Predicted Height Values from Different GWR Model Structures
Model      Residual Squares  Effective Number  AIC        R2    Adjusted R2
Predict 1  105,992.80        456.61            14,441.34  0.56  0.44
Predict 2  105,839.68        454.55            14,449.69  0.57  0.44
Predict 3  21,120.59         134.73            2,570.61   0.60  0.32
Predict 4  76,889.58         444.96            10,256.27  0.52  0.29
Predict 5  4,717.33          26.86             479.93     0.61  0.25
Predict 6  2,776.48          24.83             397.85     0.67  0.28
Predict 7  1,977.75          22.99             346.90     0.66  0.12
Predict 8  513.78            15.36             257.03     0.78  0.11
Predict 9  91.49             9.53              670.23     0.94  0.61
Table 2. Model Performance for Different Parameters in Prediction
Figure 8. The Predicted Building Height
Figure 9. 3D View of the Detected Buildings
Some pixels of road segments still fail to be filtered out because their colors are identical to those of nearby building surfaces, or because proximity to a tree causes a higher predicted height. Those pixels could be further corrected if a building-area criterion were applied or a near-infrared band image were adopted.
The buildings detected from the LiDAR data alone and those from the GWR model are compared with manually interpreted results to evaluate their accuracy. As shown in Table 3, the GWR method outperforms the LiDAR data classification in two respects. First, its overall accuracy (the number of correctly classified pixels divided by the total number of pixels (Congalton, 1991)) is slightly better, although both methods show good overall accuracy. Second, and most dramatically, the results from the proposed GWR-based method show much higher agreement with the ground-truth data, as indicated by the much higher Kappa coefficient. This higher agreement is also confirmed by the visual comparison shown in Figure 10. Therefore, the GWR-based height prediction method produces better building detection results than the classification of LiDAR data alone. An accuracy evaluation for the image classification is not conducted because the image classification cannot distinguish the road segments from the buildings at all.
                 Overall Accuracy  Kappa Coefficient
GWR Result       0.87              0.64
LiDAR Data Only  0.81              0.11
Table 3. Accuracy Evaluation
Figure 10. Predicted Buildings by Different Methods
5. CONCLUSION
LiDAR data classification for building extraction requires high-quality data and sophisticated methods. These requirements have constrained the utilization of sparse LiDAR points for building detection in urban environments. Aerial photo classification, on the other hand, cannot distinguish road surfaces from building roofs. This
paper presents a hybrid GWR-based height estimation method
for building detection from sparse LiDAR points and aerial
photos.
The final results show that the proposed method clearly outperforms the traditional methods using LiDAR data or aerial photos alone. We believe this is because the method takes advantage of the hidden spatial characteristics of the available data and explicitly considers them in the method's development. Such spatial characteristics include spatial dependence, spatial heterogeneity, and spatial continuity within the datasets. We conclude that the hybrid GWR-based method is
a very promising method that can be applied widely when fast and automatic building detection is needed. However, we do recognize that some road segments are still mistakenly estimated as buildings due to their spectral similarity to nearby building surfaces and the interference of surrounding trees. Such 'noise' can be further filtered out by adopting near-infrared images or introducing building-area criteria. Other methods for overcoming this problem can be further explored.
The GWR-based building detection method is simple and effective. Compared with traditional building extraction methods, the GWR model requires less human intervention and produces outputs with equivalent or even higher accuracy. The GWR-based method can be applied for automatic detection of buildings where high-quality data are unavailable or a simple method is demanded.
6. REFERENCES
Alexander, C., S. Smith-Voysey, C. Jarvis, and K. Tansey. 2009.
Integrating building footprints and LiDAR elevation data to
classify roof structures and visualise buildings. Computers,
Environment and Urban Systems 33 (4):285–292.
Awrangjeb, M., M. Ravanbakhsh, and C. S. Fraser. 2010.
Automatic detection of residential buildings using LIDAR data
and multispectral imagery. ISPRS Journal of Photogrammetry
and Remote Sensing 65:457–467.
Congalton, R. G. 1991. A review of assessing the accuracy of
classifications of remotely sensed data. Remote sensing of
environment 37 (1):35–46.
Hoalst-Pullen, N., and M. W. Patterson. 2011. Applications and
Trends of Remote Sensing in Professional Urban Planning.
Geography Compass 5 (5):249–261.
Jiang, L., M. Liao, H. Lin, and L. Yang. 2009. Synergistic use of
optical and InSAR data for urban impervious surface mapping: a
case study in Hong Kong. International Journal of Remote
Sensing 30 (11):2781–2796.
Kabolizade, M., H. Ebadi, and S. Ahmadi. 2010. An improved
snake model for automatic extraction of buildings from urban
aerial images and LiDAR data. Computers, Environment and
Urban Systems 34 (5):435–441.
Malpica, J. A., M. C. Alonso, F. Papi, A. Arozarena, and A. M.
D. Agirre. 2013. Change detection of buildings from satellite
imagery and lidar data. International Journal of Remote Sensing
34 (5):1652–1675.
Overwatch Systems, Ltd. 2013. LIDAR Analyst.
https://www.overwatch.com/products/geospatial/lidar_analyst
(last accessed 25 April 2014).
Rutzinger, M., A. K. Pratihast, S. J. Oude Elberink, and G.
Vosselman. 2011. Tree modelling from mobile laser scanning
data-sets. Photogrammetric Record 26 (135):361–372.
Smith, M. de, P. Longley, and M. Goodchild. 2011. Geospatial Analysis: A Comprehensive Guide, 3rd edition. The Winchelsea Press.
Soergel, U., K. Schulz, U. Thoennessen, and U. Stilla. 2005.
Integration of 3D data in SAR mission planning and image
interpretation in urban areas. Information Fusion 6 (4):301–310.
Stojanova, D., P. Panov, V. Gjorgjioski, A. Kobler, and S.
Džeroski. 2010. Estimating vegetation height and canopy cover
from remotely sensed data with machine learning. Ecological
Informatics 5 (4):256–266.
Weng, Q. 2012. Remote sensing of impervious surfaces in the
urban areas: Requirements, methods, and trends. Remote Sensing
of Environment 117 (0):34–49.
Wu, K., and H. Zhang. 2012. Land use dynamics, built-up land
expansion patterns, and driving forces analysis of the fast-
growing Hangzhou metropolitan area, eastern China (1978–
2008). Applied Geography 34 (0):137–145.
You, H., and S. Zhang. 2006. 3D building reconstruction from
aerial CCD image and sparse laser sample data. Optics and
Lasers in Engineering 44 (6):555–566.
Yu, B., H. Liu, J. Wu, Y. Hu, and L. Zhang. 2010. Automated
derivation of urban building density information using airborne
LiDAR data and object-based method. Landscape and Urban
Planning 98 (3–4):210–219.
Zhou, W., and A. Troy. 2008. An object-oriented approach for
analysing and characterizing urban landscape at the parcel level.
International Journal of Remote Sensing 29 (11):3119–3135.