1. The document discusses three unmixing strategies (FCLS-R, FCLS-E, and TRUST) for estimating temperature, emissivity, and abundance from hyperspectral thermal imagery of mixed pixels.
2. FCLS-R performs linear unmixing on the measured radiance, FCLS-E unmixes estimated emissivities, and TRUST estimates subpixel temperatures while accounting for a range of possible emissivity values.
3. Results on synthetic and real data show that the strategies can accurately estimate parameters for mixtures whose component temperatures differ by less than 15 K and whose abundances are sufficiently large, helping to characterize heterogeneous scenes.
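As an illustration of the fully constrained least squares (FCLS) step that FCLS-R and FCLS-E share, the sum-to-one constraint can be folded into a nonnegative least squares solve by augmenting the system with a heavily weighted row of ones. This is a minimal sketch with a hypothetical endmember matrix, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, x, delta=1e3):
    """Fully constrained unmixing: a >= 0 and sum(a) = 1.

    The sum-to-one constraint is enforced softly by stacking a row of
    ones (weighted by delta) on top of the endmember matrix E.
    """
    E_aug = np.vstack([delta * np.ones((1, E.shape[1])), E])
    x_aug = np.concatenate([[delta], x])
    a, _ = nnls(E_aug, x_aug)
    return a

# toy example: 3 endmembers over 5 spectral bands (synthetic)
rng = np.random.default_rng(0)
E = rng.random((5, 3))
a_true = np.array([0.6, 0.3, 0.1])
x = E @ a_true          # noiseless mixed pixel
a_hat = fcls(E, x)
```

With noiseless data the augmented least squares problem has a zero-residual nonnegative solution, so the true abundances are recovered.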
1. This study developed a neural network technique to detect volcanic ash and retrieve ash mass from MODIS data during the 2010 Eyjafjallajokull eruption in Iceland. 2. The neural networks accurately detected ash and estimated ash mass, performing comparably to or better than traditional brightness temperature difference methods. 3. The trained neural networks could also be applied to a 2011 eruption in Iceland, demonstrating the potential for neural networks to rapidly process MODIS data during volcanic eruptions.
Dengue Vector Population Forecasting Using Multisource Earth Observation Prod... (University of Pavia)
The document describes a methodology for forecasting dengue vector populations one week ahead using multi-source earth observation data and recurrent neural networks. The methodology clusters mosquito population ground truth data using k-means clustering. It then trains an encoder-decoder LSTM neural network on time series of mean earth observation features for each cluster. The model is tested against random forest and k-nearest neighbor models on data from two locations in Brazil. Results show the LSTM model follows the highest and lowest observed mosquito population values more accurately than the other models.
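The clustering stage can be sketched with scikit-learn's KMeans on weekly trap-count series; the site counts, series length, and cluster number below are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# hypothetical weekly mosquito trap counts: 15 low-activity and
# 15 high-activity sites, 20 weeks each
low = rng.poisson(5, size=(15, 20))
high = rng.poisson(40, size=(15, 20))
series = np.vstack([low, high]).astype(float)

# group sites with similar population dynamics before training a
# per-cluster forecaster
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(series)
labels = km.labels_
```

Each cluster would then feed its own encoder-decoder LSTM in the summarized methodology.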
This study investigates the scale effect of the relationship between the normalized difference vegetation index (NDVI) and land surface temperature (T) and improves a thermal sharpening method called TsHARP. The study finds that the slope of the NDVI-T relationship increases more significantly with spatial extent than with spatial resolution alone. An improved TsHARP method is developed that establishes the NDVI-T regression relationship based on the spatial extent of individual thermal pixels, rather than the entire image extent. Testing shows the improved method produces a sharper and more accurate thermal sharpening result compared to the original TsHARP method.
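The core TsHARP mechanics, regressing coarse-scale temperature on coarse-scale NDVI, predicting at the fine scale, and re-injecting the coarse residuals, can be sketched as follows. The NDVI field and the exactly linear NDVI-T relation are synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical fine-resolution NDVI (64x64) and the fine-scale T it drives
ndvi_fine = rng.uniform(0.1, 0.8, (64, 64))
t_fine_true = 320.0 - 30.0 * ndvi_fine   # synthetic linear NDVI-T relation

def block_mean(img, f):
    # aggregate an image to coarse resolution by f x f block averaging
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

f = 8
ndvi_coarse = block_mean(ndvi_fine, f)
t_coarse = block_mean(t_fine_true, f)

# TsHARP-style step 1: fit T = a + b * NDVI at the coarse scale
b, a = np.polyfit(ndvi_coarse.ravel(), t_coarse.ravel(), 1)
# step 2: predict T at the fine scale from fine NDVI
t_sharp = a + b * ndvi_fine
# step 3: re-inject the coarse-scale residuals to preserve the coarse field
resid = t_coarse - block_mean(t_sharp, f)
t_sharp += np.kron(resid, np.ones((f, f)))
```

The paper's improvement then fits this regression per thermal pixel extent rather than once over the whole image.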
Photoacoustic tomography based on the application of virtual detectors (IAEME Publication)
This document discusses using virtual detectors to improve photoacoustic tomography (PAT) image reconstruction when full scanning data is unavailable. It proposes interpolation and compressed sensing methods to generate virtual detector data and increase the number of measurements. Simulation results show applying these methods to preprocessed photoacoustic data significantly improves the peak signal-to-noise ratio of reconstructed images compared to direct reconstruction with limited detectors. Dictionary-based compressed sensing provides the best performance by learning an over-complete dictionary to sparsify signals. The methods allow better quality PAT imaging when hardware and spatial constraints limit actual detector positions and sampling angles.
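The simplest of the proposed ideas, generating virtual detectors by interpolating between real ones, can be illustrated with periodic linear interpolation around a detector ring; the signal and geometry here are hypothetical:

```python
import numpy as np

def pressure(theta):
    # smooth stand-in for a photoacoustic signal around the scan circle
    return np.sin(theta) + 0.5 * np.cos(2 * theta)

# 16 real detectors evenly spaced on the ring
angles_real = np.linspace(0, 2 * np.pi, 16, endpoint=False)
measured = pressure(angles_real)

# synthesize 64 virtual detectors by periodic linear interpolation,
# quadrupling the number of measurements available for reconstruction
angles_virt = np.linspace(0, 2 * np.pi, 64, endpoint=False)
measured_virt = np.interp(angles_virt, angles_real, measured, period=2 * np.pi)
```

The compressed sensing variants in the document replace this interpolation with sparse recovery over a learned dictionary.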
Investigation of Chaotic-Type Features in Hyperspectral Satellite Data (csandit)
This document analyzes the use of Lyapunov exponents to determine chaotic structure in hyperspectral satellite data. It investigates an EO-1 Hyperion hyperspectral image of a mixed forest site in Turkey. Lyapunov exponents are calculated from reconstructed phase spaces of spectral signals for different object classes. Both positive and negative Lyapunov exponents are obtained, with the positive exponents indicating chaotic behavior. The results demonstrate Lyapunov exponents can be used as discriminative features to improve hyperspectral image classification accuracy by capturing the chaotic structures in the data.
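Computing Lyapunov exponents starts from a time-delay (phase space) reconstruction of each spectral signal. A minimal embedding sketch, using a synthetic stand-in for a spectral signal:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase space from a 1-D signal via time-delay embedding.

    Row i of the result is (x[i], x[i+tau], ..., x[i+(dim-1)*tau]).
    """
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# stand-in for a band-by-band spectral signal of one object class
x = np.sin(np.linspace(0, 20 * np.pi, 1000))
ps = delay_embed(x, dim=3, tau=5)
```

Lyapunov exponents would then be estimated from the divergence of nearby trajectories in `ps` (e.g. with Rosenstein's or Wolf's method, not shown here).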
This document summarizes a study that developed a Web ERosivity Module (WERM) to calculate rainfall erosivity (R factor) values across South Korea. The study calculated annual R factors for 75 stations using the WERM tool and hourly rainfall data from 1999-2015. Spatial interpolation was used to create an R factor map of South Korea. Regression equations were also developed to estimate monthly R factors from monthly rainfall and the month of the year for several locations, with R^2 values ranging from 0.75 to 0.92. Comparison of R factors from WERM and the Ministry of Environment showed differences of up to 46%.
This document presents a Bayesian methodology for retrieving soil parameters like moisture from SAR images. It begins by introducing the importance of soil moisture monitoring and the opportunity provided by Argentina's upcoming SAOCOM SAR satellite. It then discusses limitations of traditional retrieval models in accounting for speckle noise and terrain heterogeneity. The document proposes a Bayesian approach using a multiplicative speckle model within a likelihood function to estimate soil moisture and roughness from SAR backscatter measurements. Simulation results show the Bayesian method retrieves soil moisture across the full measurement space and provides error estimates, with improved precision at higher numbers of looks.
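The flavor of the Bayesian retrieval can be sketched for a single pixel: a gamma likelihood encodes L-look multiplicative speckle, and a grid posterior over soil moisture yields both an estimate and an error bar. The linear forward model and all numeric values below are invented for illustration; they are not the document's model:

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import trapezoid

def forward(mv):
    # hypothetical forward model: mean backscatter intensity vs soil moisture
    return 0.02 + 0.3 * mv

L_looks = 16      # number of looks of the multilooked SAR intensity
obs = 0.10        # one observed intensity (hypothetical)

# multiplicative speckle: an L-look intensity is Gamma(L, mean/L) distributed
mv_grid = np.linspace(0.01, 0.5, 500)
likelihood = gamma.pdf(obs, a=L_looks, scale=forward(mv_grid) / L_looks)
posterior = likelihood / trapezoid(likelihood, mv_grid)   # flat prior

mv_map = mv_grid[np.argmax(posterior)]                    # point estimate
sigma = np.sqrt(trapezoid((mv_grid - mv_map) ** 2 * posterior, mv_grid))
```

Increasing `L_looks` narrows the gamma likelihood, which is the mechanism behind the improved precision at higher numbers of looks reported in the summary.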
Directional Analysis and Filtering for Dust Storm detection in NOAA-AVHRR Ima... (Pioneer Natural Resources)
This document proposes techniques for detecting dust storms and determining their direction of transport using NOAA-AVHRR satellite imagery. It introduces a two-part approach using image processing algorithms: 1) A visualization technique uses filters, edge detectors and classification to locate dust sources. 2) An automation technique performs power spectrum analysis to detect dust storm direction by analyzing texture orientation in image blocks. The goal is to automatically detect and track dust storms for applications like hazard monitoring.
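The power spectrum analysis step can be sketched with NumPy: the dominant peak of the 2-D spectrum of a textured image block reveals the texture orientation. The striped test block below is synthetic, standing in for dust-streak texture:

```python
import numpy as np

# synthetic 128x128 block with stripes oriented at 30 degrees
n = 128
y, x = np.mgrid[0:n, 0:n]
theta = np.deg2rad(30)
img = np.sin(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / 8.0)

# 2-D power spectrum; its dominant off-center peak encodes orientation
F = np.fft.fftshift(np.abs(np.fft.fft2(img)) ** 2)
F[n // 2, n // 2] = 0                      # suppress the DC component
ky, kx = np.unravel_index(np.argmax(F), F.shape)
angle = np.rad2deg(np.arctan2(ky - n // 2, kx - n // 2)) % 180
```

Running this per image block, as the automation technique does, yields a map of local transport-texture directions.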
Vegetation monitoring using GPM data over Mongolia (GeoMedeelel)
Vegetation mapping was conducted over Mongolian land using GPM/DPR satellite data. Backscatter signals from the Ku and Ka band radar were analyzed for different land cover types from 2014-2018. The signals showed incidence angle dependency and seasonal variations that correlated with land cover. Forest had higher backscatter than grass, while desert was lowest. Combining the Ku and Ka band signals using PCA clearly distinguished forest, grass and desert. This new remote sensing method using precipitation radar provides an effective way to continuously map and monitor vegetation dynamics for detecting land degradation.
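Combining the two radar bands with PCA can be sketched as follows; the Ku/Ka backscatter values per cover type are invented for illustration, chosen only to follow the reported forest > grass > desert ordering:

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical (Ku, Ka) backscatter in dB for three cover types, 50 samples each
forest = rng.normal([-8, -10], 0.5, (50, 2))
grass = rng.normal([-12, -15], 0.5, (50, 2))
desert = rng.normal([-18, -22], 0.5, (50, 2))
X = np.vstack([forest, grass, desert])

# PCA via SVD of the centered data; PC1 captures the shared
# vegetation-density axis of the two bands
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]
```

Projecting onto PC1 places the three cover types in monotone order along a single axis (sign of the component is arbitrary), which is what makes the combined-band separation clear.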
The document presents a method for registering geophysical images from different modalities to increase the information obtained. It uses mutual information as a similarity measure to find the translation between image pairs. It then introduces an algorithm that applies local distortions through a windowing approach while maintaining grid continuity, improving upon simple rigid registration. The method is tested on archaeological image datasets, successfully registering image pairs in agreement with their known coordinates and increasing their mutual information.
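The translation search driven by mutual information can be sketched as an exhaustive scan over integer shifts. The "second modality" here is simulated by inverting intensities, so correlation would fail but mutual information still peaks at the true shift:

```python
import numpy as np

def mutual_info(a, b, bins=16):
    # mutual information of two images from their joint histogram
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return (p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum()

rng = np.random.default_rng(5)
ref = rng.random((64, 64))
mov = 1.0 - np.roll(ref, (3, -2), axis=(0, 1))   # shifted, inverted "modality"

# brute-force search for the translation maximizing mutual information
best = max(
    ((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
    key=lambda s: mutual_info(ref, np.roll(mov, (-s[0], -s[1]), axis=(0, 1))),
)
```

The document's algorithm extends this rigid step with windowed local distortions while keeping the deformation grid continuous.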
This study implements a Bayesian statistical framework to calibrate the SCOPE process-based simulator for simulating gross primary production (GPP) and top-of-canopy reflectance at a spruce flux tower site in the Czech Republic. Markov chain Monte Carlo simulation is used to quantify the uncertainty in SCOPE input parameters by comparing simulated and measured reflectance and GPP data. The results show the posterior parameter distributions have lower uncertainty than the prior distributions. Simulated half-hourly GPP over the growing season using maximum a posteriori parameter estimates matches the measured data. Future work will estimate seasonal parameter variations to improve GPP simulation accuracy.
This document summarizes the analysis of the Bd/s → μ+μ- decay process conducted by the ATLAS collaboration. The analysis used data from LHC Run 1 taken between 2011-2012. Key aspects included a blind analysis technique, multivariate classifiers to suppress backgrounds, and a relative branching ratio measurement using the B± → J/ψK± decay as a reference channel. After applying selection cuts and classifiers, the expected yield of Bd/s → μ+μ- signal events was 41 Bs and 5 Bd decays. However, the maximum likelihood fit of the dimuon mass spectrum found lower-than-expected yields of 16 ± 12 Bs and -11 ± 9 Bd decays, corresponding to a significance of 1
The document describes a new regression model developed to estimate global solar radiation using artificial neural networks. The model was developed based on the Angstrom-Prescott model and used only latitude and altitude as inputs to estimate monthly average daily global solar radiation. The neural network model with 10 hidden neurons was trained on data from 4 locations in North India. The model estimated regression coefficients a ranging from 0.209 to 0.222 and b ranging from 0.253 to 0.407. Validation tests showed good agreement with actual solar radiation values.
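The underlying Angstrom-Prescott relation, H = H0 (a + b S/S0), is simple enough to state directly. The coefficient values and inputs below are illustrative picks within the ranges the summary reports, not the model's fitted outputs:

```python
def global_radiation(h0, s_ratio, a=0.22, b=0.40):
    """Angstrom-Prescott estimate of monthly average daily global radiation.

    h0      : extraterrestrial radiation (e.g. MJ/m^2/day)
    s_ratio : relative sunshine duration S/S0
    a, b    : regression coefficients; the defaults are hypothetical values
              inside the 0.209-0.222 and 0.253-0.407 ranges reported.
    """
    return h0 * (a + b * s_ratio)

h = global_radiation(30.0, 0.6)   # hypothetical inputs
```

In the summarized model, a neural network predicts the coefficients a and b from latitude and altitude; this sketch only shows the relation those coefficients feed into.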
This document proposes a new Bayesian algorithm called MANIPOP for 3D reconstruction from single-photon Lidar data that accounts for multiple surfaces per pixel, broadening of the instrumental response function, and highly scattering media. The algorithm models each return as a point in 3D space and uses a reversible jump MCMC approach to estimate the number and positions of points. Experiments reconstructing long range building data and underwater pipe data demonstrate the algorithm can handle broadening and scattering with negligible increase in runtime compared to other methods. Future work will focus on real-time reconstruction, multiple views, and multispectral Lidar.
Edge detection algorithm based on quantum superposition principle and photons... (IJECEIAES)
The detection of object edges in images is a crucial step in a vast number of computer vision applications, for which a series of different algorithms has been developed over recent decades. This paper proposes a new edge detection method based on quantum information, which proceeds in two main steps: (i) an image enhancement stage that employs the quantum superposition law and (ii) an edge detection stage based on the probability of photon arrival at the camera sensor. The proposed method has been tested on synthetic and real images related to agricultural applications, with the Fram & Deutsch criterion adopted to evaluate its performance. The results show that the proposed method gives better results in terms of detection quality and computation time than classical edge detection algorithms such as Sobel, Kayyali, and Canny, and a more recent algorithm based on Shannon entropy.
This document summarizes a study that used satellite imagery to analyze land surface temperatures in recently developed areas of Stratford, London. The study aimed to identify areas affected by recent construction, generate calibrated land surface temperature models from satellite data, and identify problematic hotspot areas through spatial analysis and field measurements. Results found higher temperatures near the Westfield shopping center, Old Stratford station, and along major roads, likely due to anthropogenic heat sources. Microclimate modeling of an area in East Olympic Village showed influence of building materials, shade, and wind on surface temperatures. The study fulfilled its objectives but future work could incorporate additional data sources and validation measurements.
The document discusses integrating support vector machines (SVMs) and Markov random fields (MRFs) for remote sensing image classification. SVMs are good at identifying optimal discriminant hypersurfaces but do not consider context between samples. The paper aims to integrate SVMs and MRFs to allow for contextual classification. A novel classifier is proposed that reformulates the MRF minimum-energy decision rule as an SVM discriminant function with a "contextual kernel." Experimental results on real remote sensing datasets show the proposed method provides significantly more accurate classifications compared to a standard noncontextual SVM.
Estimation of land surface temperature of Dindigul district using Landsat 8 data (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. It brings together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
Backtracking Algorithm for Single-Axis Solar Trackers installed in a sloping ... (IJERA Editor)
In this paper we present a backtracking algorithm that improves the energy production of a single-axis solar tracker by reducing the shadowing caused by neighboring panels. Moreover, the proposed algorithm can operate on any field slope, avoiding the need to level the field where the solar tracker is placed. This is an important feature, since it reduces the time and manpower required during solar tracker setup. The results show that the algorithm performs similarly to comparable algorithms that were designed only for horizontal fields.
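For context, the classical flat-field backtracking rule (a textbook formulation, not the sloped-field algorithm of the paper) limits the tracker rotation whenever the ideal angle would shade the neighboring row, as a function of the ground coverage ratio:

```python
import math

def backtracking_angle(theta_ideal_deg, gcr):
    """Flat-field backtracking rotation for a single-axis tracker.

    theta_ideal_deg : ideal tracking rotation from horizontal, in degrees
    gcr             : ground coverage ratio = panel width / row pitch
    When the sun is high enough that rows cannot shade each other, the
    ideal angle is used; otherwise the rotation is backed off.
    """
    t = math.radians(theta_ideal_deg)
    c = math.cos(t) / gcr
    if c >= 1.0:                      # no inter-row shading possible
        return theta_ideal_deg
    correction = math.degrees(math.acos(c))
    return theta_ideal_deg - math.copysign(correction, theta_ideal_deg)
```

A sloped-field algorithm like the paper's would additionally fold the terrain slope into this shading geometry.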
The document summarizes a study that evaluated EUMETSAT's Multi-sensor Precipitation Estimate (MPE) products from Meteosat-8 and Meteosat-9 satellites in comparison with ground-based radar data over Northwestern Europe. Categorical and continuous verification statistics were used to assess differences in spatial distribution and rainfall values between the MPE products and radar data. The results showed that MPE from Meteosat-9 had higher accuracy scores and was better at estimating rainfall values, though both products tended to overestimate during heavy rainfall events. Recommendations included further validation of MPE products over different regions and timescales.
This document presents a new method for detecting land cover changes using RADARSAT-2 polarimetric SAR (PolSAR) images. The method integrates change vector analysis, post-classification comparison, and object-oriented image analysis. It was tested on PolSAR images of an area in China that experiences illegal land development. The proposed method achieved higher accuracy than other techniques in determining various types of land cover changes, such as barren land converting to crops or built-up areas. The use of change vector analysis, post-classification comparison, and object-oriented analysis helped reduce false alarms from classification errors and environmental changes.
The Quantum Theory Group at the University of Glasgow conducts research in several areas including quantum optics, quantum information, foundations of quantum mechanics, and light-matter interactions. The group has academic staff, research fellows, and PhD students. Specific research topics include boson sampling, quantum state discrimination, quantum measurement, open quantum systems, chiral molecules, optical angular momentum, and optical forces.
This document summarizes work from an optimization subgroup in remote sensing. It discusses three main topics: 1) defining optimization problems and algorithms in remote sensing, 2) the role of optimization in remote sensing retrievals, and 3) potential projects focused on improving temperature and humidity profile retrievals from satellite data and intersatellite calibration. The subgroup explored using neural networks and Gaussian processes to develop atmospheric temperature and humidity profiles from HIRS satellite data.
This document presents a new algorithm called CRT3D that can perform real-time 3D color reconstruction from single-photon lidar data. It achieves this by using a spectral subsampling approach that measures only a single wavelength per pixel, reducing data size while still enabling color reconstructions. The key steps are to denoise the point cloud using algebraic point set surfaces, denoise intensities using bilateral filtering, and denoise backgrounds using Wiener filtering. Experiments show CRT3D achieves real-time performance while producing more accurate color reconstructions compared to previous methods.
The document describes GRASP, a versatile algorithm for characterizing atmospheric properties from remote sensing observations. GRASP can retrieve a variety of properties including aerosol optical thickness, single scattering albedo, vertical aerosol profiles, and surface reflectance from sensors on satellites, ground-based radiometers, and aircraft. It is based on generalized principles to develop a rigorous, efficient, and accessible algorithm. GRASP uses a statistical optimization fitting approach and flexible forward model to simulate observations and retrieve multiple parameters simultaneously. It is being applied to process data from various satellite instruments to improve aerosol retrievals over land and develop synergistic retrievals using non-coincident observations.
I will describe a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious mean measurement-model parameterization, we first rewrite the measurement equation by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace integral form. The mass-attenuation spectrum is then expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden–Fletcher–Goldfarb–Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply a step-size selection scheme that accounts for varying local Lipschitz constants of the objective function. I will discuss the biconvexity of the penalized NLL function and outline preliminary results on convergence of PG-BFGS schemes. Finally, I will present real X-ray CT reconstruction examples that demonstrate the performance of the proposed scheme.
The document summarizes research on using sparse regression techniques for hyperspectral image unmixing. It introduces the linear mixing model and describes algorithms like orthogonal matching pursuit (OMP) and basis pursuit (BP) that find sparse solutions. The techniques are applied to simulated and real hyperspectral data. Results show that introducing new regularizers like total variation and group lasso can improve accuracy over l1 regularization alone by better handling the high coherence between spectral library elements.
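The greedy OMP step can be sketched with scikit-learn on a hypothetical spectral library; with a small, incoherent Gaussian library the true two-member support of the mixed pixel is recovered exactly (real libraries are far more coherent, which is what motivates the extra regularizers):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(6)
# hypothetical spectral library: 20 unit-norm signatures over 100 bands
A = rng.normal(size=(100, 20))
A /= np.linalg.norm(A, axis=0)

# mixed pixel built from two library members (linear mixing model)
x = 0.6 * A[:, 4] + 0.4 * A[:, 17]

# OMP greedily selects the atoms most correlated with the residual
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False).fit(A, x)
support = set(np.flatnonzero(omp.coef_))
```

Basis pursuit replaces the hard sparsity count with an l1 penalty, and the total-variation and group-lasso variants in the summary add spatial and group structure on top of it.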
Hyperspectral unmixing using novel conversion model.ppt (grssieee)
The document presents a novel hyperspectral unmixing approach called uccm-SVM that converts the abundance quantification problem into a classification problem using support vector machines. The approach is tested on both simulated and real hyperspectral images and is shown to outperform traditional mean-based techniques like FCLS in terms of accuracy while having lower computational costs for smaller training set sizes. Future work to improve the method includes enhancing performance while reducing computation for larger training sets.
Directional Analysis and Filtering for Dust Storm detection in NOAA-AVHRR Ima...Pioneer Natural Resources
This document proposes techniques for detecting dust storms and determining their direction of transport using NOAA-AVHRR satellite imagery. It introduces a two-part approach using image processing algorithms: 1) A visualization technique uses filters, edge detectors and classification to locate dust sources. 2) An automation technique performs power spectrum analysis to detect dust storm direction by analyzing texture orientation in image blocks. The goal is to automatically detect and track dust storms for applications like hazard monitoring.
Vegetation monitoring using gpm data over mongoliaGeoMedeelel
Vegetation mapping was conducted over Mongolian land using GPM/DPR satellite data. Backscatter signals from the Ku and Ka band radar were analyzed for different land cover types from 2014-2018. The signals showed incidence angle dependency and seasonal variations that correlated with land cover. Forest had higher backscatter than grass, while desert was lowest. Combining the Ku and Ka band signals using PCA clearly distinguished forest, grass and desert. This new remote sensing method using precipitation radar provides an effective way to continuously map and monitor vegetation dynamics for detecting land degradation.
The document presents a method for registering geophysical images from different modalities to increase the information obtained. It uses mutual information as a similarity measure to find the translation between image pairs. It then introduces an algorithm that applies local distortions through a windowing approach while maintaining grid continuity, improving upon simple rigid registration. The method is tested on archaeological image datasets, successfully registering image pairs in agreement with their known coordinates and increasing their mutual information.
This study implements a Bayesian statistical framework to calibrate the SCOPE process-based simulator for simulating gross primary production (GPP) and top-of-canopy reflectance at a spruce flux tower site in the Czech Republic. Markov chain Monte Carlo simulation is used to quantify the uncertainty in SCOPE input parameters by comparing simulated and measured reflectance and GPP data. The results show the posterior parameter distributions have lower uncertainty than the prior distributions. Simulated half-hourly GPP over the growing season using maximum a posteriori parameter estimates matches the measured data. Future work will estimate seasonal parameter variations to improve GPP simulation accuracy.
This document summarizes the analysis of the Bd/s → μ+μ- decay process conducted by the ATLAS collaboration. The analysis used data from LHC Run 1 taken between 2011-2012. Key aspects included a blind analysis technique, multivariate classifiers to suppress backgrounds, and a relative branching ratio measurement using the B± → J/ψK± decay as a reference channel. After applying selection cuts and classifiers, the expected yield of Bd/s → μ+μ- signal events was 41 Bs and 5 Bd decays. However, the maximum likelihood fit of the dimuon mass spectrum found lower-than-expected yields of 16 ± 12 Bs and -11 ± 9 Bd decays, corresponding to a significance of 1
The document describes a new regression model developed to estimate global solar radiation using artificial neural networks. The model was developed based on the Angstrom-Prescott model and used only latitude and altitude as inputs to estimate monthly average daily global solar radiation. The neural network model with 10 hidden neurons was trained on data from 4 locations in North India. The model estimated regression coefficients a ranging from 0.209 to 0.222 and b ranging from 0.253 to 0.407. Validation tests showed good agreement with actual solar radiation values.
This document proposes a new Bayesian algorithm called MANIPOP for 3D reconstruction from single-photon Lidar data that accounts for multiple surfaces per pixel, broadening of the instrumental response function, and highly scattering media. The algorithm models each return as a point in 3D space and uses a reversible jump MCMC approach to estimate the number and positions of points. Experiments reconstructing long range building data and underwater pipe data demonstrate the algorithm can handle broadening and scattering with negligible increase in runtime compared to other methods. Future work will focus on real-time reconstruction, multiple views, and multispectral Lidar.
Edge detection algorithm based on quantum superposition principle and photons...IJECEIAES
The detection of object edges in images is a crucial step employed in a vast amount of computer vision applications, for which a series of different algorithms has been developed in the last decades. This paper proposes a new edge detection method based on quantum information, which is achieved in two main steps: (i) an image enhancement stage that employs the quantum superposition law and (ii) an edge detection stage based on the probability of photon arrival to the camera sensor. The proposed method has been tested on synthetic and real images devoted to agriculture applications, where Fram & Deutsh criterion has been adopted to evaluate its performance. The results show that the proposed method gives better results in terms of detection quality and computation time compared to classical edge detection algorithms such as Sobel, Kayyali, Canny and a more recent algorithm based on Shannon entropy.
This document summarizes a study that used satellite imagery to analyze land surface temperatures in recently developed areas of Stratford, London. The study aimed to identify areas affected by recent construction, generate calibrated land surface temperature models from satellite data, and identify problematic hotspot areas through spatial analysis and field measurements. Results found higher temperatures near the Westfield shopping center, Old Stratford station, and along major roads, likely due to anthropogenic heat sources. Microclimate modeling of an area in East Olympic Village showed influence of building materials, shade, and wind on surface temperatures. The study fulfilled its objectives but future work could incorporate additional data sources and validation measurements.
The document discusses integrating support vector machines (SVMs) and Markov random fields (MRFs) for remote sensing image classification. SVMs are good at identifying optimal discriminant hypersurfaces but do not consider context between samples. The paper aims to integrate SVMs and MRFs to allow for contextual classification. A novel classifier is proposed that reformulates the MRF minimum-energy decision rule as an SVM discriminant function with a "contextual kernel." Experimental results on real remote sensing datasets show the proposed method provides significantly more accurate classifications compared to a standard noncontextual SVM.
Estimation of land surface temperature of dindigul district using landsat 8 dataeSAT Publishing House
Backtracking Algorithm for Single-Axis Solar Trackers installed in a sloping ...IJERA Editor
In this paper we present a backtracking algorithm that improves the energy production of a single-axis solar tracker by reducing the shadow cast by neighboring panels. Moreover, the proposed algorithm can operate on any field slope, avoiding the need to level the field where the solar tracker is placed. This is an important feature because it reduces the time and manpower required during solar tracker setup. The results show that the algorithm performs similarly to comparable algorithms that were designed only for horizontal fields.
The document summarizes a study that evaluated EUMETSAT's Multi-sensor Precipitation Estimate (MPE) products from Meteosat-8 and Meteosat-9 satellites in comparison with ground-based radar data over Northwestern Europe. Categorical and continuous verification statistics were used to assess differences in spatial distribution and rainfall values between the MPE products and radar data. The results showed that MPE from Meteosat-9 had higher accuracy scores and was better at estimating rainfall values, though both products tended to overestimate during heavy rainfall events. Recommendations included further validation of MPE products over different regions and timescales.
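The categorical verification statistics mentioned above are computed from a 2x2 contingency table of hits, misses, false alarms, and correct negatives. The function below is an illustrative sketch of the standard score definitions, not the study's own code:

```python
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Standard categorical verification scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias}

# made-up counts for illustration only
scores = categorical_scores(hits=80, misses=20, false_alarms=40, correct_negatives=860)
```

A frequency bias above 1 (here 1.2) indicates the kind of overestimation during rain events that the study reports.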
This document presents a new method for detecting land cover changes using RADARSAT-2 polarimetric SAR (PolSAR) images. The method integrates change vector analysis, post-classification comparison, and object-oriented image analysis. It was tested on PolSAR images of an area in China that experiences illegal land development. The proposed method achieved higher accuracy than other techniques in determining various types of land cover changes, such as barren land converting to crops or built-up areas. The use of change vector analysis, post-classification comparison, and object-oriented analysis helped reduce false alarms from classification errors and environmental changes.
The Quantum Theory Group at the University of Glasgow conducts research in several areas including quantum optics, quantum information, foundations of quantum mechanics, and light-matter interactions. The group has academic staff, research fellows, and PhD students. Specific research topics include boson sampling, quantum state discrimination, quantum measurement, open quantum systems, chiral molecules, optical angular momentum, and optical forces.
This document summarizes work from an optimization subgroup in remote sensing. It discusses three main topics: 1) defining optimization problems and algorithms in remote sensing, 2) the role of optimization in remote sensing retrievals, and 3) potential projects focused on improving temperature and humidity profile retrievals from satellite data and intersatellite calibration. The subgroup explored using neural networks and Gaussian processes to develop atmospheric temperature and humidity profiles from HIRS satellite data.
This document presents a new algorithm called CRT3D that can perform real-time 3D color reconstruction from single-photon lidar data. It achieves this by using a spectral subsampling approach that measures only a single wavelength per pixel, reducing data size while still enabling color reconstructions. The key steps are to denoise the point cloud using algebraic point set surfaces, denoise intensities using bilateral filtering, and denoise backgrounds using Wiener filtering. Experiments show CRT3D achieves real-time performance while producing more accurate color reconstructions compared to previous methods.
The document describes GRASP, a versatile algorithm for characterizing atmospheric properties from remote sensing observations. GRASP can retrieve a variety of properties including aerosol optical thickness, single scattering albedo, vertical aerosol profiles, and surface reflectance from sensors on satellites, ground-based radiometers, and aircraft. It is based on generalized principles to develop a rigorous, efficient, and accessible algorithm. GRASP uses a statistical optimization fitting approach and flexible forward model to simulate observations and retrieve multiple parameters simultaneously. It is being applied to process data from various satellite instruments to improve aerosol retrievals over land and develop synergistic retrievals using non-coincident observations.
I will describe a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious mean measurement-model parameterization, we first rewrite the measurement equation by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace integral form. The mass-attenuation spectrum is then expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden–Fletcher–Goldfarb–Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply a step-size selection scheme that accounts for varying local Lipschitz constants of the objective function. I will discuss the biconvexity of the penalized NLL function and outline preliminary results on convergence of PG-BFGS schemes. Finally, I will present real X-ray CT reconstruction examples that demonstrate the performance of the proposed scheme.
The document summarizes research on using sparse regression techniques for hyperspectral image unmixing. It introduces the linear mixing model and describes algorithms like orthogonal matching pursuit (OMP) and basis pursuit (BP) that find sparse solutions. The techniques are applied to simulated and real hyperspectral data. Results show that introducing new regularizers like total variation and group lasso can improve accuracy over l1 regularization alone by better handling the high coherence between spectral library elements.
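The orthogonal matching pursuit algorithm mentioned above admits a compact NumPy sketch: greedily pick the dictionary atom most correlated with the residual, then refit the coefficients on the selected support by least squares. The dictionary and sparse signal below are synthetic illustrations, not data from the study:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k columns of dictionary D
    that best explain y, refitting coefficients by least squares each step."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# 2-sparse signal mixed by a random normalized dictionary, then recovered
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 20))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(20)
x_true[3], x_true[11] = 1.0, -0.5
x_hat = omp(D, D @ x_true, k=2)
```

In the noiseless, well-conditioned case OMP recovers the sparse coefficients exactly; the regularizers discussed in the document (total variation, group lasso) address the harder case of highly coherent spectral libraries.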
Hyperspectral unmixing using novel conversion model.pptgrssieee
The document presents a novel hyperspectral unmixing approach called uccm-SVM that converts the abundance quantification problem into a classification problem using support vector machines. The approach is tested on both simulated and real hyperspectral images and is shown to outperform traditional mean-based techniques like FCLS in terms of accuracy while having lower computational costs for smaller training set sizes. Future work to improve the method includes enhancing performance while reducing computation for larger training sets.
This document discusses counterfactual modeling for determining causality in epidemiology. It explains that ideally, a causal contrast compares disease frequency between an exposed group and an unexposed group that are from the same target population during the same time period, ensuring identical initial conditions. However, in reality the unexposed counterfactual state is not observed, so a substitute population is used that may differ in initial conditions, allowing for confounding. Confounding arises when the substitute group imperfectly represents what the target population would have been like without exposure. Randomized controlled trials can help simulate the counterfactual comparison by randomizing subjects to treatment or control, making the groups comparable at baseline and reducing confounding.
Bonsai Networking: pruning your professional learning network (VU Seminar)Joyce Seitzinger
This document discusses pruning and growing a professional learning network (PLN). It begins by introducing Joyce Seitzinger and her role in e-learning. She then shares her extensive social media presence and experience organizing online events. The next section discusses a networked practice program she led. The rest of the document provides advice on establishing a PLN, including maintaining connections through various tools, curating an online identity and portfolio, and regularly reviewing one's areas of interest to prune and grow their learning network.
Python Raster Function - Esri Developer Conference - 2015akferoz07
This document discusses Python raster functions in ArcGIS, which allow users to implement raster analysis and processing algorithms from Python. Key points covered include:
- Raster functions transform one raster dataset into another on-the-fly through a chain of functions.
- Python raster functions are implemented as Python modules that participate in the raster function chain via an ArcGIS adapter.
- Examples demonstrate how to perform tasks like linear spectral unmixing from Python.
- The API for creating Python raster functions and templates is explained.
- Considerations for publishing and optimizing Python raster functions are provided.
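Independent of the ArcGIS raster-function API itself, the linear spectral unmixing task such a function might perform can be sketched per pixel: solve for abundances with a softly enforced sum-to-one constraint by appending a heavily weighted row of ones to the endmember matrix. This is a minimal illustration (full FCLS would also enforce nonnegativity), with made-up endmember spectra:

```python
import numpy as np

def unmix_sum_to_one(pixel, endmembers, delta=1e3):
    """Abundance estimate for one pixel with the sum-to-one constraint
    enforced softly via a heavily weighted appended row of ones."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    y = np.append(pixel, delta)
    a, *_ = np.linalg.lstsq(E, y, rcond=None)
    return a

# two synthetic endmember spectra (bands x endmembers) and a 30/70 mixture
E = np.array([[0.1, 0.9],
              [0.4, 0.6],
              [0.8, 0.2]])
mixed = E @ np.array([0.3, 0.7])
abundances = unmix_sum_to_one(mixed, E)
```

Applied band-wise over a raster, this per-pixel solve is the kind of operation a Python raster function would wrap in its `updatePixels` stage.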
ROBUST UNMIXING OF HYPERSPECTRAL IMAGES: APPLICATION TO MARSgrssieee
The document discusses developing a pipeline for unmixing hyperspectral mineral data. It aims to (1) separate spectral pixels into families through image segmentation while being sensitive to subtle mineral differences, (2) reduce dimensionality in a way that preserves clusters and local geometry, and (3) operate on sections of the image independently for unmixing and pruning. The pipeline involves preprocessing, dimensionality reduction, clustering, unmixing, and pruning steps. Dimensionality reduction must handle nonlinear data while highlighting natural clusters for 2-3D visualization.
This document summarizes a new algorithm called MewDC-NMF for unsupervised unmixing of hyperspectral images. MewDC-NMF stands for Minimum endmember-wise Distance Constrained Nonnegative Matrix Factorization. It simultaneously extracts endmembers and estimates abundance fractions without requiring pure pixels. This is accomplished by imposing a distance constraint between endmembers to make their spectra more compact during optimization. Experiments on synthetic and real AVIRIS data show MewDC-NMF outperforms other constrained NMF methods in extracting more accurate endmembers and estimating abundances.
The document proposes a new approach for hyperspectral image classification that incorporates spectral unmixing. It aims to improve classification map spatial resolution by (1) using unsupervised clustering to identify classes from hyperspectral data, (2) computing abundance maps through spectral unmixing, (3) creating a finer classification map, and (4) applying spatial regularization. Experiments on real data show the approach increases classification accuracy and spatial resolution compared to traditional techniques.
SIMPLEX VOLUME ANALYSIS BASED ON TRIANGULAR FACTORIZATION: A FRAMEWORK FOR HY...grssieee
The document proposes a new method called Simplex Volume Analysis based on Triangular Factorization (SVATF) for hyperspectral unmixing. SVATF simplifies the endmember extraction process by using Cholesky factorization to calculate the volume of a simplex, allowing endmembers to be identified by maximizing values along the diagonal of the factorization. This reduces computational costs compared to existing methods. The paper evaluates SVATF on both synthetic and real hyperspectral data, finding the new approach can perform endmember extraction faster than alternatives with or without dimensionality reduction.
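The Cholesky-based simplex-volume idea can be illustrated directly: for vertices v0..vk, form the Gram matrix G of the edge vectors, factorize G = LLᵀ, and the volume is the product of the diagonal of L divided by k!. The snippet below is an illustrative sketch of this identity, not the SVATF code:

```python
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """Volume of a k-simplex in R^n via Cholesky factorization of the Gram
    matrix of edge vectors: since det(G) = prod(diag(L))^2, the volume is
    prod(diag(L)) / k!."""
    M = (vertices[1:] - vertices[0]).T          # n x k matrix of edge vectors
    L = np.linalg.cholesky(M.T @ M)             # Gram matrix G = M^T M = L L^T
    k = vertices.shape[0] - 1
    return np.prod(np.diag(L)) / factorial(k)

# unit right triangle embedded in 3-D: area should be 0.5
tri = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
area = simplex_volume(tri)
```

Maximizing the diagonal entries of L one at a time is what lets endmembers be identified incrementally without recomputing a full determinant at each step.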
1) The document discusses using spectral unmixing and support vector machines (SVM) to classify airborne hyperspectral imagery to detect giant reed, an invasive plant along the Rio Grande in Texas.
2) Linear spectral unmixing (LSU), mixture tuned matched filtering (MTMF), and SVM were used to classify imagery from 2009 and 2010, with SVM producing the most accurate classifications.
3) The research aims to assess the effectiveness of biological control agents released against giant reed using remote sensing and estimates the water consumption of giant reed and surrounding vegetation.
This document presents a new subspace approach for supervised classification of hyperspectral images. It evaluates using subspace projection techniques like PCA and HySime prior to classification with classifiers like LDA, SVM, and MLR. Experiments on AVIRIS Indian Pines and ROSIS Pavia University datasets show subspace projection improves classification accuracy, especially with limited training samples. Future work will evaluate more subspace projections and datasets to further improve hyperspectral image classification.
IEEE 2014 MATLAB IMAGE PROCESSING PROJECTS Does deblurring improve geometrica...IEEEBEBTECHSTUDENTPROJECTS
This document contains slides from a lecture on public health given by Alasdair Cohen at UC Berkeley on October 17, 2016. The slides discuss various topics related to research quality including publication bias, p-hacking, negligent science, Cochrane reviews, transparent reporting, and the CONSORT checklist. The CONSORT checklist is explained in detail over several slides, outlining the key elements that should be reported in randomized controlled trials to improve transparency and replication.
This document evaluates using unsupervised linear unmixing of multi-date hyperspectral imagery to estimate crop yields. Vertex component analysis is used to extract spectral endmembers from the imagery. Crop abundance maps derived from linear unmixing show strong correlations with actual crop yield data, and fusing results from images on different dates improves the correlation further.
Signal Processing Course : Compressed SensingGabriel Peyré
Compressive sensing allows acquiring signals at a rate below Nyquist by exploiting sparsity. Measurements are taken using a projection of the signal onto random patterns. Recovery is posed as an optimization problem promoting sparsity. Theoretical guarantees ensure exact recovery is possible from a sufficient number of random linear measurements using convex relaxation. Restricted isometry properties of the measurement matrix and polytope geometry analysis provide conditions for stable and robust signal reconstruction.
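A toy recovery experiment along these lines can be run with iterative soft thresholding (ISTA) as the convex sparse-recovery solver; the problem sizes and regularization weight below are arbitrary illustrative choices:

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=500):
    """Iterative soft-thresholding for min ||Ax - y||^2/2 + lam*||x||_1,
    the convex relaxation used for sparse recovery."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)                           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold
    return x

# 3-sparse length-100 signal recovered from only m=40 random measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.5, -2.0, 1.0]
x_hat = ista(A, A @ x_true, lam=0.01, iters=3000)
```

With 40 measurements of a 100-sample signal, recovery succeeds only because the signal is sparse, which is exactly the sub-Nyquist regime the summary describes.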
This document discusses sparse representations and dictionary learning. It introduces the concepts of sparsity, redundant dictionaries, and sparse coding. The goal of sparse coding is to find the sparsest representation of signals using an overcomplete dictionary. Dictionary learning aims to learn an optimized dictionary from exemplar data by alternately solving sparse coding subproblems and dictionary update steps. Patch-based dictionary learning has applications in image denoising and texture synthesis. In contrast to PCA, learned dictionaries contain non-linear atoms adapted to the data.
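The alternation between sparse coding and dictionary update can be sketched with a deliberately simplified variant: 1-sparse coding (each signal assigned to its best-matching atom) followed by a least-squares, MOD-style dictionary update. This is an illustrative toy, not K-SVD or any specific method from the document:

```python
import numpy as np

def learn_dictionary(Y, n_atoms, iters=20):
    """Toy dictionary learning: alternate 1-sparse coding with a
    least-squares (MOD-style) dictionary update, renormalizing atoms."""
    D = Y[:, :n_atoms].astype(float).copy()       # init atoms from data samples
    D = D / np.linalg.norm(D, axis=0)
    cols = np.arange(Y.shape[1])
    for _ in range(iters):
        corr = D.T @ Y                            # sparse coding: pick best atom
        idx = np.argmax(np.abs(corr), axis=0)
        X = np.zeros((n_atoms, Y.shape[1]))
        X[idx, cols] = corr[idx, cols]
        D = Y @ np.linalg.pinv(X)                 # dictionary update (least squares)
        norms = np.linalg.norm(D, axis=0)
        D = D / np.where(norms > 0, norms, 1.0)   # renormalize, skip dead atoms
    return D

# training signals drawn from two ground-truth atoms (e1 and e2 in R^5)
rng = np.random.default_rng(0)
scales = rng.uniform(0.5, 2.0, size=100)
Y = np.zeros((5, 100))
Y[0, ::2] = scales[::2]       # even columns are multiples of e1
Y[1, 1::2] = scales[1::2]     # odd columns are multiples of e2
D = learn_dictionary(Y, n_atoms=2)
```

On this clean two-atom data the learned dictionary recovers the generating directions; real patch-based learning uses k-sparse coding (e.g. OMP) and many more atoms.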
This document provides an overview of statistical estimation and inference. It discusses point estimation, which provides a single value to estimate an unknown population parameter, and interval estimation, which gives a range of plausible values for the parameter. The key aspects of interval estimation are confidence intervals, which provide a probability statement about where the true population parameter lies. The document also covers important concepts like sampling distributions, the central limit theorem, and factors that influence the width of a confidence interval like sample size. Examples are provided to demonstrate calculating point estimates, confidence intervals, and dealing with independent samples.
Spandana image processing and compression techniques (7840228)indianspandana
This document discusses image processing and compression techniques. It begins by stating the objectives of image processing as sharpening images, minimizing degradation, and reducing the amount of memory needed to store images through compression. It then provides introductions and definitions related to digital image processing. The key stages of digital image processing are described as image acquisition, enhancement, restoration, morphological processing, segmentation, object recognition, representation and description, color image processing, and compression. Image compression aims to reduce the amount of data required to represent a digital image by removing redundant data. The goals and flow of image compression are explained. Finally, different compression techniques such as lossless compression using run-length encoding and lossy compression are outlined.
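Run-length encoding, the lossless technique named above, is compact enough to show in full; this is a generic sketch operating on one row of pixel values:

```python
def rle_encode(data):
    """Lossless run-length encoding: collapse runs of repeated values
    into (value, count) pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

def rle_decode(runs):
    """Invert the encoding by expanding each (value, count) pair."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 255]
encoded = rle_encode(row)   # [[255, 3], [0, 2], [255, 1]]
```

RLE pays off on images with long uniform runs (scanned documents, masks); on noisy natural images the encoded form can be larger than the input, which is why it is usually one stage inside a larger codec.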
The document discusses various post-pruning methods for decision trees, including reduced error pruning, pessimistic error pruning, error complexity pruning, minimum error pruning, and cost-based pruning. Reduced error pruning uses separate training, validation, and test datasets, pruning any node whose removal does not increase validation error. Pessimistic error pruning avoids a separate validation set by using the training set for both growing and pruning the tree. The other methods prune nodes based on minimizing error complexity, expected error rate, or a combination of error rate and costs.
This document discusses inferential statistics and confidence intervals. It introduces confidence intervals for a population mean using the t-distribution when the sample size is small (less than 30). When the population variance is known, the z-distribution can be used. It provides examples of how to calculate 95% and 99% confidence intervals for a population mean using the t-distribution and normal distribution. Formulas for the standard error and reliability coefficients are also presented.
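The small-sample interval described above is x̄ ± t*·s/√n with t* from the t-distribution on n−1 degrees of freedom. In the illustrative sketch below the critical values come from a standard two-sided 95% t table rather than being computed, to keep it dependency-free; the sample data are made up:

```python
from math import sqrt
from statistics import mean, stdev

# two-sided 95% critical t values for small degrees of freedom (standard table)
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
       6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def t_confidence_interval_95(sample):
    """95% CI for a population mean from a small sample (n <= 11):
    xbar +/- t* * s / sqrt(n), with t* at n-1 degrees of freedom."""
    n = len(sample)
    xbar, s = mean(sample), stdev(sample)
    margin = T95[n - 1] * s / sqrt(n)
    return xbar - margin, xbar + margin

lo, hi = t_confidence_interval_95([12.1, 11.8, 12.5, 12.0, 11.6, 12.3])
```

With n above roughly 30, or when the population variance is known, the t critical value is replaced by the corresponding z value (1.96 at 95%), as the document notes.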
Ill-posedness formulation of the emission source localization in the radio- d...Ahmed Ammar Rebai PhD
To contact the authors : tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown that solutions of the radio-transient source localization problem (estimating from the radio-shower time of arrival on antennas) depend strongly on the reconstruction method, and that some published solutions are purely numerical artifacts. Based on a detailed analysis of published results from radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina, and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard, using two approaches: the degeneracy of the solution set and the poor conditioning of the mathematical formulation. A comparison between experimental results and simulations has been made to support the mathematical analysis. Several properties of the nonlinear least-squares objective are discussed, including the configuration of the solution set and the bias.
This document proposes a new radiometric concept called a cross-beam interferometer to retrieve 3D profiles of atmospheric temperature and water vapor density. It would use two antennas separated by a distance to measure brightness temperatures from different atmospheric volumes independently, without being subject to the cumulative effects of the radiative transfer equation. Theoretical development and initial simulation results exploring the technique's spatial resolution and sensitivity to antenna spacing are presented. Open issues that require further study include calibration methods, estimating radiometric resolution for water vapor content retrieval accuracy, and performing onion peeling retrievals from simulated atmospheres.
Multiscale Granger Causality and Information Decomposition danielemarinazzo
Slides for the C3S workshop in Cologne, in which I presented an extension to multiple temporal scale of Granger causality and Partial Information Decomposition, using the robust state space formulation
ESA SMOS (Soil Moisture and Ocean Salinity) Mission: Principles of Operation ...adrianocamps
SMOS basic principles and description of some products developed at the SMOS Barcelona Expert Center.
Disclaimer: these materials were prepared for educational purposes only.
1. The document discusses using statistical learning methods like Gaussian mixture models (GMM) and dynamic component allocation (DCA) for hyperspectral image classification.
2. GMM represents pixel spectra as mixtures of Gaussian distributions. DCA is an algorithm that dynamically adds and removes Gaussian components to better characterize the data during training.
3. The document outlines how DCA works, including merging similar Gaussian modes, splitting modes with high kurtosis, and pruning insignificant modes. These techniques aim to learn the appropriate number of mixture components from the data.
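A closeness test for deciding when two Gaussian modes are similar enough to merge can be sketched with the symmetric KL divergence between 1-D Gaussians; the threshold below is an arbitrary illustrative choice, not DCA's actual criterion:

```python
from math import log

def sym_kl_gauss(mu1, s1, mu2, s2):
    """Symmetric KL divergence between two 1-D Gaussians N(mu, s^2),
    a common closeness measure for merging mixture components."""
    def kl(mu_p, s_p, mu_q, s_q):
        return log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5
    return kl(mu1, s1, mu2, s2) + kl(mu2, s2, mu1, s1)

def should_merge(mu1, s1, mu2, s2, threshold=0.1):
    """Merge two modes when their symmetric KL falls below a threshold."""
    return sym_kl_gauss(mu1, s1, mu2, s2) < threshold

near = should_merge(0.0, 1.0, 0.05, 1.0)   # almost identical modes
far = should_merge(0.0, 1.0, 3.0, 1.0)     # well-separated modes
```

Splitting on high kurtosis and pruning modes with negligible mixing weight would complement this merge test to adapt the number of components during training.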
Ellipsometry- non destructive measuring methodViji Vijitha
Ellipsometry is a non-destructive optical technique that measures the change in polarization state of light upon reflection from or transmission through a sample. It can be used to characterize properties like thickness, composition, and crystallinity of thin films. The document discusses the history and principles of ellipsometry, experimental setups, data analysis techniques using modeling to extract sample properties, and applications in measuring films. Modeling involves using equations to describe light-material interactions and minimizing errors between calculated and measured polarization states.
Presentation made by Prof. Adriano Camps (Universitat Politècnica de Catalunya) at ICMARS 2010 (India, 16-December-2010) on the MIRAS instrument aboard ESA's SMOS mission.
Optimization of Manufacture of Field-Effect Heterotransistors without P-N-Jun...ijrap
This document summarizes research on optimizing the manufacturing of field-effect heterotransistors without p-n junctions to decrease their dimensions. It discusses how using laser or microwave annealing during doping can increase the sharpness of junctions and homogeneity of the dopant distribution, allowing for more compact transistors. It also describes using the inherent inhomogeneity of heterostructures and annealing optimization. The document outlines modeling the redistribution of dopants and defects during annealing through solving diffusion equations to optimize the annealing time and decrease transistor size.
Optimization of Manufacture of Field-Effect Heterotransistors without P-N-Jun...ijrap
It has recently been shown that manufacturing p-n junctions, field-effect and bipolar transistors, and thyristors in a multilayer structure by diffusion or ion implantation, under optimized annealing of dopant and/or radiation defects, increases the sharpness of p-n junctions (both single junctions and those included in larger systems). In this situation one can also obtain greater homogeneity of the dopant in the doped area. In this paper we consider manufacturing a field-effect heterotransistor without a p-n junction. Optimizing the technological process by exploiting the inhomogeneity of the heterostructure makes it possible to manufacture more compact transistors.
ON DECREASING OF DIMENSIONS OF FIELDEFFECT TRANSISTORS WITH SEVERAL SOURCESmsejjournal
We analyzed mass and heat transport during the manufacture of field-effect heterotransistors with several sources, with the aim of decreasing their dimensions. The process requires manufacturing a heterostructure with a specific configuration, doping the required areas by diffusion or ion implantation to obtain the required type of conductivity (p or n), and then performing an optimized annealing. We introduce an analytical approach for forecasting mass and heat transport during these technological processes; it accounts for the nonlinearity of mass and heat transport and for the simultaneous spatial and temporal variation of the physical parameters of these processes.
ON DECREASING OF DIMENSIONS OF FIELDEFFECT TRANSISTORS WITH SEVERAL SOURCESmsejjournal
This document analyzes mass and heat transport during manufacturing field-effect heterotransistors with several sources to decrease their dimensions. An analytical approach is introduced to model mass and heat transport during technological processes like doping and annealing. This approach accounts for nonlinearities in mass and heat transport and variations in physical parameters over space and time. The goal is to optimize doping distributions to increase compactness and homogeneity of transistor elements. Equations are developed to model concentration distributions of dopants and point defects over space and time during diffusion and ion implantation doping processes and subsequent annealing.
This presentation investigates the hypersonic high enthalpy flow in a leading edge configuration using computational techniques, specifically using computational fluid dynamics.
Flow separation in/over a hypersonic space vehicle is an important phenomenon which occurs due to flow interaction with various geometric elements of the vehicle. This however can lead to adverse pressure gradient and localised intense heating resulting in detrimental consequences for the successful performance of the vehicle. It is therefore critical and necessary to understand the separation phenomenon and its characteristics. In the last several decades, experimental, analytical and computational techniques have been used to investigate flow separation in hypersonic flow. Despite these efforts, large gaps still remain in our understanding of the aerothermodynamics of flow separation. Typically, flow separation can be examined with simple geometric configurations representing a generic region of separated flow over a vehicle. These could range from geometries such as compression corners, flat plate with steps to blunt bodies such as cylinders and spheres. However, most of these configurations exhibit a pre-existing boundary layer prior to separation thus increasing the complexity of the interaction. A simple geometry capable of producing separation at the leading-edge without any pre-existing boundary layer is therefore considered here. This geometry was originally proposed by Chapman in 1958 for supersonic flows at high Reynolds numbers and is investigated here numerically under laminar low density hypersonic conditions using N-S and DSMC methods.
This document discusses dimensionality reduction techniques for hyperspectral images. It proposes using k-means clustering based on statistical measures like variance, standard deviation, and mean absolute deviation to select bands from hyperspectral images. The number of bands is first estimated using virtual dimensionality. Bands are then clustered based on their statistical properties and one band is selected from each cluster with the maximum value of the statistical measure. Finally, endmembers are extracted from the selected bands using N-FINDR.
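The band-selection pipeline above can be sketched end to end: compute a statistic per band, cluster bands on it with k-means, and keep the band maximizing the statistic in each cluster. This toy version uses variance only and a tiny 1-D k-means, and is illustrative rather than the paper's implementation (which also uses standard deviation, mean absolute deviation, virtual dimensionality, and N-FINDR):

```python
import numpy as np

def select_bands(cube, n_bands, iters=50, seed=0):
    """Toy band selection: cluster bands by per-band variance with 1-D
    k-means, then keep the highest-variance band from each cluster."""
    bands = cube.shape[0]
    var = cube.reshape(bands, -1).var(axis=1)
    rng = np.random.default_rng(seed)
    centers = rng.choice(var, size=n_bands, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(var[:, None] - centers[None, :]), axis=1)
        for k in range(n_bands):
            if np.any(labels == k):
                centers[k] = var[labels == k].mean()
    selected = []
    for k in range(n_bands):
        members = np.flatnonzero(labels == k)
        if members.size:
            selected.append(int(members[np.argmax(var[members])]))
    return sorted(selected)

# synthetic 6-band cube: bands alternate between low- and high-variance noise
rng = np.random.default_rng(1)
cube = np.stack([rng.normal(0, 0.1 + (b % 2) * 2.0, (8, 8)) for b in range(6)])
picked = select_bands(cube, n_bands=2)
```

Picking one representative per cluster keeps the selected subset spectrally diverse instead of concentrating on a few adjacent high-variance bands.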
Improving Physical Parametrizations in Climate Models using Machine LearningNoah Brenowitz
This document discusses using machine learning to improve climate model physical parameterizations. It begins by introducing common issues with coarse resolution climate models, such as biases in mean climate states and variability. It then presents machine learning as a way to build flexible parameterization functions from high-resolution model data. Specifically, it explores using neural networks trained on single-column outputs from a global cloud-resolving model to provide tendencies like heating and moistening. While initial diagnostic tests showed skill, prognostic tests revealed issues with stability. The document discusses various approaches to address these issues, such as incorporating momentum tendencies and using stochastic and data assimilation methods. Overall, the work aims to develop machine learning parameterizations for climate models that are numerically stable and accurately represent sub
Impact of Urbanization on Land Surface Temperature - A Case Study of Kolkata N...theijes
Land surface temperature depends on the nature of the land surface: water bodies and green fields remain cooler than bare ground and built-up area. Kolkata New Town is emerging in an agricultural area through the filling of large water bodies and the conversion of cultivated land into built-up area. This study aims to analyse the changes in land surface temperature with the advent of the town. It is found that land surface temperature is increasing sharply: the mean temperature of the area was 22.8°C in 1996, 24.9°C in 2004, and 26.4°C in 2014. Spatial variation was sharpest during the early stage of the project (1996-2004). The relation between built-up index (NDBI) and LST is positive, with r values of 0.76 in 2004 and 0.73 in 2014, while the relation with vegetation (NDVI) is negative, with r = -0.35 in both 2004 and 2014. Several patches of heat zones have now emerged and have been identified on the map of Kolkata New Town. The study suggests considering possible micro-climatic changes in town planning for sustainable development.
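The r values quoted above are Pearson correlation coefficients; for reference, the computation is a short sum over paired pixel values. The numbers below are made-up illustrations, not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# toy pixels: LST rises with the built-up index, falls with the vegetation index
lst  = [24.0, 25.5, 26.0, 27.2, 28.1]
ndbi = [0.10, 0.18, 0.22, 0.30, 0.35]
ndvi = [0.60, 0.48, 0.45, 0.33, 0.28]
r_builtup = pearson_r(lst, ndbi)   # strongly positive
r_veg = pearson_r(lst, ndvi)       # strongly negative
```

Opposite signs on NDBI and NDVI against LST are the signature the study reports: built-up surfaces warm, vegetated surfaces cool.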
This document reports on a study of the molecular structure, vibrational spectra, and electronic properties of (E)-N′-(4-Methoxybenzylidene)pyridine-3-carbohydrazide dihydrate (MBP3CD·2H2O) using density functional theory calculations and experimental techniques. The authors synthesized the compound and characterized it using FTIR, FT-Raman, and UV–Vis spectroscopy. They then used DFT calculations to optimize the molecular geometry, simulate the vibrational spectra, and analyze properties like hyperpolarizability. The calculated spectra agreed well with experimental data. Analysis of molecular orbitals, reactivity, and thermodynamics provided additional insight.
Optimization of technological process to decrease dimensions of circuits xor ...ijfcstjournal
The paper describes an approach of increasing of integration rate of elements of integrated circuits. The
approach has been illustrated by example of manufacturing of a circuit XOR. Framework the approach one
should manufacture a heterostructure with specific configuration. After that several special areas of the
heterostructure should be doped by diffusion and/or ion implantation and optimization of annealing of dopant
and/or radiation defects. We analyzed redistribution of dopant with account redistribution of radiation
defects to formulate recommendations to decrease dimensions of integrated circuits by using analytical
approaches of modeling of technological process.
This document discusses various techniques for structurally characterizing nanoparticles, including transmission electron microscopy (TEM), X-ray diffraction, and simulations. TEM can be used to obtain selected area electron diffraction patterns, high resolution TEM images, and particle assembly information. X-ray diffraction can determine particle size and distributions through peak broadening and the Scherrer equation. Both techniques along with simulations are useful for analyzing structure, orientation, defects and composition of nanoparticles.
1. Analysis of the unmixing on thermal hyperspectral imaging
Manuel Cubero-Castan
PhD Supervisors: Jocelyn CHANUSSOT and Xavier BRIOTTET
Co-supervisors: Véronique ACHARD and Michal SHIMONI
PhD Defense
24th of October, 2014
2. Context Unmixing strategies Results on synthetic and real data Conclusions and perspectives
Thermal hyperspectral remote sensing in geology
Emissivity Strip over
Virginia City (NV) (RGB =
11.08, 9.6 and 9.02 µm)
Studied area composed of a Quartz, Jarosite and Clay mixture.
Photograph of the trimodal tailings site on the upper right.
∗
Vaughan, Calvin and Taranik, SEBASS hyperspectral thermal infrared data: surface emissivity measurement
and mineral mapping, in RSE 2003
Manuel Cubero-Castan - PhD defense - 24/10/2014 2 / 41
3. Outline
1 Introduction
Context of the work
2 Unmixing strategies on mixed pixels
FCLS-R - Unmixing on the radiance image
FCLS-E - Unmixing on the estimated emissivities
TRUST - Thermal Remote sensing Unmixing for Subpixel
Temperature
3 Results on synthetic and real data
Assessment of the three unmixing strategies on a supervised
approach
Towards an unsupervised approach
4 Conclusions and perspectives
4. Outline
1 Introduction
Context of the work
2 Unmixing strategies on mixed pixels
3 Results on synthetic and real data
4 Conclusions and perspectives
5. Introduction to thermal hyperspectral remote sensing
6. Objective of the thesis
Material parameters: emissivity (optical property), temperature and abundance (material proportion in the pixel).
Pure pixel: estimation of the temperature and emissivity of the material composing the pure pixel.
Mixed pixel: depends on the emissivity, temperature and abundance of each material composing the pixel.
Heterogeneous scene composed of three materials.
Main Goal
Estimate the temperature, the emissivity and the abundance of materials on pure and mixed pixels.
7. State of the art of unmixing methods in the TIR domain
Two different approaches
First approach: Unmixing on a co-registered VIS image
Increase the spatial resolution of the thermal image ∗
Temperature study of classes of material †
Second approach: Considering isothermal pixels ‡
⇒ Same approach as in VIS domain.§
Lack of a general approach to solve the unmixing in the TIR
domain.
∗
Zhukov et al. TM/Landsat thermal image unmixing, in SPIE Proc. 1997
†
Deng et al. Examining the impacts of urban biophysical compositions on surface urban heat island: A
spectral unmixing and thermal mixing approach, in RSE 2013
‡
Colins et al. Spectral mixture analysis of simulated thermal infrared spectrometry data: An initial
temperature estimation bounded tessma search approach, in IEEE TGRS 2001
§
Vaughan et al., SEBASS hyperspectral thermal infrared data: surface emissivity measurement and mineral
mapping, in RSE 2003
8. Radiative transfer model - pure pixel
R_sens^λ = [ ε^λ · B^λ(T) + (1 − ε^λ) · R_atm,↓^λ ] · τ_atm,↑^λ + R_atm,↑^λ

where the bracketed term is the ground-leaving radiance R_BOA^λ, so that R_sens^λ = R_BOA^λ · τ_atm,↑^λ + R_atm,↑^λ.
Estimation of temperature and emissivity
1 Atmospheric compensation: known atmosphere
2 Decoupling temperature and emissivity: ill-posed problem with N equations and N + 1 unknowns (N = number of spectral bands). Methods exist (TES∗, SpSm†, ...)
∗
Gillespie et al. A temperature emissivity separation for ASTER Images, in IEEE TGRS 1998
†
Borel et al. Surface emissivity and temperature retrieval for a hyperspectral sensor, in IGARSS 1998
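As a numerical illustration of the pure-pixel model above (a Python sketch, not from the thesis; the atmospheric terms τ_atm,↑, R_atm,↓ and R_atm,↑ are placeholder constants rather than outputs of a radiative transfer code):

```python
import numpy as np

# Physical constants (SI)
H, C_LIGHT, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wl_um, T):
    """Black-body spectral radiance B_lambda(T) in W.sr-1.m-2.um-1."""
    wl = wl_um * 1e-6                        # micrometres -> metres
    B = 2 * H * C_LIGHT**2 / wl**5 / (np.exp(H * C_LIGHT / (wl * KB * T)) - 1.0)
    return B * 1e-6                          # per metre -> per micrometre

def at_sensor_radiance(wl_um, eps, T, tau_up, R_down, R_up):
    """R_sens = [eps.B(T) + (1 - eps).R_down].tau_up + R_up."""
    R_boa = eps * planck(wl_um, T) + (1 - eps) * R_down   # ground-leaving term
    return R_boa * tau_up + R_up

wl = np.linspace(8.0, 12.0, 32)              # TIR band centres, um
R = at_sensor_radiance(wl, eps=0.95, T=300.0,
                       tau_up=0.8, R_down=1.0, R_up=1.5)
```

At 10 µm and 300 K the Planck radiance is on the order of 10 W.sr−1.m−2.µm−1, which sets the scale of the TIR signal the unmixing methods work with.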
9. Radiative transfer model - mixed pixel 1/2
R_sens^{λ,x,y} = Σ_{m=1}^{M} R_sens,m^{λ,x,y} · S_m^{x,y}, with
R_sens,m^{λ,x,y} = [ ε_m^λ · B^λ(T_m^{x,y}) + (1 − ε_m^λ) · R_atm,↓^λ ] · τ_atm,↑^λ + R_atm,↑^λ
M = 2 Materials
Estimation of abundance
Three steps:∗
the number of endmembers,
the endmembers themselves,
their abundances.
How to define the endmembers? In the V-NIR they are reflectance spectra; in the TIR, what should they be?
∗
Bioucas-Dias et al. Hyperspectral unmixing overview: Geometrical, statistical and sparse regression-based
approach, in IEEE JSTARS 2012
10. Radiative transfer model - mixed pixel 2/2
Three definitions of endmember
A unique couple of emissivity and temperature:
R_sens^{λ,x,y} = Σ_{m=1}^{M} R_sens,m^{λ,x,y}(ε_m^λ, T_m, atm) · S_m^{x,y}
⇒ FCLS-R: To unmix the radiance at sensor level
A unique emissivity spectrum:
R_sens^{λ,x,y} = R_sens^{λ,x,y}( ⟨ε⟩^{x,y} = Σ_{m=1}^{M} ε_m^λ · S_m^{x,y}, ⟨T⟩^{x,y}, atm )
⇒ FCLS-E: To unmix the emissivity estimated with the TES method
A unique emissivity spectrum and a range of temperatures:
R_sens^{λ,x,y} = Σ_{m=1}^{M} R_sens,m^{λ,x,y}(ε_m^λ, T_m^{x,y}, atm) · S_m^{x,y}
⇒ TRUST: To unmix the radiance at sensor level knowing the emissivity∗
∗
Cubero-Castan et al. A physics-based unmixing method to estimate subpixel temperatures on mixed pixels,
in IEEE TGRS 2015
13. Outline
1 Introduction
2 Unmixing strategies on mixed pixels
FCLS-R - Unmixing on the radiance image
FCLS-E - Unmixing on the estimated emissivities
TRUST - Thermal Remote sensing Unmixing for Subpixel
Temperature
3 Results on synthetic and real data
4 Conclusions and perspectives
14. FCLS-R: Linear unmixing model on at-sensor radiances
R_sens^{λ,x,y} = Σ_{m=1}^{M} R_sens,m^{λ,x,y}(ε_m^λ, T_m, atm) · S_m^{x,y}
Linear mixing model on the at-sensor radiance ⇒ linear unmixing method
FCLS-R: Fully Constrained Least Squares unmixing (FCLS)∗ applied to the at-sensor radiances
∗
Heinz and Chang, Fully constrained least squares linear spectral mixture analysis method for material
quantification in hyperspectral imagery, in IEEE TGRS 2001
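A minimal sketch of the FCLS step, using the sum-to-one trick of Heinz and Chang (a heavily weighted row of ones appended to the endmember matrix) on top of non-negative least squares; the endmember spectra here are synthetic, not DUCAS data:

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, r, delta=1e3):
    """Fully constrained least squares: abundances >= 0 and sum ~= 1.
    E: (bands, M) endmember matrix; r: (bands,) pixel spectrum."""
    _, M = E.shape
    E_aug = np.vstack([E, delta * np.ones((1, M))])  # weighted sum-to-one row
    r_aug = np.append(r, delta)
    S, _ = nnls(E_aug, r_aug)                        # nonnegativity from NNLS
    return S

# Toy pixel mixed from two synthetic endmember "radiance" spectra
rng = np.random.default_rng(0)
E = rng.uniform(5.0, 12.0, size=(32, 2))             # (bands, endmembers)
r = E @ np.array([0.3, 0.7])                         # pixel = 0.3 e1 + 0.7 e2
S = fcls(E, r)                                       # -> approx [0.3, 0.7]
```

For FCLS-R the columns of E are the per-endmember at-sensor radiances; the same routine is reused on emissivity spectra for FCLS-E.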
15. FCLS-E: Aggregation Principle
R_sens^{λ,x,y} = Σ_{m=1}^{M} R_sens,m^{λ,x,y}(ε_m^λ, T_m^{x,y}, atm) · S_m^{x,y} ≡ R_sens^{λ,x,y}( ⟨ε^λ⟩, ⟨T⟩, atm )

Identifying both expressions of R_sens^{λ,x,y} leads to the aggregation models for ⟨ε⟩ and ⟨T⟩∗.
∗
Cubero-Castan et al. Physic based aggregation model for the unmixing of temperature and optical properties
in the infrared domain, in IEEE WHISPERS 2012
16. FCLS-E: Linear aggregation model for emissivity
Σ_{m=1}^{M} [ ε_m^λ · B^λ(T_m^{x,y}) + (1 − ε_m^λ) · R_atm,↓^λ ] · S_m^{x,y} = ⟨ε⟩ · B^λ(⟨T⟩) + (1 − ⟨ε⟩) · R_atm,↓^λ
Aim: To build aggregation models identifying each radiative term.∗
⟨ε⟩ = Σ_m ε_m^λ · S_m^{x,y}
⟨T⟩ = (B^λ)^{-1}( Σ_m ε_m^λ · B^λ(T_m^{x,y}) · S_m^{x,y} / Σ_m ε_m^λ · S_m^{x,y} )
Linear model on ⟨ε⟩ ⇒ linear unmixing methods apply
∗
Cubero-Castan et al. Physic based aggregation model for the unmixing of temperature and optical properties
in the infrared domain, in IEEE WHISPERS 2012
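The two aggregation formulas can be sketched at a single band as follows (a Python sketch; the root-finding inversion of B^λ is an illustrative choice, the slides do not prescribe a numerical scheme):

```python
import numpy as np
from scipy.optimize import brentq

H, C_LIGHT, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wl_um, T):
    """Black-body radiance B_lambda(T), W.sr-1.m-2.um-1."""
    wl = wl_um * 1e-6
    return 2*H*C_LIGHT**2 / wl**5 / (np.exp(H*C_LIGHT/(wl*KB*T)) - 1.0) * 1e-6

def aggregate(eps_m, T_m, S_m, wl_um=10.0):
    """Aggregated emissivity and temperature at one band (wl_um)."""
    eps_agg = float(np.sum(eps_m * S_m))                 # <eps> = sum eps_m S_m
    # emissivity-weighted mean of the Planck radiances ...
    target = float(np.sum(eps_m * planck(wl_um, T_m) * S_m)) / eps_agg
    # ... inverted through B_lambda to get the aggregated temperature <T>
    T_agg = brentq(lambda T: planck(wl_um, T) - target, 150.0, 500.0)
    return eps_agg, T_agg

eps_agg, T_agg = aggregate(np.array([0.95, 0.90]),
                           np.array([300.0, 310.0]),
                           np.array([0.5, 0.5]))
# eps_agg = 0.925; T_agg falls between the two material temperatures
```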
17. FCLS-E: General Structure
R_sens^{λ,x,y} = R_sens^{λ,x,y}( ⟨ε⟩^{x,y} = Σ_{m=1}^{M} ε_m^λ · S_m^{x,y}, ⟨T⟩^{x,y}, atm )
A two-step process
1 Estimation of emissivity and temperature
2 Linear unmixing on emissivity (FCLS unmixing)
Is this aggregation model similar to the TES estimations of emissivity and temperature?
18. FCLS-E: Comparison with TES structures
Aim: To compare the aggregation models with TES estimation.∗
Observation: Good agreement (RMSE on ε < 1.5% and RMSE
on T < 1K) between TES estimations and aggregation models, for
mixed pixels with Tmax − Tmin < 15K.
∗
Cubero-Castan et al. The comparability of aggregated emissivity and temperature of heterogeneous pixel to
conventional TES methods, in IEEE WHISPERS 2013
19. TRUST: General structure
A two-step process
Estimation of the mean temperature and the emissivity on
pure pixels, with a supervised or unsupervised localization.
Estimation of the subpixel temperatures and the abundances with the TRUST method.
20. TRUST: A two-estimation unmixing algorithm
A two-estimation unmixing algorithm
Estimation of the subpixel temperature T_i^{x,y} knowing the abundance S_i^{x,y},
Estimation of the abundances using the subpixel temperature estimator.
21. TRUST: Estimation of the subpixel temperatures
Aim: To estimate the subpixel temperatures on a mixed pixel knowing the abundances, the emissivity and the mean temperature of materials.∗
Linearization of the black-body law B^λ(T_i) around the mean temperature of each material.
Best Linear Unbiased Estimator (BLUE).
∆T = (Aᵗ · C⁻¹ · A)⁻¹ · Aᵗ · C⁻¹ · ∆R

with, for band λ_j and material i:
∆R^{λ_j} = R_BOA^{λ_j}(T_i) − R_BOA^{λ_j}(⟨T_i⟩)
A_{λ_j,T_i} = ε_i · S_i · ∂B^{λ_j}(T)/∂T |_{T=⟨T_i⟩}
∆T_i = T_i − ⟨T_i⟩
Results: Good estimation (RMSE < 2K) with few materials within
the mixed pixels and sufficient abundances.
∗
Cubero-Castan et al. An unmixing-based method for the analysis of thermal hyperspectral images, in IEEE
ICASSP’14
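A small sketch of the BLUE step under a diagonal noise covariance C = σ²I (an assumption for the example; the radiance perturbation is generated with the linearized model itself, so the estimator recovers the true temperature offsets):

```python
import numpy as np

H, C_LIGHT, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
WL = np.linspace(8.0, 12.0, 32)   # band centres, um

def planck(wl_um, T):
    wl = wl_um * 1e-6
    return 2*H*C_LIGHT**2 / wl**5 / (np.exp(H*C_LIGHT/(wl*KB*T)) - 1.0) * 1e-6

def dplanck_dT(wl_um, T, h=1e-3):
    # finite-difference dB/dT (the analytic derivative works equally well)
    return (planck(wl_um, T + h) - planck(wl_um, T - h)) / (2 * h)

def build_A(eps, S, T_mean):
    """A[:, i] = eps_i . S_i . dB/dT at the mean T_i -- one column per material."""
    return (eps * S[:, None] * dplanck_dT(WL, T_mean[:, None])).T

def blue(A, delta_R, sigma=0.03):
    """DeltaT = (A^t C^-1 A)^-1 A^t C^-1 DeltaR, with C = sigma^2 I."""
    Cinv = np.eye(A.shape[0]) / sigma ** 2
    return np.linalg.solve(A.T @ Cinv @ A, A.T @ Cinv @ delta_R)

# Two-material pixel: true subpixel temperatures deviate by (+2, -1) K
eps = np.vstack([np.full(32, 0.95), np.full(32, 0.90)])
S = np.array([0.6, 0.4])
T_mean = np.array([300.0, 310.0])
dT_true = np.array([2.0, -1.0])

A = build_A(eps, S, T_mean)
delta_R = A @ dT_true          # radiance perturbation in the linearized model
dT_hat = blue(A, delta_R)      # recovers (+2, -1) K
```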
22. TRUST: Impact of a misestimation of the abundances
Aim: To analyse the impact of an error on the abundance.
Simulation: A mixed pixel with two materials (50%/50%, ASTER
lib., 285K-330K, TASI sensor with 0.03 W.sr−1.m−2.µm−1 noise
level). Shift on abundance between -20% and +20%.
Study of the reconstruction error (RMSE).
The abundance accuracy strongly impacts the reconstruction error.
The reconstruction error is minimal at the true abundance.
Idea
Estimate the abundance by minimizing the reconstruction error
23. TRUST: Joint estimation of the temp. & the abund.
Aim: To jointly estimate the subpixel temperatures and the abundances on mixed pixels.∗
Use of the previous BLUE
estimator.
D(S_i^{x,y}, T_i^{x,y}) = (1/N) · Σ_λ [ R_BOA^{λ,x,y} − R_BOA^λ(S_i^{x,y}, T_i^{x,y}) ]² + γ · (1/M) · Σ_i [ T_i^{x,y} − ⟨T_i⟩ ]²

Data dependency: simulation of the BOA radiance with S_i^{x,y} and T_i^{x,y}.
Physical constraint: the temperature T_i^{x,y} has to be close to ⟨T_i⟩.
∗
Cubero-Castan et al. A physics-based unmixing method for thermal hyperspectral images, in IEEE ICIP’14
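The cost D above can be sketched as follows, with a simple 1-D grid search over the abundance of a two-material pixel standing in for the actual minimization (an illustrative choice, not the optimizer used in the thesis; downwelling radiance is a placeholder constant):

```python
import numpy as np

H, C_LIGHT, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
WL = np.linspace(8.0, 12.0, 32)   # band centres, um

def planck(wl_um, T):
    wl = wl_um * 1e-6
    return 2*H*C_LIGHT**2 / wl**5 / (np.exp(H*C_LIGHT/(wl*KB*T)) - 1.0) * 1e-6

def r_boa(eps, S, T, R_down=1.0):
    """BOA radiance: sum_m [eps_m.B(T_m) + (1 - eps_m).R_down].S_m."""
    return ((eps * planck(WL, T[:, None]) + (1 - eps) * R_down)
            * S[:, None]).sum(axis=0)

def cost(R_meas, eps, S, T, T_mean, gamma=0.1):
    """Data fit on the BOA radiance + pull of T towards the class means."""
    resid = R_meas - r_boa(eps, S, T)
    return (resid ** 2).mean() + gamma * ((T - T_mean) ** 2).mean()

# Two-material pixel; recover the first abundance by 1-D grid search
eps = np.vstack([np.full(32, 0.95), np.full(32, 0.90)])
T_mean = np.array([300.0, 310.0])
R_meas = r_boa(eps, np.array([0.4, 0.6]), T_mean)

grid = np.linspace(0.0, 1.0, 101)
costs = [cost(R_meas, eps, np.array([s, 1.0 - s]), T_mean, T_mean)
         for s in grid]
S_best = grid[int(np.argmin(costs))]   # -> 0.4
```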
24. Different strategies to unmix TIR data
FCLS-R FCLS-E TRUST
25. Outline
1 Introduction
2 Unmixing strategies on mixed pixels
3 Results on synthetic and real data
Assessment of the three unmixing strategies on a supervised
approach
Towards an unsupervised approach
4 Conclusions and perspectives
26. Results on synthetic and real data
Two main results
Assessment of the three unmixing strategies on a supervised
approach
On synthetic data - global comparison and impact of the
spatial temperature distribution
On real data - DUCAS campaign
Towards an unsupervised approach
Estimation of the number of endmembers,
Estimation of the endmembers.
27. Synthetic scene: Comparative study 1/3
Aim: To compare the TRUST method with supervised unmixing
on at-sensor radiances and emissivity spectra.
Simulation: A two-material scene (asphalt & gravel - DUCAS campaign) with a Gaussian temperature distribution. Atmosphere and TASI sensor as in the DUCAS campaign.
Abundance of Asphalt
28. Synthetic scene: Comparative study 2/3
Root Mean Square Error (RMSE) on abundance (ES, %) and temperature (ET, K) estimation using the three unmixing methods.
Abundance maps: FCLS-R Method, FCLS-E Method, TRUST Method.

Pixels   FCLS-R   FCLS-E   TRUST    TRUST
         ES (%)   ES (%)   ES (%)   ET (K)
Asph.    1.8      1.4      < 0.1    1.1
Grav.    2.0      1.9      < 0.1    1.3
Mixed    3.1      9.9      1.7      1.7
Total    2.4      5.0      0.7      1.5

Observation
TRUST performs a better unmixing and gives access to the subpixel temperature.
29. Synthetic scene: Comparative study 3/3
Simulation: Variation of the temperature parameters: standard deviation σT = [0, 5] K and temperature difference dT = [0, 40] K.
FCLS-R Method FCLS-E Method TRUST Method
Observation
FCLS-R: ES increases with the temperature distribution parameters.
FCLS-E: no impact of the temperature distribution, but a constant error.
TRUST: no impact of the temperature distribution and good performance.
30. DUCAS: Comparative study 1/2
Aim: To study the unsupervised unmixing on the DUCAS data.
Image: TASI sensor (32 bands, 8-12 µm) with 1 m spatial resolution. DUCAS campaign (EDA) at Zeebruges, Belgium (2011).
Panchromatic image (CANON/FOI - EDA DUCAS) and EAGLE image (Fraunhofer IOSB - EDA DUCAS): localisation of pure (RGB) and mixed pixels.

TES estimation of emissivity & temperature (K):
         Asph.   Grav.   3rd Roof
Tmean    333.2   315.3   316.6
Tstd     1.3     1.4     0.4
31. DUCAS: Comparative study 2/2
Aim: To estimate the abundances using the three unmixing strategies.
FCLS-R Method FCLS-E Method TRUST Method
Observation
TRUST method favors sparsity and provides the best results.
FCLS-R and TRUST find nearly similar abundances on the
mixed pixels.
FCLS-E gives the poorest results, mainly on the Asphalt & 3rd roof mixture abundances.
32. DUCAS: Subpixel T estimation
Objective: Study the estimation of the temperature using the TRUST method applied on at-sensor radiance.
TRUST abund. maps (R=Asph.,
G = Grav., B = 3rd roof)
Asphalt Sub. Temp. (K) Gravel Sub. Temp. (K) 3rd roof Sub. Temp. (K)
Observation
Coherent estimation of subpixel temperatures
33. Unsupervised unmixing chain design
Three steps to design an unsupervised unmixing chain:
Estimation of the number of endmembers,
A parameter-free method: HySime∗
,
Identification of the endmembers,
Geometrical approach (a good solution at high Signal-to-Noise Ratio (SNR) when no dictionary is available): VCA†
,
,
Estimation of the abundances of each
endmember,
Knowing endmembers leads to FCLS
Unmixing‡
.
Illustration of the geometrical approach:
projection of the 3-material simplex.
∗
Bioucas-Dias et al., Hyperspectral subspace identification, in IEEE TGRS 2008
†
Nascimento et al., Vertex Component Analysis: A fast algorithm to unmix hyperspectral data, in IEEE TGRS
2005
‡
Heinz et al., Fully constrained least square linear spectral mixture analysis method for material quantification
in hyperspectral imagery, in IEEE TGRS 2001
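The geometrical approach can be illustrated with synthetic data: pixels mixed from three endmembers lie on a 2-simplex, which a PCA projection displays as a triangle (a sketch only; VCA itself then searches for the vertices):

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.uniform(0.7, 1.0, size=(32, 3))    # three synthetic endmember spectra
S = rng.dirichlet(np.ones(3), size=500)    # abundances drawn on the 2-simplex
X = S @ E.T                                # (pixels, bands) mixed data

# PCA: centre the cloud and project on the first two principal axes
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
proj = (X - mean) @ Vt[:2].T               # 2-D scatter of the mixed pixels
E_proj = (E.T - mean) @ Vt[:2].T           # projected endmembers = triangle
# the mixed pixels fall inside the triangle spanned by E_proj
```

Because the centred data are affine combinations of three points, their rank is two: the third singular value is numerically zero, which is also the principle behind subspace-based estimation of the number of endmembers.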
34. Estimation of the number of endmembers
Strategy: Unsupervised (HySime + VCA + FCLS) on radiance.
FCLS-R with 5 materials; TES estimation.
35. Estimation of the number of endmembers
Strategy: Unsupervised (HySime + VCA + FCLS) on emissivity.
FCLS-E with 8 materials.
36. Estimation of the endmembers
Aim: Study the Principal Component Analysis (PCA) projection
on radiance on a synthetic three-material scene (temperatures and
emissivities extracted from DUCAS data).
Each pure material clusters at a vertex of the simplex.
The spatial distribution of T blurs the PCA projection.
37. Estimation of the endmembers
Aim: Study the Principal Component Analysis (PCA) projection on
emissivity on a synthetic three-material scene (temperatures and
emissivities extracted from DUCAS data).
Each pure material lies on a manifold, regardless of the temperature distribution.
38. Outline
1 Introduction
2 Unmixing strategies on mixed pixels
3 Results on synthetic and real data
4 Conclusions and perspectives
39. Performances of the three unmixing methods - 1/2
FCLS-R Good estimation of abundances (2.4% error with 2
materials), except with high temperature distribution
(RMSE > 50% with σT = 5K and T1 = T2).
FCLS-E No impact of temperature distribution but emissivity
maps are too noisy (5% to 10% error with 2
materials)
TRUST Good estimation of abundance (0.7% error with 2
materials) and independence of temperature
distribution.
Computation time: 30 s with TRUST, 0.50 s with FCLS-E and 0.12 s with FCLS-R (320 pixels with 3 materials)
40. Performances of the three unmixing methods - 2/2
TRUST A new method to unmix thermal infrared images.
New estimation of the subpixel temperature
Validation on synthetic data
Good with 1 or 2 materials composing the pixel
but not with 3 or more materials.
Application on real data
DUCAS - Zeebruges, 2011
AHS-EUFAR - Salon de Provence, 2010
Which method is the most relevant, and in which context should it be used, to unmix a hyperspectral image in the TIR domain?
41. Recommendations
Recommendation for the supervised unmixing in TIR domain
No spatial distribution of temperature (low σT and high dT )
Supervised unmixing on at-sensor radiances: FCLS-R
High spatial distribution of temperature (high σT )
Joint supervised strategy with unmixing on at-sensor radiances
and TES estimation: TRUST
Recommendation for an unsupervised approach
With small spatial distribution of temperature, a classical
unsupervised approach works on at-sensor radiances.
Overdetermination of the number of endmembers ⇒
post-processing of data to merge similar classes (same
emissivity but different temperature)
42. Contributions
International peer-review articles
• M. Cubero-Castan, J. Chanussot, X. Briottet, M. Shimoni & V. Achard, “A physics-based unmixing method to
estimate temperature on mixed pixels", IEEE TGRS, vol.53, no.4, pp.1894-1906, Apr. 2015.
• R. Oltra-Carrió, M. Cubero-Castan, X. Briottet & J. Sobrino, “Analysis of the performance of the TES algorithm
over urban areas", IEEE TGRS, vol.52, no.11, pp.6989-6998, Nov. 2014.
International conference participations
• M. Cubero-Castan, J. Chanussot, X. Briottet, M. Shimoni & V. Achard, “An unmixing-based method for the analysis
of thermal hyperspectral images", IEEE ICASSP, 2014, Florence, Italy.
• M. Cubero-Castan, J. Chanussot, V. Achard, X. Briottet & M. Shimoni, “A physics-based unmixing method for
thermal hyperspectral images", IEEE ICIP, 2014, Paris, France.
• M. Cubero-Castan, X. Briottet, V. Achard, M. Shimoni & J. Chanussot, “The Comparability of aggregated
emissivity and temperature of heterogeneous pixel to conventional TES methods", IEEE GRSS WHISPERS,
2013, Gainesville - Florida, USA.
• R. Oltra-Carrió, J.A. Sobrino, M. Cubero-Castan, X. Briottet, Performance of TES method over urban areas at a
high spatial resolution scale, IEEE GRSS WHISPERS, 2013, Gainesville - Florida, USA
• M. Cubero-Castan, M. Shimoni, X. Briottet, V. Achard & J. Chanussot, “Sensitivity analysis of temperature and
emissivity separation models to radiometric noise", 8th EARSEL, 2013, Nantes, France. (summary)
• M. Cubero-Castan, X. Briottet, M. Shimoni, V. Achard & J. Chanussot, “Physic based aggregation model for the
unmixing of temperature and optical properties in the infrared domain", IEEE GRSS WHISPERS, 2012,
Shanghaï, China.
• M. Shimoni, X. Briottet, M. Cubero-Castan, C. Perneel, V. Achard & J. Chanussot, The assessment of aggregate
thermal hyperspectral scene, IEEE GRSS WHISPERS, 2012, Shanghaï, China.
Other participation (french symposium)
• M. Cubero-Castan, X. Briottet, M. Shimoni, V. Achard & J. Chanussot, “Modèle d’agrégation physique pour le
démixage en température et en émissivité dans le domaine infrarouge, SFTH, 2012, Toulouse, France. (only
presentation)
43. Perspectives - 1/2
TRUST improvement
Unsupervised algorithm in two steps:
Selection of pure pixels for each material,
Speed-up of the TRUST algorithm
Classification method based on TRUST
Utilisation of Support Vector Machine (SVM) algorithm.
Application of TRUST with few materials (no spectral
feature, silicate absorption, etc.).
SVM Classifier = abundance maps estimated by TRUST.
44. Perspectives - 2/2
Temperature as a manifold variation
Variation of temperature as intraclass variability of
endmember.
Apply new unmixing method to diminish the temperature
effect on unmixing results.
(On-going collaboration with L. Drumetz - GIPSA-Lab)
Combine images acquired in the V-NIR and in the TIR domains
SYSIPHE: Same spatial resolution VNIR & TIR
THIRSTY: High spatial resolution in VNIR but few spectral
bands ⇒ Spatial information on VNIR image could help TIR
image processing
45. And then ...
... any questions ?