Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate real-world scenes into natural and manmade scenes of similar depth. The global roughness of a scene image varies as a function of image depth: an increase in image depth leads to an increase in roughness in manmade scenes, whereas natural scenes exhibit smooth behavior at higher image depth. This particular arrangement of pixels in the scene structure is well explained by the local texture information in a pixel and its neighborhood. The proposed method analyses the local texture information of a scene image using a texture unit matrix. For the final classification, both supervised and unsupervised learning are used, via a K-Nearest Neighbor (KNN) classifier and a Self-Organizing Map (SOM) respectively. The technique is suitable for online classification owing to its very low computational complexity.
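The texture-unit encoding behind the abstract can be sketched with the classic He & Wang scheme (an assumption about the exact variant the paper uses): each of a pixel's 8 neighbours is coded by comparison with the centre, and the codes combine into one number per pixel.

```python
import numpy as np

def texture_unit_matrix(img):
    """Texture unit number (after He & Wang; an assumption about the
    exact variant the paper uses) for each interior pixel: each of the
    8 neighbours is coded 0/1/2 (below / equal / above the centre) and
    the codes combine base-3 into a value in [0, 6560]."""
    img = np.asarray(img, dtype=np.int64)
    H, W = img.shape
    c = img[1:-1, 1:-1]                       # interior centre pixels
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    tu = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offs):
        n = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]  # shifted neighbour view
        code = np.where(n < c, 0, np.where(n == c, 1, 2))
        tu += code * 3 ** k
    return tu

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
```

A histogram of these values over the image (the texture spectrum) would then feed the KNN/SOM classifiers described above.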
SEGMENTATION USING ‘NEW’ TEXTURE FEATURE (acijjournal)
Color, texture, shape and luminance are the prominent features for image segmentation. Texture is an organized group of spatially repetitive arrangements in an image and is a vital attribute in many image processing and computer vision applications. The objective of this work is to segment the texture sub-images from a given arbitrary image. The main contribution of this work is to introduce the “NEW” texture feature descriptor to the image segmentation field. The NEW texture descriptor labels the neighborhood pixels of a pixel in an image as N, W, NW, NE, WW, NN and NNE (N = North, W = West). To find the prediction value, the gradients of the intensity function are calculated. Eight-component binary vectors are formed and compared to the prediction value, finally yielding 256 possible vectors. Fuzzy c-means clustering is used to segment the similar regions in the textural image. Extensive experimentation shows that the proposed methodology works well for segmenting texture images, and the segmentation performance is also evaluated.
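The final clustering step can be sketched with a minimal fuzzy c-means implementation (the descriptor extraction is omitted; X here is just 2-D points for illustration, not the paper's 256-vector features):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and the
    membership matrix U (each row sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                    # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))              # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
centers, U = fuzzy_cmeans(X, 2)
labels = U.argmax(axis=1)   # hard labels from soft memberships
```

In the paper's setting, each pixel's texture-descriptor vector would take the place of these toy points.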
PDE BASED FEATURES FOR TEXTURE ANALYSIS USING WAVELET TRANSFORM (IJCI Journal)
In this paper, a novel method of partial differential equation (PDE) based features for texture analysis using the wavelet transform is proposed. The aim is to investigate texture descriptors that perform well at low computational cost. The wavelet transform is applied to obtain directional information from the image, and anisotropic diffusion is used to find a texture approximation from this directional information. The texture approximation is then used to compute various statistical features, and LDA is employed to enhance class separability. A k-NN classifier with tenfold experimentation is used for classification. The proposed method is evaluated on the Brodatz dataset, and the experimental results demonstrate its effectiveness compared to other methods in the literature.
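The anisotropic diffusion step named in the abstract is commonly the Perona-Malik scheme; a minimal sketch (parameter values are illustrative, not the paper's):

```python
import numpy as np

def perona_malik(img, iters=20, kappa=15.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions
    while preserving edges, since conduction falls off with gradient
    magnitude. lam <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    for _ in range(iters):
        # differences to the four nearest neighbours
        # (circular boundary via roll, acceptable for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
noisy = 100 + 10 * rng.standard_normal((32, 32))
smooth = perona_malik(noisy)
```

In the proposed pipeline this diffusion would run on the wavelet sub-bands rather than on the raw image.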
This presentation will give a simple overview of image classification techniques using different types of software, focusing on object-based image classification and segmentation.
Seed net automatic seed generation with deep reinforcement learning for robus... (NAVER Engineering)
This paper proposes a seed generation method based on deep reinforcement learning for the interactive segmentation problem. One of the issues in interactive segmentation is minimizing user intervention; the proposed system generates artificial seeds on the user's behalf, so the user only needs to provide initial seed information. Because the ambiguity in defining optimal seed points makes supervised training difficult, we overcome this with reinforcement learning: we define an MDP suited to the seed generation problem and successfully train a deep Q-network. Trained on the MSRA10K dataset, the system shows superior performance compared to the inaccurate initial results of existing segmentation algorithms.
Texture Unit based Approach to Discriminate Manmade Scenes from Natural Scenes (idescitation)
In this paper a method is proposed to discriminate natural and manmade scenes of similar depth. An increase in image depth leads to an increase in roughness in manmade scenes; on the contrary, natural scenes exhibit smooth behavior at higher image depth. This particular arrangement of pixels in the scene structure is well explained by the local texture information in a pixel and its neighborhood. The proposed method analyses the local texture information of a scene image using a texture unit matrix. For the final classification, unsupervised learning with a Self-Organizing Map (SOM) is used. The technique is suitable for online classification owing to its very low computational complexity.
Image Segmentation from RGBD Images by 3D Point Cloud Attributes and High-Lev... (CSCJournals)
In this paper, an approach is developed for segmenting an image into major surfaces and potential objects using RGBD images and 3D point cloud data retrieved from a Kinect sensor. In the proposed segmentation algorithm, depth and RGB data are mapped together. Color, texture, XYZ world coordinates, and normal-, surface-, and graph-based segmentation index features are then generated for each pixel point. These attributes are used to cluster similar points together and segment the image. The inclusion of new depth-related features provided improved segmentation performance over RGB-only algorithms by resolving illumination and occlusion problems that cannot be handled using graph-based segmentation algorithms, as well as by accurately identifying pixels associated with the main structural components of rooms (walls, ceilings, floors). Since each segment is a potential object or structure, the output of this algorithm is intended to be used for object recognition. The algorithm has been tested on commercial building images, and results show the usability of the algorithm in real-time applications.
Abstract: Primarily due to progress in super-resolution imagery, methods of segment-based image analysis for generating and updating geographical information are becoming increasingly important. This work presents an image segmentation based on colour features with K-means clustering. The work is divided into two stages. First, the colour separation of the satellite image is enhanced using decorrelation stretching, and then the regions are grouped into a set of five classes using the K-means clustering algorithm. The spatial information around every pixel is first aggregated, and two filtering procedures are added to suppress the effect of pseudo-edges. Moreover, a spatial information weight is constructed and clustered with K-means, and the regularization strength in every region is controlled by the cluster-centre value. The experimental results, on both simulated and real datasets, demonstrate that the proposed approach can effectively reduce the pseudo-edges of the total variation regularization.
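The clustering stage of that pipeline can be sketched with plain K-means over per-pixel colour features (the decorrelation stretch and spatial weighting are omitted; the sample data is illustrative):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: returns cluster centres and per-sample labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # init from data
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)          # assign to nearest centre
        for j in range(k):
            if np.any(labels == j):        # recompute non-empty centres
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# toy "pixels": five red-ish and five blue-ish RGB triples
X = np.vstack([np.tile([255., 0., 0.], (5, 1)),
               np.tile([0., 0., 255.], (5, 1))])
centers, labels = kmeans(X, 2)
```

For a real satellite image, X would hold one row per pixel (k = 5 classes in the paper), and the labels would be reshaped back to the image grid.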
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIP (csandit)
Most watermarking methods use pixel values or coefficients as the judgment condition for embedding or extracting a watermark image. Variation in these values may make the condition inaccurate, so that an incorrect judgment is made. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation. The judgment principle depends on the scale relationship of two pixels. From observation of common signal processing operations, we find that the pixel values of a processed image usually remain stable unless the image has been manipulated by a cropping attack or halftone transformation. This greatly helps reduce the modification introduced by image processing operations. Experimental results show that the proposed method can resist various attacks while preserving good image quality.
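The scale-relationship judgment can be illustrated with a toy embed/extract pair; this is a hypothetical simplification, not the paper's actual embedding rule:

```python
def embed_bit(p, q, bit):
    """Toy illustration of a scale-relationship watermark: enforce
    p >= q for bit 1 and p < q for bit 0 by swapping the pixel pair
    when needed (assumes the two pixel values differ)."""
    if bit == 1:
        return (p, q) if p >= q else (q, p)
    return (p, q) if p < q else (q, p)

def extract_bit(p, q):
    """Read the bit back from the ordering of the pair."""
    return 1 if p >= q else 0
```

The point of the ordering-based condition is that mild processing (compression, filtering) rarely flips which of two pixels is larger, so the extracted bit stays stable.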
Data-Driven Motion Estimation With Spatial Adaptation (CSCJournals)
The pel-recursive computation of 2-D optical flow raises a wealth of issues, such as the treatment of outliers, motion discontinuities and occlusion. Our proposed approach deals with these issues within a common framework. It relies on the use of a data-driven technique called Generalised Cross Validation to estimate the best regularisation scheme for a given pixel. In our model, the regularisation parameter is a general matrix whose entries can account for different sources of error. The motion vector estimation takes into consideration local image properties following a spatially adaptive approach where each moving pixel is supposed to have its own regularisation matrix. Preliminary experiments indicate that this approach provides robust estimates of the optical flow.
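Generalised Cross-Validation can be sketched for the scalar case (the paper uses a full regularisation matrix per pixel; here a single ridge weight is selected, as an illustrative assumption):

```python
import numpy as np

def gcv_lambda(A, b, lambdas):
    """Pick a regularisation weight by Generalised Cross-Validation:
    minimise n * ||(I - H) b||^2 / trace(I - H)^2, where H is the
    influence (hat) matrix of the regularised least-squares fit."""
    n = len(b)
    best_lam, best_score = None, np.inf
    for lam in lambdas:
        H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
        r = b - H @ b                      # GCV residual
        score = n * float(r @ r) / (n - np.trace(H)) ** 2
        if score < best_score:
            best_lam, best_score = lam, score
    return best_lam

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))           # stand-in for the motion model
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(20)
lam = gcv_lambda(A, b, [1e-3, 1e-1, 10.0])
```

In the pel-recursive setting, A and b would come from the local brightness-constancy equations around each moving pixel.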
Satellite Image Enhancement Using Dual Tree Complex Wavelet Transform (journalBEEI)
Losing high-frequency components is a major drawback in resolution enhancement. In this work, a wavelet-domain image resolution enhancement technique using the Dual-Tree Complex Wavelet Transform (DT-CWT) is proposed for satellite images. Input images are decomposed using the DT-CWT, and the inverse DT-CWT is used to generate a new resolution-enhanced image from the interpolated high-frequency sub-band images and the input low-resolution image. An intermediate stage is proposed for estimating the high-frequency sub-bands to achieve a sharper image. The technique has been tested on benchmark images from a public database. Peak Signal-to-Noise Ratio (PSNR) and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.
Attentive semantic alignment with offset aware correlation kernels (NAVER Engineering)
Semantic correspondence is the problem of establishing correspondences across images depicting different instances of the same object or scene class. One recent approach to this problem is to estimate the parameters of a global transformation model that densely aligns one image to the other. Since an entire correlation map between all feature pairs across images is typically used to predict such a global transformation, noisy features from different backgrounds, clutter, and occlusion distract the predictor from correct estimation of the alignment. This is a challenging issue, in particular, in the problem of semantic correspondence, where a large degree of image variation is often involved. In this paper, we introduce an attentive semantic alignment method that focuses on reliable correlations, filtering out distractors. For effective attention, we also propose an offset-aware correlation kernel that learns to capture translation-invariant local transformations in computing correlation values over spatial locations. Experiments demonstrate the effectiveness of the attentive model and offset-aware kernel, and the proposed model combining both techniques achieves state-of-the-art performance.
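The full correlation map that the abstract starts from can be computed as follows (the attention module and the offset-aware kernel themselves are omitted):

```python
import numpy as np

def correlation_map(fa, fb):
    """Dense cosine correlations between every feature location in fa
    and every location in fb; fa and fb are (H, W, C) feature maps."""
    na = fa / np.linalg.norm(fa, axis=2, keepdims=True)  # L2-normalise channels
    nb = fb / np.linalg.norm(fb, axis=2, keepdims=True)
    # (i, j) in image A against (k, l) in image B
    return np.einsum('ijc,klc->ijkl', na, nb)

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 4, 8))   # toy feature map
corr = correlation_map(f, f)
```

Every location correlates perfectly with itself, which is what the sanity check below relies on; the paper's contribution is deciding which of these H×W×H×W values to trust.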
MULTIFOCUS IMAGE FUSION USING MULTIRESOLUTION APPROACH WITH BILATERAL GRADIEN... (cscpconf)
The fusion of two or more images is required when images are captured using different sensors, different modalities, or different camera settings, to produce an image that is more suitable for computer processing and human visual perception. The optical lenses in cameras have a limited depth of focus, so it is not possible to acquire an image in which all objects are in focus. In this case a multifocus image fusion technique is needed to create a single image where all objects are in focus, by combining the relevant information of the input images. Since sharp images contain more information than blurred ones, image sharpness is taken as one piece of relevant information when framing the fusion rule. Many existing algorithms use contrast or high local energy as a measure of local sharpness; in practice, particularly in multimodal image fusion, this assumption does not hold. In this paper we propose a method that combines a multiresolution transform with a local phase coherence measure to estimate sharpness in the images. The performance of the fusion process was evaluated with mutual information, edge association, and spatial frequency as quality metrics, and compared with the Laplacian pyramid, DWT (Discrete Wavelet Transform), and bilateral gradient based sharpness criterion methods. The results show that the proposed algorithm performs better than the existing ones.
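The choose-max fusion rule the abstract describes can be sketched with a crude sharpness proxy (a Laplacian response; the paper instead uses local phase coherence within a multiresolution transform):

```python
import numpy as np

def fuse_multifocus(a, b):
    """Per-pixel choose-max fusion: keep the source pixel whose local
    Laplacian response (a crude sharpness proxy, standing in for the
    paper's local phase coherence measure) is larger."""
    def lap(u):
        u = u.astype(float)
        # 4-neighbour Laplacian magnitude, circular boundary via roll
        return np.abs(np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return np.where(lap(a) >= lap(b), a, b)

a = np.zeros((6, 6))                        # defocused source: featureless
b = np.zeros((6, 6)); b[2:4, 2:4] = 10.0    # in-focus source: has detail
fused = fuse_multifocus(a, b)
```

In the toy example the detail in b has the larger response everywhere it matters, so the fused result keeps it.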
CPlaNet: Enhancing Image Geolocalization by Combinatorial Partitioning of Maps (NAVER Engineering)
Image geolocalization is the task of identifying the location depicted in a photo based only on its visual information. This task is inherently challenging since many photos have only few, possibly ambiguous cues to their geolocation. Recent work has cast this task as a classification problem by partitioning the earth into a set of discrete cells that correspond to geographic regions. The granularity of this partitioning presents a critical trade-off: using fewer but larger cells results in lower location accuracy, while using more but smaller cells reduces the number of training examples per class and increases model size, making the model prone to overfitting. To tackle this issue, we propose a simple but effective algorithm, combinatorial partitioning, which generates a large number of fine-grained output classes by intersecting multiple coarse-grained partitionings of the earth. Each classifier votes for the fine-grained classes that overlap with its respective coarse-grained ones. This technique allows us to predict locations at a fine scale while maintaining sufficient training examples per class. Our algorithm achieves state-of-the-art performance in location recognition on multiple benchmark datasets.
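The voting over intersected partitionings can be sketched as follows (toy cell counts; the actual classifiers and earth partitionings are of course far larger):

```python
import numpy as np

def combinatorial_vote(partitions, scores):
    """Score each fine-grained cell by summing, over all coarse
    partitionings, the classifier score of the coarse cell that
    contains it, then return the best fine cell.
    partitions[i]: array mapping fine cell -> coarse class in partitioning i
    scores[i]:     classifier scores over that partitioning's coarse classes."""
    total = sum(s[p] for p, s in zip(partitions, scores))
    return int(np.argmax(total))

# 4 fine cells formed by intersecting two 2-way partitionings
partitions = [np.array([0, 0, 1, 1]),   # partitioning 1: left/right halves
              np.array([0, 1, 0, 1])]   # partitioning 2: top/bottom halves
scores = [np.array([0.9, 0.1]),         # classifier 1 favours coarse cell 0
          np.array([0.8, 0.2])]         # classifier 2 favours coarse cell 0
```

Fine cell 0 lies in the favoured coarse cell of both partitionings, so it accumulates the highest combined score.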
Remote Sensing – Beyond Images
Mexico 14-15 December 2013
The workshop was organized by the CIMMYT Global Conservation Agriculture Program (GCAP) and funded by the Bill & Melinda Gates Foundation (BMGF), the Mexican Secretariat of Agriculture, Livestock, Rural Development, Fisheries and Food (SAGARPA), the International Maize and Wheat Improvement Center (CIMMYT), the CGIAR Research Program on Maize, the Cereal System Initiative for South Asia (CSISA), and the Sustainable Modernization of Traditional Agriculture (MasAgro) program.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that lead to closing the deal.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop with the participants, exploring different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Kubernetes & AI - Beauty and the Beast!? @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Tarabalka_IGARSS.pdf
Best Merge Region Growing
with Integrated Probabilistic Classification
for Hyperspectral Imagery
Yuliya Tarabalka and James C. Tilton
NASA Goddard Space Flight Center,
Mail Code 606.3, Greenbelt, MD 20771, USA
e-mail: yuliya.tarabalka@nasa.gov
July 28, 2011
Outline
1 Introduction
2 Proposed spectral-spatial classification scheme
3 Conclusions and perspectives
Yuliya Tarabalka and James C. Tilton (yuliya.tarabalka@nasa.gov) Best merge region growing with integrated classification for HS data 2
Hyperspectral image
Every pixel contains a detailed spectrum (>100 spectral bands).
+ More information per pixel → increasing capability to distinguish objects
− Dimensionality increases → image analysis becomes more complex
⇓
Advanced algorithms are required!
[Figure: hyperspectral data cube (image lines × samples × tens or hundreds of spectral bands); each pixel is a vector xi sampled along wavelength λ]
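The cube layout just described can be pictured as a 3-D array. A minimal NumPy sketch (sizes here are illustrative placeholders, not values from the deck):

```python
# Hyperspectral cube as a 3-D array: lines x samples x spectral bands.
# The sizes below are illustrative, not from the deck.
import numpy as np

lines, samples, bands = 145, 145, 200
cube = np.random.rand(lines, samples, bands)   # placeholder reflectance values

# Every pixel holds a detailed spectrum: a B-dimensional vector x_i.
x_i = cube[10, 20, :]
print(x_i.shape)   # (200,)
```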
Supervised classification problem
[Figure: AVIRIS image, ground-truth data, and the classification task]
Spatial resolution: 20 m/pix
Spectral resolution: 200 bands
16 classes: corn-no till, corn-min till, corn, soybeans-no till, soybeans-min till, soybeans-clean till, alfalfa, grass/pasture, grass/trees, grass/pasture-mowed, hay-windrowed, oats, wheat, woods, bldg-grass-tree-drives, stone-steel towers
Classification approaches
Only spectral information (pixelwise approach):
The spectrum of each pixel is analyzed
SVM and kernel-based methods → good classification accuracies
Spectral + spatial information:
Info about spatial structures is included, because neighboring pixels are related
How to extract spatial information?
How to combine spectral and spatial information?
Our previous research
Segment a hyperspectral image into homogeneous regions
Each region = adaptive neighborhood for all the pixels within the region
Spectral info + segmentation map → classify image
[Figure: a pixelwise classification map (dark blue, white and light grey classes) combined with a segmentation map of 3 spatial regions by majority vote within each region, yielding the spectral-spatial classification map]
Drawback of unsupervised segmentation: dependence on the chosen measure of homogeneity
Our previous research: Marker-controlled segmentation
Probabilistic pixelwise SVM classification → classification map + probability map ⇒ markers = the most reliably classified pixels ⇒ marker-controlled region growing
Drawback: strong dependence on the performance of the selected probabilistic classifier
Objective
Perform segmentation and classification concurrently → best merge region growing with integrated classification
Two questions to answer: which dissimilarity criterion? which convergence criterion?
Proposed scheme
Input: B-band hyperspectral image X = {xj ∈ R^B, j = 1, 2, ..., n}, B ∼ 100
1. Preliminary probabilistic classification; each pixel = one region
2. While not converged:
   Find min(DC) between all pairs of spatially adjacent regions (SAR)
   Merge all pairs of SAR with DC = min(DC); classify new regions
Output: spectral-spatial classification map
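The while-loop of the scheme can be sketched as follows. This toy version is a simplification relative to the deck: it merges one best pair per iteration (the deck merges all pairs attaining min(DC)), uses the Spectral Angle Mapper alone as the dissimilarity, and stops at a target region count rather than at the deck's convergence criterion.

```python
# Structural sketch of the best-merge loop (simplified: one merge per
# iteration, SAM-only dissimilarity, target region count as stopping rule).
import numpy as np

def sam(u, v):
    """Spectral Angle Mapper between two region mean vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def best_merge(regions, adjacency, n_final):
    """regions: {rid: (mean_vector, cardinality)}; adjacency: set of frozenset pairs."""
    while len(regions) > n_final:                 # toy convergence criterion
        # Find the pair of spatially adjacent regions with minimal dissimilarity.
        pair = min(adjacency, key=lambda p: sam(*[regions[r][0] for r in p]))
        i, j = sorted(pair)
        (ui, ci), (uj, cj) = regions[i], regions[j]
        # Merge: cardinality-weighted mean vector, summed cardinality.
        regions[i] = ((ui * ci + uj * cj) / (ci + cj), ci + cj)
        del regions[j]
        # Rewire adjacency: neighbours of j become neighbours of i.
        adjacency = {frozenset({i if r == j else r for r in p})
                     for p in adjacency if p != pair}
        adjacency = {p for p in adjacency if len(p) == 2}
    return regions

# Toy example: four single-pixel regions in a chain, two spectral groups.
regs = {0: (np.array([1.0, 0.0]), 1), 1: (np.array([0.9, 0.1]), 1),
        2: (np.array([0.0, 1.0]), 1), 3: (np.array([0.1, 0.9]), 1)}
adj = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}
print(sorted(best_merge(regs, adj, 2)))   # [0, 2]
```

The similar pixels (0, 1) and (2, 3) are merged first, leaving two spectrally distinct regions.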
Preliminary probabilistic classification
Kernel-based SVM classifier* → well suited for hyperspectral images
Output:
a classification map L = {Lj, j = 1, ..., n}
for each pixel xj, a vector of K class probabilities {P(Lj = k | xj), k = 1, ..., K}
*C. Chang and C. Lin, "LIBSVM: A library for Support Vector Machines," software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2011.
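The deck cites LIBSVM for this step. An equivalent probabilistic SVM is available through scikit-learn's SVC (which wraps LIBSVM); the toy data and parameters below are this sketch's assumptions, not the authors' setup.

```python
# Preliminary probabilistic classification on toy pixel spectra,
# using scikit-learn's SVC as a stand-in for LIBSVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
B, K = 10, 3                                   # spectral bands, classes (toy sizes)
X = np.vstack([rng.normal(loc=3 * k, size=(30, B)) for k in range(K)])
y = np.repeat(np.arange(K), 30)                # ground-truth labels

clf = SVC(kernel="rbf", probability=True).fit(X, y)
labels = clf.predict(X)                        # classification map L = {L_j}
probs = clf.predict_proba(X)                   # {P(L_j = k | x_j), k = 1..K}
print(probs.shape)                             # (90, 3); each row sums to 1
```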
Hierarchical step-wise optimization with classification
Step 1: Each pixel xi = one region Ri, with a preliminary class label L(Ri) and class probabilities {Pk(Ri) = P(L(Ri) = k | Ri), k = 1, ..., K}.
[Figure: the image initialized as one region per pixel, regions numbered 1 to 12]
Step 2: Calculate the Dissimilarity Criterion (DC) between spatially adjacent regions. DC is a function of region statistical, geometrical and classification features.
Compute the Spectral Angle Mapper between the region mean vectors ui and uj:
SAM(ui, uj) = arccos( (ui · uj) / (‖ui‖2 ‖uj‖2) )
If adjacent regions have equal class labels L(Ri) = L(Rj) = k′ → they more likely belong to the same region:
DC = (2 − max(Pk′(Ri), Pk′(Rj))) · SAM(ui, uj)
If two large regions (card(Ri) > M and card(Rj) > M) are assigned to different classes → they cannot be merged together:
DC = ∞
Otherwise, if two regions have different class labels → the DC between them is penalized by (2 − min(PL(Rj)(Ri), PL(Ri)(Rj))):
DC = (2 − min(PL(Rj)(Ri), PL(Ri)(Rj))) · SAM(ui, uj)
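The three-branch criterion above can be transcribed directly. M (the large-region threshold) comes from the slides; the function itself is an illustrative sketch, not the authors' code.

```python
# Dissimilarity Criterion between two spatially adjacent regions R_i, R_j.
import math
import numpy as np

def sam(u, v):
    """Spectral Angle Mapper between region mean vectors u_i, u_j."""
    c = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return math.acos(max(-1.0, min(1.0, c)))

def dissimilarity(ui, uj, Li, Lj, Pi, Pj, card_i, card_j, M):
    """Pi, Pj: class-probability vectors P_k(R_i), P_k(R_j); Li, Lj: labels."""
    s = sam(ui, uj)
    if Li == Lj:                                  # equal labels: encourage merging
        return (2.0 - max(Pi[Li], Pj[Lj])) * s
    if card_i > M and card_j > M:                 # two large, differently labelled
        return math.inf                           # regions can never be merged
    # Different labels: penalise by how little each region believes the
    # other's class, (2 - min(P_{L(Rj)}(R_i), P_{L(Ri)}(R_j))).
    return (2.0 - min(Pi[Lj], Pj[Li])) * s

ui, uj = np.array([1.0, 0.0]), np.array([0.0, 1.0])        # SAM = pi/2
print(round(dissimilarity(ui, uj, 0, 0, [0.9, 0.1], [0.8, 0.2], 1, 1, M=5), 4))  # 1.7279
```

The printed value is (2 − 0.9) · π/2 for the equal-label branch.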
Step 3: Find the smallest dissimilarity criterion DCmin among all pairs of spatially adjacent regions.
[Figure: region adjacency grid with the pair achieving DCmin highlighted]
Step 4: Merge all pairs of spatially adjacent regions with DC = DCmin.
For each new region Rnew = Ri + Rj:
Pk(Rnew) = (Pk(Ri) · card(Ri) + Pk(Rj) · card(Rj)) / card(Rnew)
L(Rnew) = arg max over k of {Pk(Rnew)}
All the pixels in Rnew get a definite class label.
[Figure: spatially adjacent regions Ri and Rj merged into Rnew on the region grid]
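The merge update above, as a small function. Names follow the slide's notation; the code itself is an illustrative sketch.

```python
# Cardinality-weighted update of class probabilities when merging R_i and R_j.
import numpy as np

def merge_regions(P_i, card_i, P_j, card_j):
    """Returns P_k(R_new), card(R_new) and the definite label L(R_new)."""
    card_new = card_i + card_j
    P_new = (P_i * card_i + P_j * card_j) / card_new
    L_new = int(np.argmax(P_new))      # L(R_new) = argmax_k P_k(R_new)
    return P_new, card_new, L_new

P_new, card_new, L_new = merge_regions(np.array([0.7, 0.3]), 3,
                                       np.array([0.2, 0.8]), 1)
print(P_new, card_new, L_new)   # [0.575 0.425] 4 0
```

The larger region dominates the average, so the merged region keeps its label here.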
Summary of the loop
2. Calculate the Dissimilarity Criterion between adjacent regions
3. Find the smallest dissimilarity criterion DCmin
4. Merge all pairs of spatially adjacent regions with DC = DCmin
5. Stop if all n pixels get a definite class label; if not converged, go to step 2
Classification maps
[Figure: classification maps obtained with pixelwise SVM and with the proposed HSwC method]
Conclusions
1 A new spectral-spatial classification method for hyperspectral images was proposed
2 A new dissimilarity criterion between image regions was defined
3 The proposed method improves classification accuracies and provides classification maps with homogeneous regions
Perspectives
Explore further the choice of:
optimal representative features for segmentation regions
dissimilarity measures between regions
Thank you for your attention!