A quantitative evaluation methodology for disparity maps includes the selection of an error measure. Among existing measures, the percentage of bad matched pixels is commonly used. Nevertheless, it requires an error threshold, so a score of zero bad matched pixels does not necessarily imply that a disparity map is free of errors. Moreover, we have not found publications in which different error measures are applied during the evaluation process. In this paper, error measures are characterised in order to provide a basis for selecting a measure during the evaluation process. Based on the presented characterisation, an analysis of the impact of selecting different error measures on the evaluation of disparity maps is conducted. The evaluation results showed a lack of consistency among the results obtained with different error measures, which affects the interpretation of the accuracy of stereo correspondence algorithms.
On the Impact of the Error Measure Selection in Evaluating Disparity Maps
1. On the Impact of the Error Measure Selection in Evaluating Disparity Maps
Ivan Cabezas, Victor Padilla, Maria Trujillo and Margaret Florian
ivan.cabezas@correounivalle.edu.co
June 27th, 2012
World Automation Congress, ISIAC, Puerto Vallarta - Mexico
2. Multimedia and Vision Laboratory
MMV is a research group at the Universidad del Valle in Cali, Colombia.
[Diagram: the camera system maps the 3D world, through its optics, onto 2D images (the direct problem); recovering the 3D world from 2D images is the inverse problem. Team: Ivan, Victor, Maria, Margaret.]
Multimedia and Vision Laboratory Research: http://mmv-lab.univalle.edu.co
3. Content
Stereo Vision
Application Domains
The Impact of Inaccurate Disparity Estimation
Quantitative Evaluation
Commonly Used Evaluation Measures
Error Measure Function
Error Measures Purpose and Meaning
Research Problem
Comparative Performance Scenario
Middlebury's Evaluation Model
A* Evaluation Model
Research Questions
Algorithm to Measure the Consistency
Consistency According to Evaluation Models
Conclusions
4. Stereo Vision
The stereo vision problem is to recover the 3D structure of a scene
[Diagrams: a correspondence algorithm takes the stereo images (Left, Right) and produces a disparity map; a reconstruction algorithm turns the disparity map into a 3D model. The epipolar-geometry sketch shows a scene point P projected at pl and pr on the image planes πl and πr of cameras with centres Cl and Cr, focal length f, and baseline B; each point p1, p2, ..., pn is assigned a disparity value d between 0 and dmax.]
Yang Q. et al., Stereo Matching with Colour-Weighted Correlation, Hierarchical Belief Propagation, and Occlusion Handling, IEEE PAMI 2009
Scharstein D., and Szeliski R., High-accuracy Stereo Depth Maps using Structured Light, CVPR 2003
5. Application Domains
3D reconstruction has multiple application domains
Whitehorn M., Vincent T., Debrunner C. and Steele J., Stereo Vision in LHD Automation, IEEE Trans. on Industry Applications, 2003
Van der Mark W., and Gavrila D., Real-Time Dense Stereo for Intelligent Vehicles, IEEE Trans. On Intelligent Transportation Systems, 2006
Point Grey Research Inc., www.ptgrey.com
6. The Impact of Inaccurate Disparity Estimation
Disparity is the distance between corresponding points
[Diagrams, two panels: "Accurate Disparity Estimation" and "Inaccurate Disparity Estimation". In each, a scene point P projects onto the image planes πl and πr at pl and pr (cameras with centres Cl and Cr, focal length f, baseline B). When the match is inaccurate (pr’ instead of pr), triangulation recovers the wrong point P’ at depth Z’ instead of the true depth Z.]
Trucco, E. and Verri A., Introductory Techniques for 3D Computer Vision, Prentice Hall 1998
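To make the depth-disparity relation concrete, here is a minimal sketch (Python; the camera parameters are hypothetical values for illustration, not from the slides) of the triangulation formula Z = f·B/d for a rectified pair, showing how a one-pixel disparity error shifts the recovered depth:

```python
# Minimal sketch of depth-from-disparity triangulation on a rectified
# stereo pair: Z = f * B / d. Camera parameters below are hypothetical.

def depth_from_disparity(f_px: float, baseline_m: float, d_px: float) -> float:
    """Triangulated depth (metres) from focal length (pixels),
    baseline (metres), and disparity (pixels)."""
    return f_px * baseline_m / d_px

f, B = 700.0, 0.16                             # hypothetical camera setup
Z_true = depth_from_disparity(f, B, d_px=8.0)  # accurate match: Z = 14.0 m
Z_bad = depth_from_disparity(f, B, d_px=7.0)   # 1 px disparity error: Z' = 16.0 m
print(Z_true, Z_bad)
```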
7. Quantitative Evaluation
The use of a methodology makes it possible to:
Assess specific components and procedures
Tune algorithms' parameters
Measure the progress in the field
Szeliski, R., Prediction Error as a Quality Metric for Motion and Stereo, ICCV 2000
Kostliva, J., Cech, J., and Sara, R., Feasibility Boundary in Dense and Semi-Dense Stereo Matching, CVPR 2007
Tombari, F., Mattoccia, S., and Di Stefano, L., Stereo for Robots: Quantitative Evaluation of Efficient and Low-memory Dense Stereo Algorithms, ICARCV 2010
Cabezas, I. and Trujillo M., A Non-Linear Quantitative Evaluation Approach for Disparity Estimation, VISAPP 2011
Cabezas, I. Trujillo M., and Florian M., An Evaluation Methodology for Stereo Correspondence Algorithms, VISAPP 2012
8. Commonly Used Evaluation Measures
There are different evaluation measures, such as the Mean Absolute Error (MAE), the Mean Squared Error (MSE), the Mean Relative Error (MRE), the percentage of Bad Matched Pixels (BMP), and the Sigma Z Error (SZE).
Cabezas, I., Padilla, V., and Trujillo M., A Measure for Accuracy Disparity Maps Evaluation, CIARP 2011
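As a reference for how such measures operate on a disparity map, here is a hedged sketch (Python with NumPy; function names are mine). MAE, MSE, MRE, and BMP follow their standard definitions; the SZE form below, accumulating absolute differences between depths reconstructed from ground-truth and estimated disparities, is an assumption based on the measure's name, not a verbatim transcription of the CIARP 2011 paper:

```python
import numpy as np

def mae(d_est, d_gt):
    """Mean Absolute Error between estimated and ground-truth disparities."""
    return float(np.mean(np.abs(d_est - d_gt)))

def mse(d_est, d_gt):
    """Mean Squared Error."""
    return float(np.mean((d_est - d_gt) ** 2))

def mre(d_est, d_gt):
    """Mean Relative Error; assumes strictly positive ground-truth disparities."""
    return float(np.mean(np.abs(d_est - d_gt) / d_gt))

def bmp(d_est, d_gt, delta=1.0):
    """Percentage of Bad Matched Pixels: disparity error above threshold delta."""
    return 100.0 * float(np.mean(np.abs(d_est - d_gt) > delta))

def sze(d_est, d_gt, f=1.0, b=1.0):
    """Assumed form of the Sigma Z Error: accumulated absolute difference between
    depths (Z = f * b / d) reconstructed from ground-truth and estimated disparities."""
    return float(np.sum(np.abs(f * b / d_gt - f * b / d_est)))
```

Note that BMP is the only one of these that requires an error threshold (delta), which is the shortcoming the abstract points out.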
9. Error Measure Function
[Images: a test-bed image with its estimated and ground-truth disparity maps, and the nonocc, all, and disc error-criteria masks.]

Evaluation measures computed under each error criterion:

Measure   nonocc   all      disc
MAE       0.41     1.48     0.70
MSE       1.48     33.97    4.25
MRE       0.01     0.03     0.02
BMP       2.90     8.78     7.79
SZE       71.39    341.55   37.86
Yang Q. et al., Stereo Matching with Colour-Weighted Correlation, Hierarchical Belief Propagation, and Occlusion Handling, IEEE PAMI 2009
Scharstein D., and Szeliski R., High-accuracy Stereo Depth Maps using Structured Light, CVPR 2003
10. Error Measures Purpose and Meaning
In practice, different error measures are used for the same purpose: finding a distance between estimated and ground-truth disparity data.
However, they have different meanings, as well as different properties.
11. Research Problem
The use of different error measures may produce contradictory error scores.
[Images: disparity maps produced by the ADCensus and RDP algorithms on the Teddy and Cones stereo pairs.]
Scharstein, D. and Szeliski, R., A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, IJCV 2002
Scharstein, D. and Szeliski, R., http://vision.middlebury.edu/stereo/eval/, 2012
12. Comparative Performance Scenario
Four stereo image pairs: Tsukuba, Venus, Teddy, Cones
Three error criteria: nonocc, all, disc
112 Stereo Correspondence Algorithms
Two evaluation models: Middlebury and A*
k: a threshold for determining the top-performing algorithms in the Middlebury's evaluation model
13. Middlebury’s Methodology Evaluation Model
… Compute Error Measures → Apply Evaluation Model

Error scores and per-criterion ranks (ranks in parentheses):

Algorithm       nonocc     all        disc
ObjectStereo    2.20 (1)   6.99 (2)   6.36 (1)
GC+SegmBorder   4.99 (5)   5.78 (1)   8.66 (5)
PUTv3           2.40 (2)   9.11 (5)   6.56 (2)
PatchMatch      2.47 (3)   7.80 (3)   7.11 (3)
ImproveSubPix   2.96 (4)   8.22 (4)   8.55 (4)

The Middlebury's Evaluation Model averages the per-criterion ranks to obtain the final ranking:

Algorithm       Average Rank   Final Rank
ObjectStereo    1.33           1
PatchMatch      3.00           2
PUTv3           3.33           3
GC+SegmBorder   3.66           4
ImproveSubPix   4.00           5
Scharstein, D. and Szeliski, R., http://vision.middlebury.edu/stereo/eval/, 2012
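A minimal sketch of this rank-averaging step (Python; variable names are mine). The averages computed directly from the per-criterion ranks of this five-algorithm excerpt may differ slightly from the slide's rounded values:

```python
# Sketch of the Middlebury-style evaluation model: rank the algorithms
# under each error criterion, then order them by their average rank.
scores = {  # error scores per criterion, taken from the table above
    "ObjectStereo":  {"nonocc": 2.20, "all": 6.99, "disc": 6.36},
    "GC+SegmBorder": {"nonocc": 4.99, "all": 5.78, "disc": 8.66},
    "PUTv3":         {"nonocc": 2.40, "all": 9.11, "disc": 6.56},
    "PatchMatch":    {"nonocc": 2.47, "all": 7.80, "disc": 7.11},
    "ImproveSubPix": {"nonocc": 2.96, "all": 8.22, "disc": 8.55},
}

criteria = ["nonocc", "all", "disc"]
ranks = {name: [] for name in scores}
for c in criteria:
    ordered = sorted(scores, key=lambda name: scores[name][c])  # lower error is better
    for position, name in enumerate(ordered, start=1):
        ranks[name].append(position)

avg_rank = {name: sum(r) / len(r) for name, r in ranks.items()}
for name in sorted(avg_rank, key=avg_rank.get):                 # final ranking
    print(f"{name}: average rank {avg_rank[name]:.2f}")
```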
14. A* Evaluation Model
The A* evaluation model performs a partitioning of the stereo algorithms under evaluation, based on the Pareto Dominance relation.

… Compute Error Measures → Apply Evaluation Model

Algorithm       nonocc   all    disc   Set
ObjectStereo    2.20     6.99   6.36   A*
GC+SegmBorder   4.99     5.78   8.66   A*
PUTv3           2.40     9.11   6.56   A’
PatchMatch      2.47     7.80   7.11   A’
ImproveSubPix   2.96     8.22   8.55   A’

A* (non-dominated algorithms): ObjectStereo, GC+SegmBorder
A’ (dominated algorithms): PUTv3, PatchMatch, ImproveSubPix
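A minimal sketch of the partitioning (Python; function names are mine). An algorithm is Pareto-dominated if another algorithm scores no worse on every criterion and strictly better on at least one; the non-dominated algorithms form the A* set:

```python
def dominates(a, b):
    """True if score vector a Pareto-dominates b (lower error is better):
    no worse on every criterion and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def a_star_partition(scores):
    """Split the algorithms into the non-dominated set A* and the rest, A'."""
    a_star, a_prime = [], []
    for name, vec in scores.items():
        if any(dominates(other, vec) for o, other in scores.items() if o != name):
            a_prime.append(name)
        else:
            a_star.append(name)
    return a_star, a_prime

scores = {  # (nonocc, all, disc) error scores from the table above
    "ObjectStereo":  (2.20, 6.99, 6.36),
    "GC+SegmBorder": (4.99, 5.78, 8.66),
    "PUTv3":         (2.40, 9.11, 6.56),
    "PatchMatch":    (2.47, 7.80, 7.11),
    "ImproveSubPix": (2.96, 8.22, 8.55),
}
a_star, a_prime = a_star_partition(scores)
print(a_star)   # ['ObjectStereo', 'GC+SegmBorder']
print(a_prime)  # ['PUTv3', 'PatchMatch', 'ImproveSubPix']
```

On this excerpt the sketch reproduces the slide's partition: ObjectStereo is never beaten on all three criteria at once, and GC+SegmBorder has the best "all" score, so neither is dominated.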
15. Research Questions
What is the impact of using one error measure instead of another?
Different evaluation results are obtained using different evaluation measures.
[Charts: evaluation results under the Middlebury's model and under the A* model.]
Scharstein, D. and Szeliski, R., A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, IJCV 2002
Scharstein, D. and Szeliski, R., http://vision.middlebury.edu/stereo/eval/, 2012
16. Research Questions (ii)
How should an error measure be chosen?
A characterisation of error measures may serve as selection criteria.
An error measure is:
AUTOMATIC: it is computed without human intervention.
RELIABLE: it operates without being influenced by external factors, and in a deterministic way.
MEANINGFUL: it is intended for a particular purpose, has a concise interpretation, and does not lead to ambiguous results.
UNBIASED: it is capable of accomplishing the measurements for which it was conceived, and its use allows performing impartial comparisons.
CONSISTENT: the scores it produces should be compatible with the scores produced by another error measure with a common purpose.
17. Algorithm to Measure the Consistency
Consistency is measured by determining the percentage of agreement in the obtained results when the error measure is varied.
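The algorithm itself appears on the slide as an image. As one plausible illustration (Python; the pairwise-agreement definition is my assumption, not necessarily the paper's exact procedure), the consistency between two error measures can be computed as the percentage of algorithm pairs that both measures order the same way:

```python
from itertools import combinations

def consistency(scores_m1, scores_m2):
    """Percentage of algorithm pairs that two error measures order the same way.
    scores_m1 and scores_m2 map algorithm names to error scores; ties count
    as disagreements. This pairwise-agreement form is an illustrative assumption."""
    agreements, total = 0, 0
    for a, b in combinations(sorted(scores_m1), 2):
        total += 1
        if (scores_m1[a] - scores_m1[b]) * (scores_m2[a] - scores_m2[b]) > 0:
            agreements += 1
    return 100.0 * agreements / total if total else 0.0

# Hypothetical MAE and BMP scores for three algorithms:
mae_scores = {"alg1": 0.41, "alg2": 0.55, "alg3": 0.38}
bmp_scores = {"alg1": 2.90, "alg2": 4.10, "alg3": 3.40}
print(consistency(mae_scores, bmp_scores))  # 66.7: two of the three pairs agree
```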
18. Consistency According to Evaluation Models
The MRE, followed by the MSE, showed the highest percentages of consistency using the Middlebury's model.
The SZE, followed by the MRE, showed the highest percentages of consistency using the A* model.
[Charts: consistency percentages per error measure under the Middlebury's model and under the A* model.]
19. Conclusions
Using the Middlebury's evaluation model, the MRE and the MSE showed a high consistency.
Using the A* evaluation model, the SZE and the MRE showed a high consistency.
The BMP showed a low consistency in both evaluation models.
A characterisation of error measures was presented in order to support the selection of an error measure.
It includes the following attributes: automatic, reliable, meaningful, unbiased, and consistent.
The experimental evaluation was focused on measuring consistency.
The selection of an error measure is not a trivial issue, since it impacts the results obtained during a disparity map evaluation process.
20. On the Impact of the Error Measure Selection in Evaluating Disparity Maps
Ivan Cabezas, Victor Padilla, Maria Trujillo and Margaret Florian
ivan.cabezas@correounivalle.edu.co
June 27th, 2012
World Automation Congress, ISIAC, Puerto Vallarta - Mexico