Short Presentation of [1].
[1] C. Panagiotakis and A. Argyros, Parameter-free Modelling of 2D Shapes with Ellipses, Pattern Recognition, 2015.
For more details, please visit https://sites.google.com/site/costaspanagiotakis/research/EFA
The Hough transform is a feature extraction technique used in image analysis and computer vision to detect shapes and patterns, like lines, circles, and curves. It works by having each edge point in an image vote for a set of possible shape parameters, which are then compiled into a histogram in a parameter space. Local maxima in this space correspond to the most likely shapes in the image. Specifically for line detection, the Hough transform represents a line using its slope (a) and intercept (b), with each edge point voting for all lines passing through it in the parameter space. The algorithm then finds local maxima in this space to detect the lines present in the image. The Hough transform is robust to noise and to gaps in shape boundaries.
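As a concrete illustration of the voting scheme just described, here is a minimal Python sketch using the slope-intercept parameterization from the summary. The bin ranges and sample points are assumptions, and production implementations usually prefer the (rho, theta) form, which also handles vertical lines.

```python
# Minimal sketch of Hough-style line detection in the (a, b) = (slope,
# intercept) parameterization; bin ranges below are illustrative choices.
import numpy as np

def hough_lines(edge_points, a_bins, b_bins):
    """Each edge point votes for every line b = y - a*x passing through it."""
    acc = np.zeros((len(a_bins), len(b_bins)), dtype=int)
    for x, y in edge_points:
        for i, a in enumerate(a_bins):
            b = y - a * x                      # intercept of the line with slope a
            j = np.argmin(np.abs(b_bins - b))  # nearest intercept bin
            acc[i, j] += 1
    return acc                                 # local maxima ~ detected lines

a_bins = np.linspace(-5, 5, 101)
b_bins = np.linspace(-10, 10, 201)
points = [(x, 2 * x + 1) for x in range(20)]   # noise-free points on y = 2x + 1
acc = hough_lines(points, a_bins, b_bins)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(a_bins[i], b_bins[j])                    # approximately 2.0 and 1.0
```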
The document describes the Marching Cubes algorithm, which was developed in 1987 to construct 3D models from medical imaging data like CT scans. It works by dividing the volume into cubes and using the pixel values at the cube vertices to determine triangles that approximate the surface. There are 256 possible cases but they can be reduced to 14 basic patterns. The algorithm calculates surface normals to improve image quality and has been used to generate 3D models from various medical imaging modalities.
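Since marching cubes is available in standard libraries, a quick way to experiment with the behaviour described above is scikit-image's implementation (a hedged sketch; the sphere volume is a synthetic stand-in for CT data):

```python
# Extract an isosurface with scikit-image's marching cubes; the scalar
# field below is a synthetic sphere, not medical imagery.
import numpy as np
from skimage import measure

x, y, z = np.mgrid[-16:16, -16:16, -16:16]
volume = np.sqrt(x**2 + y**2 + z**2)           # distance from the center

# Triangulate the surface where volume == 10; per-vertex normals are
# returned as well, which is what improves the rendered image quality.
verts, faces, normals, values = measure.marching_cubes(volume, level=10.0)
print(verts.shape, faces.shape)                # (N, 3) vertices, (M, 3) triangles
```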
This document presents a self-dependent 3D face rotational alignment algorithm that uses the nose region. It outlines the algorithm steps, which include preprocessing and nose region segmentation, filling the nose region, and minimizing an energy function to find the optimal rotation. The algorithm capitalizes on the nose region's consistency, good localization, and convex features. It is shown to outperform brute force ICP alignment in terms of speed and consistency across different expressions. Future work includes using the alignment for face recognition and improving nose segmentation.
This project developed algorithms to convert between square and hexagonal identifications of surfaces. Python was used to program the data transformations and transfer of curves between representations. The algorithms compare vertex values and track boundaries to determine properties like genus from the Euler characteristic. Examples demonstrate converting a square model into a hexagonal one and vice versa, and transferring curves between representations remains a focus of further work.
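For reference, the genus computation mentioned above follows directly from the Euler characteristic; a tiny sketch with assumed vertex/edge/face counts (not the project's data):

```python
# For a closed orientable surface, chi = V - E + F = 2 - 2g, so the genus
# can be read off the counts produced while tracking boundaries.
def genus(vertices, edges, faces):
    chi = vertices - edges + faces
    assert chi % 2 == 0, "chi must be even for a closed orientable surface"
    return (2 - chi) // 2

print(genus(8, 12, 6))    # cube surface: chi = 2, genus 0 (a sphere)
print(genus(16, 32, 16))  # an identification with chi = 0: genus 1 (a torus)
```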
The document is an abstract for a thesis that aims to estimate parameters for an ARIMA model of Indonesia's composite stock exchange index (CSEI) using bootstrap methods. It identifies an ARIMA (2,1,0) model for CSEI data from January to June 2011. Parameter estimates were obtained using ordinary least squares and maximum likelihood estimation in Minitab 15. Bootstrap methods reduced the standard error of the OLS model by 39.91% but did not significantly affect the standard error of the MLE model. The number of replications found to minimize standard error was 1000.
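A rough sketch of the residual-bootstrap mechanics for an ARIMA(2,1,0) fit, using statsmodels in place of Minitab; the series is synthetic and the replication count is kept small here (the thesis found 1000 replications to minimize the standard error):

```python
# Residual bootstrap of an ARIMA(2,1,0): refit the model on series rebuilt
# from resampled residuals, then read off the spread of the AR estimates.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))            # synthetic I(1) series

fit = ARIMA(y, order=(2, 1, 0)).fit()
ar1, ar2 = fit.params[0], fit.params[1]        # AR coefficients
resid = fit.resid[3:]                          # drop start-up residuals
d = np.diff(y)

boot = []
for _ in range(100):                           # small B; the thesis used up to 1000
    e = rng.choice(resid, size=len(d), replace=True)
    db = np.empty(len(d))
    db[:2] = d[:2]
    for t in range(2, len(d)):                 # AR(2) recursion on the differences
        db[t] = ar1 * db[t - 1] + ar2 * db[t - 2] + e[t]
    yb = np.concatenate([y[:1], y[0] + np.cumsum(db)])
    boot.append(ARIMA(yb, order=(2, 1, 0)).fit().params[:2])

print("bootstrap SEs of the AR terms:", np.std(boot, axis=0))
```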
The paper compares aspect maps created from the same elevation data at different resolutions (30m and 30,720m) using two different GIS software programs, Idrisi and ArcGIS. It finds that at the higher 30m resolution, the mean difference between the aspect maps (MDA) is 25 degrees, but at the lower 30,720m resolution the MDA decreases to 3 degrees. Therefore, the aspect maps and resulting flow models produced by the two programs will show a larger difference when using high resolution data compared to low resolution data.
This document proposes a method for segmenting breast tumors in ultrasound images based on neutrosophic similarity score and level set (NSSLS). An ultrasound image is represented in the neutrosophic set domain and a neutrosophic similarity score is defined to measure the belongingness to the true tumor. A level set method is then used to segment the tumor from the background using the similarity score values. The proposed method maps pixels into the neutrosophic set domain based on intensity and gradient values to define membership values. Additional criteria like local mean intensity and local homogeneity are also used. The similarity score is calculated between pixels and an ideal object to identify the degree of belongingness under different conditions. Finally, the level set evolves on the similarity score map to extract the tumor contour.
Matching algorithm performance analysis for autocalibration method of stereo ... (TELKOMNIKA JOURNAL)
Stereo vision is one of the interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, from which depth is estimated. Camera calibration is the most important step in stereo vision: calibration produces the intrinsic parameters of each camera, which in turn yield a better disparity map. In general, the calibration process is done manually using a chessboard pattern, but this is an exhausting task. Self-calibration is an important ability required to overcome this problem, and it requires a robust matching algorithm to find key features between images for reference. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process: SIFT, SURF, and ORB. The results show that SIFT performs better than the other methods.
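For readers who want to reproduce this kind of comparison, a minimal OpenCV sketch follows. The image paths are placeholders, and SURF is omitted because it lives in the opencv-contrib xfeatures2d module:

```python
# Count detected keypoints and cross-checked matches for SIFT and ORB.
import cv2

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

for name, det, norm in [("SIFT", cv2.SIFT_create(), cv2.NORM_L2),
                        ("ORB",  cv2.ORB_create(),  cv2.NORM_HAMMING)]:
    k1, d1 = det.detectAndCompute(img1, None)
    k2, d2 = det.detectAndCompute(img2, None)
    matches = sorted(cv2.BFMatcher(norm, crossCheck=True).match(d1, d2),
                     key=lambda m: m.distance)
    print(name, "keypoints:", len(k1), len(k2), "matches:", len(matches))
```

The matched keypoints would then seed the estimation of the camera parameters, e.g., through the fundamental matrix.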
Efficient Top-N Recommendation for Very Large Scale Binary Rated Datasets (Fabio Aiolli)
The document describes Fabio Aiolli's work on developing an efficient recommendation system for very large datasets with implicit feedback. It discusses using memory-based collaborative filtering with an asymmetric similarity measure and scoring function. It also covers techniques like locality tuning, calibration, and ranking aggregation. These methods achieved the top score on the MSD Challenge competition, outperforming matrix factorization and other approaches on the large music recommendation task.
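The core similarity can be illustrated in a few lines. This is my reading of the asymmetric set-overlap idea rather than a verbatim reproduction of the talk's formula; with alpha = 0.5 it reduces to the ordinary cosine on binary data:

```python
# Asymmetric similarity between two items, each represented by the set of
# users who consumed it; alpha skews the normalization toward one item.
def asym_sim(users_i, users_j, alpha=0.15):
    inter = len(users_i & users_j)
    if inter == 0:
        return 0.0
    return inter / (len(users_i) ** alpha * len(users_j) ** (1 - alpha))

print(asym_sim({1, 2, 3, 4}, {3, 4, 5}))   # alpha and the sets are made up
```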
This document summarizes a study on automatic image matching using area-based correlation. The study developed a procedure to automatically determine conjugate points in epipolar image pairs using template matching. Different template sizes and correlation thresholds were tested. While the method worked well in textured areas, it struggled in homogeneous and shadowed regions with constant gray values. Increasing the template size improved matching but also introduced more geometric distortion. The results provided correlated points and a parallax map but were unable to generate a surface model.
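A hedged sketch of the core matching step, using OpenCV's matchTemplate with normalized cross-correlation along an epipolar row; the window size, coordinates, and the 0.8 threshold are assumptions, not the study's values:

```python
# Area-based matching of one left-image point against the same row band in
# the right image; low correlation flags homogeneous or shadowed regions.
import cv2
import numpy as np

left = cv2.imread("left_epipolar.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_epipolar.png", cv2.IMREAD_GRAYSCALE)

half = 10                                  # 21x21 template
y, x = 120, 200                            # candidate point in the left image
tmpl = left[y - half:y + half + 1, x - half:x + half + 1]

band = right[y - half:y + half + 1, :]     # epipolar constraint: same row band
scores = cv2.matchTemplate(band, tmpl, cv2.TM_CCOEFF_NORMED)
best = int(np.argmax(scores)) + half       # column of the best window center
if scores.max() > 0.8:                     # correlation threshold
    print("conjugate point at column", best, "parallax:", x - best)
else:
    print("no reliable match (constant gray values?)")
```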
ILRIS vs LYNX highway surveying and data post-processing - Munich 2008 (Michael Xinogalos)
This document compares highway surveying projects using Optech ILRIS and LYNX laser scanning systems. The ILRIS project involved static scanning over 80 km which took 120 days, while the LYNX project used mobile scanning to cover 240 km in just 1 day. Both systems were able to meet accuracy requirements of 2-3 cm horizontally and 1-2 cm vertically. The LYNX data provided better quality, uniform point clouds while ILRIS allowed for higher resolution of close objects when the scanner was lifted. Overall, the LYNX system provided much higher productivity for highway surveying projects due to its faster, mobile data collection.
Markerless registration for scans of free-form objects (Artemis Valanis)
This document proposes a markerless registration method for registering partial scans of free-form objects. It describes a constrained acquisition process where scans are taken with a small amount of overlap by rotating the scan head vertically or horizontally. The method samples the unknown rotation angle space to approximate the relative transformation between scans. It evaluates the median distance between overlapping points to find the best alignment. The algorithm is validated on scan data and shown to initialize ICP registration accurately without markers or features.
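A minimal sketch of the angle-sampling idea under the simplifying assumption of a single known rotation axis; the point sets are synthetic, and scipy's cKDTree supplies the nearest-neighbour queries:

```python
# Score each candidate angle by the median nearest-neighbour distance
# between the rotated source scan and the destination scan.
import numpy as np
from scipy.spatial import cKDTree

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def score(angle, src, dst):
    d, _ = cKDTree(dst).query(src @ rot_z(angle).T)
    return np.median(d)                    # median is robust to partial overlap

rng = np.random.default_rng(1)
dst = rng.normal(size=(500, 3))            # stand-in overlap region of scan A
src = dst @ rot_z(-0.30).T                 # scan B: the same points, rotated

angles = np.linspace(-0.5, 0.5, 101)
best = angles[np.argmin([score(a, src, dst) for a in angles])]
print("estimated angle:", best)            # about 0.30; then refine with ICP
```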
Structural survey, inspection and evaluation using LS technology (Michael Xinogalos)
A presentation by Michael Xinogalos, Surveying Engineer NTUA 1988, Technical Director of ASTROLABE ENGINEERING at Innovative LIDAR Solutions Conference - Toronto 2009 (www.astrolabe.gr - www.laseraction.eu)
The document discusses using laser scanning technology to model highways, roads, and pavements for various applications. It describes three projects where laser scanning was used: 1) surveying an existing highway in Greece for reconstruction, scanning from a lifted device to capture horizontal features; 2) evaluating tarmac deformations at an airport, requiring high accuracy and resolution; 3) evaluating safety improvements at an intersection through 3D modeling and simulations. While laser scanning provides high accuracy and detail, it has disadvantages for long linear objects like highways due to lower productivity and need for significant post-processing. Mobile scanning systems may provide more efficient alternatives.
Design and Simulation of a Modified Architecture of Carry Save Adder (CSCJournals)
This document summarizes a research paper that presents a modified architecture for a carry-save adder. The architecture performs binary addition using a series of XOR, AND, and shift-left operations. A behavioral model was developed in MATLAB to analyze all possible addition combinations for operands up to 15 bits. The model found that the number of shift operations varies from 0 to the number of bits. A mathematical model was derived to predict the average number of shifts for standard operand sizes like 32, 64, or 128 bits. 4-bit synchronous and asynchronous prototypes were designed in Quartus II and simulated to validate the modified adder architecture.
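The XOR/AND/shift-left scheme the paper analyzes can be illustrated in a few lines; the loop below also counts the carry-propagation iterations, which is the quantity the MATLAB model tabulated (a sketch of the principle, not the Quartus design):

```python
# Add two integers using only XOR (partial sum), AND and shift-left (carry):
# the loop repeats until no carries remain.
def shift_add(a, b):
    shifts = 0
    while b:
        a, b = a ^ b, (a & b) << 1     # partial sum, carry word
        shifts += 1
    return a, shifts

print(shift_add(11, 5))                # (16, 5): five carry iterations here
```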
Surveillance System (Minimum Vertex Cover Problem) (Saksham Saxena)
This document describes a surveillance system application that finds the minimum number of CCTV cameras needed to monitor a building or area. The application takes input of a building's layout and connectivity between locations. It uses a modified Alom algorithm to find the minimum vertex cover of this graph, representing the optimal camera placements. The output is a table showing camera locations and numbers, along with the total installation cost. The application aims to accurately and cost-effectively determine surveillance needs compared to human estimates.
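The "modified Alom algorithm" is not described in enough detail here to reproduce, so the sketch below substitutes the classic greedy 2-approximation for minimum vertex cover, on an assumed toy corridor graph with a made-up per-camera cost:

```python
# Greedy 2-approximation: repeatedly take both endpoints of an uncovered
# edge; the result is at most twice the optimal number of cameras.
def vertex_cover(edges):
    cover, uncovered = set(), set(edges)
    while uncovered:
        u, v = next(iter(uncovered))
        cover |= {u, v}                          # place cameras at both ends
        uncovered = {e for e in uncovered if u not in e and v not in e}
    return cover

corridors = [("lobby", "hall"), ("hall", "lab"),
             ("hall", "office"), ("lab", "exit")]
cams = vertex_cover(corridors)
print(cams, "cost:", 150 * len(cams))            # assumed cost per camera
```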
1) The document proposes using an embedded stereo camera and fusing optical flow and SIFT feature matching algorithms to estimate the localization of a micro aerial vehicle (MAV) in GPS-denied environments.
2) An Extended Kalman Filter is used to estimate the MAV's translational velocity and altitude from optical flow measurements separated into rotational and translational components using IMU data.
3) Initial experiments fusing optical flow and SIFT matching for altitude estimation showed promising results compared to ground truth, with room for improvement through onboard processing and successive frame SIFT matching for horizontal position estimation.
The document presents branding concepts, including motif images, word marks, and power driver applications. These elements are repeated across three sections: diagnosis, concept, and naming.
Change detection in Hyperspectral data.ppt (grssieee)
The document discusses adapting the IR-MAD change detection method for use with hyperspectral data. It proposes using principal component analysis (PCA) for feature reduction before applying IR-MAD to address the high dimensionality of hyperspectral data. An initial change mask is also used to eliminate strong changes and isolate no-change pixels for analysis. Experiments on Landsat and hyperspectral data demonstrate the effectiveness of the approaches. Current work involves using Markov random fields to incorporate spatial information and generate a final change classification map.
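A minimal sketch of the feature-reduction step described above: project both acquisitions onto a few principal components before change detection. The data shapes are assumptions, and the IR-MAD step itself is only indicated by a placeholder comment:

```python
# PCA reduction of two hyperspectral acquisitions prior to IR-MAD.
import numpy as np
from sklearn.decomposition import PCA

h, w, bands = 100, 100, 224                 # assumed cube size
t1 = np.random.rand(h * w, bands)           # time-1 pixels (stand-in data)
t2 = np.random.rand(h * w, bands)           # time-2 pixels

pca = PCA(n_components=10).fit(np.vstack([t1, t2]))
f1, f2 = pca.transform(t1), pca.transform(t2)
# ... run IR-MAD on (f1, f2), then threshold the MAD variates for change ...
print(f1.shape, "explained variance:", pca.explained_variance_ratio_.sum())
```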
Can graph convolution network learn spatial relations? (azellecourtial)
This document discusses using graph convolutional networks to learn spatial relations from geographic data represented as graphs. It presents experiments on (1) detecting alignments between linear geographic features and (2) selecting roads from a generalized network that should be kept or erased at a target map scale. For alignment detection, proximity criteria were used to modify the graph and convolutional networks with attribute features predicted alignments with 72% accuracy. For road selection, features like length and importance were used and the method classified 72% of sections correctly with 73% precision and 83% recall. The document concludes graph convolutional networks show promise for learning spatial relations from graph-represented geographic data.
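For orientation, a single graph-convolution layer of the kind such experiments build on can be written directly in numpy; the road-section adjacency, features, and weights below are toy assumptions:

```python
# One GCN layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W).
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(len(A))               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3 road sections in a line
H = np.array([[120.0, 1], [45.0, 3], [300.0, 2]])       # length, importance class
W = np.random.default_rng(0).normal(size=(2, 4))        # learnable weights
print(gcn_layer(A, H, W).shape)                          # (3, 4) node embeddings
```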
Algorithmic Techniques for Parametric Model Recovery (CurvSurf)
A complete description of algorithmic techniques for automatic feature extraction from point clouds. Orthogonal distance fitting, a form of maximum likelihood estimation, plays the main role; differential geometry determines the type of object surface.
Using Generic Image Processing Operations to Detect a Calibration Grid (Jan Wedekind)
Camera calibration is an important problem in 3D computer vision. The problem of determining the camera parameters has been studied extensively. However, the algorithms for determining the required correspondences are either semi-automatic (i.e., they require user interaction) or they involve custom algorithms that are difficult to implement.
We present a robust algorithm for detecting the corners of a calibration grid and assigning the correct correspondences for calibration. The solution is based on generic image processing operations so that it can be implemented quickly. The algorithm is limited to distortion-free cameras, but it could potentially be extended to deal with camera distortion as well. We also present a corner detector based on steerable filters, which is particularly suited to the problem of detecting the corners of a calibration grid.
See more at: http://figshare.com/articles/Using_Generic_Image_Processing_Operations_to_Detect_a_Calibration_Grid/696880
This document proposes a new method for corner detection in images using difference chain coding as a measure of curvature. The method involves extracting a one-pixel thick boundary from the image, chain encoding it to determine slope, smoothing the boundary to remove noise, and calculating difference codes to determine points of high curvature change, which indicate corners. Preliminary results show the method is simple, efficient, and performs comparably to standard corner detection techniques like Harris and Yung.
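A minimal sketch of the difference-chain-code measure on a toy square boundary; the boundary extraction and smoothing steps described above are presumed already done, and the turn threshold is an assumption:

```python
# Encode the boundary as 8-directional Freeman codes, then flag points
# where the cyclic code difference indicates a sharp change of slope.
def chain_codes(boundary):
    dirs = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    return [dirs[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(boundary, boundary[1:] + boundary[:1])]

def corners(boundary, thresh=2):
    c = chain_codes(boundary)
    diffs = [min((c[i] - c[i - 1]) % 8, (c[i - 1] - c[i]) % 8)
             for i in range(len(c))]
    return [boundary[i] for i, d in enumerate(diffs) if d >= thresh]

square = [(x, 0) for x in range(3)] + [(3, y) for y in range(3)] + \
         [(x, 3) for x in range(3, 0, -1)] + [(0, y) for y in range(3, 0, -1)]
print(corners(square))     # the four corner pixels of the square
```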
This paper explores the effectiveness of the recently developed surrogate modeling method, the Adaptive Hybrid Functions (AHF), through its application to complex engineered systems design. The AHF is a hybrid surrogate modeling method that seeks to exploit the advantages of each component surrogate. In this paper, the AHF integrates three component surrogate models: (i) the Radial Basis Functions (RBF), (ii) the Extended Radial Basis Functions (E-RBF), and (iii) the Kriging model, by characterizing and evaluating the local measure of accuracy of each model. The AHF is applied to model complex engineering systems and an economic system, namely: (i) wind farm design; (ii) product family design (for universal electric motors); (iii) three-pane window design; and (iv) onshore wind farm cost estimation. We use three differing sampling techniques to investigate their influence on the quality of the resulting surrogates. These sampling techniques are (i) Latin Hypercube Sampling (LHS), (ii) Sobol's quasirandom sequence, and (iii) Hammersley Sequence Sampling (HSS). Cross-validation is used to evaluate the accuracy of the resulting surrogate models. As expected, the accuracy of the surrogate model was found to improve with increase in the sample size. We also observed that the Sobol's and the LHS sampling techniques performed better in the case of high-dimensional problems, whereas the HSS sampling technique performed better in the case of low-dimensional problems. Overall, the AHF method was observed to provide acceptable-to-high accuracy in representing complex design systems.
∗ Doctoral Student, Multidisciplinary Design and Optimization Laboratory, Department of Mechanical, Aerospace and Nuclear Engineering, ASME student member.
† Distinguished Professor and Department Chair, Department of Mechanical and Aerospace Engineering, ASME Lifetime Fellow. Corresponding author.
‡ Associate Professor, Department of Mechanical, Aerospace and Nuclear Engineering, ASME member.
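To make the surrogate-plus-cross-validation loop concrete, here is a hedged sketch of a single component surrogate (RBF) with leave-one-out cross-validation on an assumed test function; the AHF's hybrid weighting of RBF, E-RBF, and Kriging is not reproduced:

```python
# LHS sample -> RBF surrogate -> leave-one-out cross-validation error.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

f = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2       # stand-in "expensive" model
X = qmc.LatinHypercube(d=2, seed=0).random(40) * 4 # 40 LHS points in [0, 4)^2
y = f(X)

errs = []
for i in range(len(X)):                            # leave-one-out CV
    keep = np.arange(len(X)) != i
    s = RBFInterpolator(X[keep], y[keep])
    errs.append(float(s(X[i:i + 1])[0]) - y[i])
print("LOO RMSE:", np.sqrt(np.mean(np.square(errs))))
```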
11. Design and modeling of tool trajectory in C0 continuity, www.iiste.org call for papers (Alexander Decker)
This document summarizes a research paper that proposes a new methodology for generating smooth tool trajectories for machining using spline techniques. The paper imports an IGES neutral file representing a freeform curve designed in CATIA. It then extracts the control points, knot sequences, and weight sequences from the file. These data are used as inputs to a MATLAB program that simulates the tool trajectory and generates cutter contact points. The results demonstrate a smooth tool path generated in C0 continuity as required for machining applications.
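The control-point, knot, and weight data extracted from such an IGES entity can be evaluated as a rational B-spline; below is a sketch with made-up values, using scipy in place of the MATLAB program (BSpline computes the numerator and denominator of the rational form):

```python
# Evaluate a cubic rational B-spline from control points, knots and weights.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                             # cubic
ctrl = np.array([[0, 0], [1, 2], [3, 3], [5, 1], [6, 0]], float)
w = np.array([1.0, 1.0, 2.0, 1.0, 1.0])           # weights (assumed values)
t = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1])       # clamped knot vector

num = BSpline(t, ctrl * w[:, None], k)            # sum_i N_i w_i P_i
den = BSpline(t, w, k)                            # sum_i N_i w_i
u = np.linspace(0, 1, 9)
pts = num(u) / den(u)[:, None]                    # points along the tool path
print(pts.round(3))                               # cutter-contact candidates
```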
Curves play a significant role in CAD modeling, especially for generating wireframe models. There are three main types of computer-aided design models: wireframe, surface, and solid. Wireframe models use only points and curves to represent an object in the simplest form. Curves can be classified as analytical, interpolated, or approximated. Analytical curves have fixed mathematical equations, interpolated curves pass through given data points in a fixed form, and approximated curves provide the most flexibility in complex shape creation. Parametric equations are preferred over non-parametric equations for representing curves in CAD programs. Common analytical curves include lines, circles, ellipses, parabolas, and hyperbolas.
This document summarizes a master's thesis on using ranging measurements to aid monocular and stereo visual simultaneous localization and mapping (SLAM). The thesis aims to reduce drift in estimated trajectories by integrating ranging measurements into bundle adjustment. For monocular SLAM, ranging is used to resolve scale ambiguity, while for stereo SLAM it is directly included in the bundle adjustment cost function. Experimental results demonstrate reduced reprojection error through bundle adjustment of 3168 points over 100 frames using a visual-inertial sensor.
GlobalLogic Machine Learning Webinar “Advanced Statistical Methods for Linear...” (GlobalLogic Ukraine)
On May 31, a webinar for ML specialists, “Advanced Statistical Methods for Linear Regression”, was given by speaker Vitalii Miroshnychenko! The talk is aimed at those who are well acquainted with the most common data models and approaches in machine learning and want to broaden their knowledge with other approaches.
The talk covered:
- A refresher: the linear regression model and parameter fitting;
- Training in batches (large sample volumes);
- Optimizing computations in a cascade of models;
- A mixture-of-linear-regressions model;
- Jackknife estimates of covariance matrices.
About the speaker:
Vitalii Miroshnychenko is a Senior ML Software Engineer at GlobalLogic. He has more than 6 years of experience, gained mostly on projects related to Telecom, Cyber security, and Retail. He is an active participant in Kaggle competitions and a PhD student at KNU.
Event details: https://bit.ly/3HkqhDB
Open ML positions at GlobalLogic: https://bit.ly/3MPC9yo
Coupling machine learning and synthetic image DIC-based techniques for the ca... (vformxsteels)
This document presents a methodology for calibrating elastoplastic constitutive models using machine learning and digital image correlation (DIC)-based techniques. It discusses inverse problem approaches including finite element model updating, the virtual fields method, and machine learning methods. A new approach is proposed that uses synthetic image generation and DIC "levelling" to create a large database of simulated experiments for training machine learning models. This approach is demonstrated on a biaxial cruciform test, where synthetic images are used to train an XGBoost model to identify material parameters. The model achieves high accuracy in predicting parameters from simulated experimental data.
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol... (ZaidHussein6)
This document summarizes a research paper that proposes a stereo vision algorithm called the Canny Block Matching Algorithm (CBMA) to estimate distance from stereo images. CBMA uses the Canny edge detector to extract edges from images and block matching with Sum of Absolute Difference (SAD) to determine disparity maps and reduce processing time. The algorithm was tested on stereo image pairs and achieved an error reduction of about 2% and processing time reduction compared to other methods. Interpolation techniques including bilinear, 1st order polynomial and 2nd order polynomial were also evaluated to enhance the output images and further reduce errors.
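A hedged sketch of the pipeline as summarized above: Canny edges gate which pixels get matched, and SAD block matching over a disparity range picks the best offset. The window size, disparity range, and file names are assumptions; depth then follows from focal length times baseline over disparity:

```python
# Edge-gated SAD block matching (slow reference loop, for illustration only).
import cv2
import numpy as np

L = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
R = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
edges = cv2.Canny(L, 50, 150)
L = L.astype(np.float32)

half, dmax = 4, 32                              # 9x9 blocks, 32-pixel search
disp = np.zeros(L.shape, np.float32)
for y in range(half, L.shape[0] - half):
    for x in range(half + dmax, L.shape[1] - half):
        if not edges[y, x]:
            continue                            # match only edge pixels
        block = L[y - half:y + half + 1, x - half:x + half + 1]
        sad = [np.abs(block - R[y - half:y + half + 1,
                                x - d - half:x - d + half + 1]).sum()
               for d in range(dmax)]
        disp[y, x] = np.argmin(sad)
print("edge pixels with disparity:", int((disp > 0).sum()))
```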
IMAGE SEARCH USING SIMILARITY MEASURES BASED ON CIRCULAR SECTORS (cscpconf)
With the growing number of stored image data, the image search and image similarity problems become more and more important. They can be addressed by Content-Based Image Retrieval systems. This paper deals with image search using similarity measures based on the circular sectors method, which is inspired by the functioning of the human eye. The main contribution of the paper is a modified method that increases accuracy by about 8% in comparison with the original approach. The proposed method uses the HSB colour model and the median function for feature extraction, whereas the original approach uses the RGB colour model with the mean function. The implemented method was validated on 10 image categories, where the overall average precision was 67%.
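A hedged sketch of the sector-based descriptor: split the image into circular sectors around its center and take the median of each channel in every sector. The sector count and the use of an HSV array as a stand-in for "HSB" are assumptions:

```python
# Median color features per circular sector around the image center.
import numpy as np

def sector_features(hsv, n_sectors=8):
    h, w, _ = hsv.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ang = np.arctan2(yy - h / 2, xx - w / 2)             # pixel angle
    idx = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    return np.array([np.median(hsv[idx == s], axis=0)    # median per sector
                     for s in range(n_sectors)])

img = np.random.rand(64, 64, 3)                          # stand-in HSV image
print(sector_features(img).shape)                        # (8, 3) descriptor
# Retrieval then ranks images by a distance between such descriptors.
```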
The document describes Bayesian model updating research using adaptive Bayesian filters and data-centric approaches. It outlines previous contributions, future research plans, and short-term objectives. The focus is on Bayesian updating with MCMC and TMCMC approaches to more accurately and efficiently update model parameters. Model reduction techniques are proposed in the frequency domain and time domain to address incomplete measured responses. Numerical studies on a shear building model demonstrate that the Bayesian updating algorithm can estimate parameters well when using 45 data sets and hyperparameters of 0.001, 0.001, with a maximum error of 2.5%.
This paper proposes a parameterized model order reduction technique for efficient global sensitivity analysis of coupled coils over a design space. It uses parameterized models of the electromagnetic matrices and Krylov matrices from the original and adjoint systems, computed using PEEC and derived via interpolation. Numerical results on a model of two coupled coils confirm the efficiency and accuracy of the proposed method compared to existing techniques for sensitivity analysis across the entire design space of interest.
A process to improve the accuracy of MkII FP to COSMIC - Charles Symons (IWSM Mensura)
This document presents a process to improve the accuracy of converting sizes measured using the MkII Functional Point (FP) method to sizes using the COSMIC method. Statistical analysis of 22 pairs of MkII and COSMIC size measurements showed good correlation but some outliers. A calculation method is proposed using "functional profiling" to group similar systems and determine conversion ratios based on each system's input, process, and output components. Applying this method improved the accuracy of predicted COSMIC sizes compared to a simple statistical conversion formula. The study provides new insights into the design assumptions of the COSMIC method.
Carved visual hulls for image-based modeling (aftab alam)
The document describes a method for 3D reconstruction from images called carved visual hulls. It involves three main steps: (1) identifying rims on the visual hull surface that touch the object, (2) globally optimizing the surface using graph cuts with photoconsistency and rim constraints, and (3) locally refining the surface while enforcing photoconsistency and geometric constraints. The method produces high-quality 3D models but cannot handle overly concave regions. Results on 7 datasets show promising geometric accuracy while balancing computational costs.
Similar to Parameter-free Modelling of 2D Shapes with Ellipses (20)
The debris of the ‘last major merger’ is dynamically young (Sérgio Sacani)
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
The cost of acquiring information by natural selection (Carl Bergstrom)
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf (Selcen Ozturkcan)
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita... (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial satellites
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
PPT on Alternate Wetting and Drying presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
Evidence of Jet Activity from the Secondary Black Hole in the OJ 287 Binary S... (Sérgio Sacani)
We report the study of a huge optical intraday flare on 2021 November 12 at 2 a.m. UT in the blazar OJ 287. In the binary black hole model, it is associated with an impact of the secondary black hole on the accretion disk of the primary. Our multifrequency observing campaign was set up to search for such a signature of the impact, based on a prediction made 8 yr earlier. The first I-band results of the flare have already been reported by Kishore et al. (2024). Here we combine these data with our monitoring in the R-band. There is a big change in the R–I spectral index by 1.0 ± 0.1 between the normal background and the flare, suggesting a new component of radiation. The polarization variation during the rise of the flare suggests the same. The limits on the source size place it most reasonably in the jet of the secondary BH. We then ask why we have not seen this phenomenon before. We show that OJ 287 was never before observed with sufficient sensitivity on the night when the flare should have happened according to the binary model. We also study the probability that this flare is just an oversized example of intraday variability using the Krakow data set of intense monitoring between 2015 and 2023. We find that the occurrence of a flare of this size and rapidity is unlikely. In machine-readable Tables 1 and 2, we give the full orbit-linked historical light curve of OJ 287 as well as the dense monitoring sample of Krakow.
Candidate young stellar objects in the S-cluster: Kinematic analysis of a sub... (Sérgio Sacani)
Context. The observation of several L-band emission sources in the S cluster has led to a rich discussion of their nature. However, a definitive answer to the classification of the dusty objects requires an explanation for the detection of compact Doppler-shifted Brγ emission. The ionized hydrogen in combination with the observation of mid-infrared L-band continuum emission suggests that most of these sources are embedded in a dusty envelope. These embedded sources are part of the S-cluster, and their relationship to the S-stars is still under debate. To date, the question of the origin of these two populations has been vague, although all explanations favor migration processes for the individual cluster members.
Aims. This work revisits the S-cluster and its dusty members orbiting the supermassive black hole Sgr A* on bound Keplerian orbits from a kinematic perspective. The aim is to explore the Keplerian parameters for patterns that might imply a nonrandom distribution of the sample. Additionally, various analytical aspects are considered to address the nature of the dusty sources.
Methods. Based on the photometric analysis, we estimated the individual H−K and K−L colors for the source sample and compared the results to known cluster members. The classification revealed a noticeable contrast between the S-stars and the dusty sources. To fit the flux-density distribution, we utilized the radiative transfer code HYPERION and implemented a young stellar object Class I model. We obtained the position angle from the Keplerian fit results; additionally, we analyzed the distribution of the inclinations and the longitudes of the ascending node.
Results. The colors of the dusty sources suggest a stellar nature consistent with the spectral energy distribution in the near- and mid-infrared domains. Furthermore, the evaporation timescales of dusty and gaseous clumps in the vicinity of Sgr A* are much shorter (about 2 yr) than the epochs covered by the observations (≈15 yr). In addition to the strong evidence for the stellar classification of the D-sources, we also find a clear disk-like pattern following the arrangements of S-stars proposed in the literature. Furthermore, we find a global intrinsic inclination for all dusty sources of 60 ± 20°, implying a common formation process.
Conclusions. The pattern of the dusty sources manifested in the distribution of the position angles, inclinations, and longitudes of the ascending node strongly suggests two different scenarios: the main-sequence stars and the dusty stellar S-cluster sources share a common formation history, or they migrated with a similar formation channel in the vicinity of Sgr A*. Alternatively, the gravitational influence of Sgr A* in combination with a massive perturber, such as a putative intermediate-mass black hole in the IRS 13 cluster, forces the dusty objects and S-stars to follow a particular orbital arrangement.
Key words: stars: black holes – stars: formation – Galaxy: center – galaxies: star formation
Authoring a personal GPT for your research and practice: How we created the Q... (Leonel Morgado)
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done in teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants who have a paid ChatGPT Plus subscription can create a draft of their own assistants. The organizers will provide course materials and a slide deck that participants can use to continue developing their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Parameter-free Modelling of 2D Shapes with Ellipses
1. Parameter-free Modelling of 2D Shapes with Ellipses
Parameter-free Modelling of 2D Shapes with Ellipses
Costas Panagiotakis 1,2 and Antonis Argyros 2,3
1 Dept. of Business Administration, TEI of Crete, Greece
2 Institute of Computer Science, FORTH, Crete, Greece
3 Computer Science Department, University of Crete, Greece
WebPage: https://sites.google.com/site/costaspanagiotakis/research/EFA
2. Parameter-free Modelling of 2D Shapes with Ellipses
Goal
• Develop a method that approximates a given 2D shape with an automatically determined number of ellipses under the Equal Area constraint.
• Equal Area constraint: the total area covered by the ellipses has to be equal to the area of the original shape (see the sketch below).
• We want to achieve:
  • Automatic selection of the number of ellipses.
  • Automatic estimation of the parameters of the ellipses.
  • A good balance between model complexity and shape coverage under the Equal Area constraint.
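As a quick illustration of the Equal Area constraint, the sketch below compares the area of a binary shape (its foreground pixel count) with the total analytic area of a set of ellipses (pi*a*b each). The (cx, cy, a, b, theta) tuple format and the example values are hypothetical stand-ins, not the paper's data structures.

```python
import numpy as np

# Hypothetical ellipse representation: (cx, cy, a, b, theta),
# with semi-axes a, b in pixels. The analytic area of an ellipse is pi*a*b.
def total_ellipse_area(ellipses):
    return sum(np.pi * a * b for (_, _, a, b, _) in ellipses)

def shape_area(mask):
    # Area of a binary shape = number of foreground pixels.
    return int(np.count_nonzero(mask))

# Made-up example: a 60x40 rectangular blob (area 2400) and two ellipses.
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 30:70] = True
ellipses = [(50, 40, 20, 10, 0.0), (50, 65, 28, 20, 0.0)]
print(shape_area(mask), round(total_ellipse_area(ellipses), 1))
# Under the Equal Area constraint these two numbers should match.
```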
3. Parameter-free Modelling of 2D Shapes with Ellipses
Research highlights
• The first parameter-free method that automatically estimates an unknown number of ellipses that best fit a given 2D shape under the Equal Area constraint.
• A novel definition of shape complexity that exploits the shape skeleton.
• A good balance between model complexity and shape coverage.
• Experiments on more than 4,000 2D shapes show the effectiveness of the proposed methods.
• The proposed solutions agree with human intuition.
4. Parameter-free Modelling of 2D Shapes with Ellipses
Background
• Input: a binary image I that represents a 2D shape of area A.
• Output: a set E of k ellipses Ei whose areas sum to A.
• Goal: compute the number k and the parameters of the ellipses Ei so that the trade-off between shape coverage and model complexity is optimised.
• Shape coverage α(E) is the percentage of the 2D shape points that are covered by at least one ellipse in E (see the sketch below).
Fig 1. The given binary image.
Fig 2. The proposed solution, k = 6, with 96.6% shape coverage.
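A minimal sketch of how the shape coverage α(E) could be computed, assuming ellipses are given as (cx, cy, a, b, theta) tuples (a hypothetical representation; the paper does not prescribe one): rasterize each ellipse over the image grid and count the fraction of shape pixels covered.

```python
import numpy as np

def coverage(mask, ellipses):
    """Fraction of foreground pixels of `mask` covered by >= 1 ellipse.

    `ellipses` is a list of (cx, cy, a, b, theta) tuples with semi-axes
    a, b in pixels and orientation theta in radians (a stand-in format).
    """
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    covered = np.zeros_like(mask, dtype=bool)
    for cx, cy, a, b, theta in ellipses:
        # Rotate pixel coordinates into the ellipse's own frame.
        dx, dy = xx - cx, yy - cy
        u = dx * np.cos(theta) + dy * np.sin(theta)
        v = -dx * np.sin(theta) + dy * np.cos(theta)
        covered |= (u / a) ** 2 + (v / b) ** 2 <= 1.0
    return np.count_nonzero(mask & covered) / np.count_nonzero(mask)
```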
5. Parameter-free Modelling of 2D Shapes with Ellipses
Shape complexity and model selection
• A new shape complexity measure C based on the Medial Axis Transform (MAT) of the shape.
• Model selection: the Akaike Information Criterion (AIC) is used to define a novel, entropy-based shape complexity measure that balances the model complexity against the model approximation error (see the note below).
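The slide's specific criterion did not survive extraction; as context, the textbook AIC trade-off that it instantiates is

\[ \mathrm{AIC} = 2p - 2\ln\hat{L} \]

where $p$ is the number of free model parameters (each ellipse contributes five: centre $x$, $y$, semi-axes $a$, $b$, and orientation $\theta$) and $\hat{L}$ is the model likelihood. Fewer, better-fitting ellipses yield a lower AIC; the paper's exact formulation of the error term should be taken from the publication itself.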
6. Parameter-free Modelling of 2D Shapes with Ellipses
Methods
• In order to minimise the AIC, two variants are proposed and
evaluated:
• (a) AEFA (Augmentative Ellipse Fitting Algorithm): Gradually
increases the number of considered ellipses starting from a single
one.
• (b) DEFA (Decremental Ellipse Fitting Algorithm): decreases the
number of ellipses starting from a large, automatically defined set.
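To give a feel for the augmentative search, here is a minimal stand-in using scikit-learn's GaussianMixture: the foreground pixel coordinates are fitted with k = 1, 2, ... components and the k with the lowest AIC is kept. Note this uses sklearn's generic AIC over pixel samples and plain EM initialisation, not the paper's coverage-based criterion or the skeleton-guided initialisation used by AEFA.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ellipses_by_aic(mask, k_max=10, seed=0):
    """Illustrative stand-in for an augmentative ellipse search.

    Fits GMMs with 1..k_max components to the foreground pixel
    coordinates and returns the model with the lowest AIC; each
    Gaussian component corresponds to one fitted ellipse.
    """
    pts = np.argwhere(mask).astype(float)  # (row, col) of shape pixels
    best, best_aic = None, np.inf
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(pts)
        aic = gmm.aic(pts)
        if aic < best_aic:
            best, best_aic = gmm, aic
    # Means give ellipse centres; covariance eigen-decompositions give
    # the axes and orientations of the corresponding ellipses.
    return best
```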
7. Parameter-free Modelling of 2D Shapes with Ellipses
AEFA
(a)–(e): The solutions proposed by AEFA using one to five ellipses. (f) The six circles in SCC that initialise GMM-EM for k = 6. (g) The solution of AEFA for k = 6. (h) The solution when circles are selected based only on their size. (i) The association of pixels to the final solution of AEFA for k = 6 ellipses. (j) The AIC and BIC criteria for different values of k. Captions show the estimated values of shape coverage.
8. Parameter-free Modelling of 2D Shapes with Ellipses
DEFA
(a)–(f): The intermediate solutions proposed by DEFA using 11, 8, 7, 6, 5, and 4 ellipses. Captions show the estimated values of shape coverage. (g) The skeleton of the 2D shape. (h) The association of pixels to k = 8 ellipses, which is the final solution estimated by DEFA. (i) The AIC and BIC criteria for different values of k.
9. Parameter-free Modelling of 2D Shapes with Ellipses
Datasets
• MPEG7 (standard): 1400 images.
• LEMS (standard): 1462 images.
• SISHA (SImple SHApe): 32 images to evaluate scale, shear, and noise effects, in three variants:
  • SISHA-SCALE
  • SISHA-SHEAR
  • SISHA-NOISE
[Figure: shapes of the SISHA dataset, including the 1st shape of SISHA-SCALE and the 1st shape of SISHA-SHEAR.]
10. Parameter-free Modelling of 2D Shapes with Ellipses
Quantitative results
• Pr(m/AIC): the percentage of dataset images on which method m clearly outperforms the other two under the AIC.
• Pr(m/α): the percentage of dataset images on which method m clearly outperforms the other two under the coverage α.
11. Parameter-free Modelling of 2D Shapes with Ellipses
AEFA results (qualitative)
Representative success (top) and failure (bottom) examples of the AEFA method. Captions show the estimated values of shape coverage.
12. Parameter-free Modelling of 2D Shapes with Ellipses
DEFA results (qualitative)
Representative success (top) and failure (bottom) examples of the DEFA method. Captions show the estimated values of shape coverage.
13. Parameter-free Modelling of 2D Shapes with Ellipses
Summary
• A parameter-free methodology for automatically estimating the number and the parameters of ellipses under the Equal Area constraint.
• Experiments on more than 4,000 2D shapes assess the effectiveness of AEFA and DEFA on a variety of shapes, shape transformations, noise models, and noise contamination levels.
• DEFA slightly outperforms AEFA, especially for shapes of medium and high complexity.
• The solutions proposed by AEFA and DEFA seem to agree with human intuition.
14. Parameter-free Modelling of 2D Shapes with Ellipses
Next steps / future work
• Application of the proposed approach to the problem of automatically recovering the unknown kinematic structure of an unmodelled articulated object from several, temporally ordered views of it.
• Extensions of DEFA/AEFA towards handling shape primitives other than ellipses.
Acknowledgments:
This work was partially supported by the EU FP7-ICT-2011-9-601165 project WEARHAP.
WebPage: https://sites.google.com/site/costaspanagiotakis/research/EFA