This document compares the error bounds of two classes of dominant point detection methods: 1) methods based on minimizing a distance measure such as maximum deviation or integral square error, and 2) methods based on digital straight segments. For distance-based methods, the error bound is determined by the maximum deviation of pixels from the line segments between dominant points. For digital-straight-segment methods, the error bound depends on the control parameters that define blurred or approximate digital straight segments. The document analyzes specific methods in each class and plots the theoretical error bounds to facilitate understanding and parameter selection for dominant point detection.
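To make the distance-based criterion concrete, here is a minimal sketch (our own illustration, not code from the paper) that computes the maximum deviation of contour pixels from the chord joining two candidate dominant points:

```python
import numpy as np

def max_deviation(points, p_start, p_end):
    """Maximum perpendicular distance from contour points to the
    chord joining two candidate dominant points."""
    d = np.asarray(p_end, float) - np.asarray(p_start, float)
    norm = np.hypot(d[0], d[1])
    diffs = np.asarray(points, float) - np.asarray(p_start, float)
    # |cross product| / chord length = perpendicular distance.
    dists = np.abs(diffs[:, 0] * d[1] - diffs[:, 1] * d[0]) / norm
    return dists.max()

# Example: pixels of a digitized arc against the chord (0,0)-(10,0).
pts = [(1, 1), (3, 2), (5, 2), (7, 1), (9, 0)]
print(max_deviation(pts, (0, 0), (10, 0)))  # 2.0
```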
Hierarchical Approach for Total Variation Digital Image Inpainting (IJCSEA Journal)
The art of restoring a damaged image so that the repair is undetectable is known as inpainting. Manual inpainting is usually a very time-consuming process; digitizing the technique makes it automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstructs the lost regions using the information surrounding them. Existing methods perform very well when the region to be reconstructed is small, but fail to reconstruct properly as the area increases. This paper describes a hierarchical method in which the area to be inpainted is reduced over multiple levels and the Total Variation (TV) method is used to inpaint at each level. The algorithm performs better than existing algorithms such as nearest-neighbor interpolation, inpainting through blurring, and Sobolev inpainting.
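As a rough sketch of the single-level TV step that such a hierarchy would invoke at each scale (this is a generic TV-flow inpainting iteration, not the paper's exact algorithm; the step size dt, smoothing constant eps, and periodic boundaries are our simplifications):

```python
import numpy as np

def tv_inpaint(img, mask, n_iter=500, dt=0.1, eps=1e-3):
    """Fill pixels where mask is True by gradient descent on the
    (smoothed) total-variation energy; known pixels stay fixed."""
    u = img.astype(float).copy()
    u[mask] = img[~mask].mean()          # crude initial fill
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        grad_mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / grad_mag, uy / grad_mag
        # Divergence of the normalized gradient (backward differences).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u[mask] += dt * div[mask]        # update only unknown pixels
    return u
```

A hierarchical variant would downsample the image and mask, run this step coarse-to-fine, and use each coarse result to initialize the next level.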
A New Method Based on MDA to Enhance the Face Recognition Performance (CSCJournals)
A novel tensor-based method is proposed to solve the supervised dimensionality reduction problem. Multilinear principal component analysis (MPCA) is first used to reduce the dimension of the tensor objects, and multilinear discriminant analysis (MDA) is then applied to find the best subspaces. Because the number of possible subspace dimensions for tensor objects is extremely large, testing all of them to find the best one is not feasible, so the paper also presents a method to address this problem. Its main criterion differs from sequential mode truncation (SMT), and full projection is used to initialize the iterative solution and find the best dimension for MDA. This saves the extra time that would otherwise be spent searching for the best dimension, so execution time decreases substantially. Both algorithms work with tensor objects of the same order, so the structure of the objects is never broken, which improves performance. These algorithms avoid the curse of dimensionality and perform better with small sample sizes. Finally, experiments on the ORL and CMU-PIE databases are provided.
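The core operation behind both MPCA and MDA is multiplying a tensor by a projection matrix along each mode. A minimal NumPy sketch of that standard mode-n product (an illustration, not the authors' code):

```python
import numpy as np

def mode_n_multiply(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode
    (the core operation behind MPCA/MDA projections)."""
    t = np.moveaxis(tensor, mode, 0)            # bring mode to front
    shape = t.shape
    unfolded = t.reshape(shape[0], -1)          # mode-n unfolding
    projected = matrix @ unfolded               # project that mode
    return np.moveaxis(projected.reshape((matrix.shape[0],) + shape[1:]),
                       0, mode)

# Project a 32x32x5 tensor sample down to 8x8x5 with two mode matrices.
x = np.random.rand(32, 32, 5)
u0, u1 = np.random.rand(8, 32), np.random.rand(8, 32)
y = mode_n_multiply(mode_n_multiply(x, u0, 0), u1, 1)
print(y.shape)  # (8, 8, 5)
```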
Offline Signature and Numeral Recognition in the Context of Cheques (IJERA Editor)
A signature is considered one of the biometrics. A signature verification system is required almost everywhere a person or their credentials must be authenticated before a transaction can proceed, especially for bank cheques. For this purpose the signature verification system must be powerful and accurate, and various methods have been used to date to make it so. The research here concerns offline signature verification. Shape contexts have been used to verify whether two shapes are similar, with applications such as digit recognition, 3D object recognition, and trademark retrieval. In this paper we present a modified version of the shape context for signature verification on bank cheques using a K-Nearest Neighbor classifier.
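For reference, a compact sketch of the standard (unmodified) shape context descriptor that the paper builds on: each point gets a log-polar histogram of where the other points lie relative to it (the bin counts and radial range below are conventional choices, not the paper's):

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar histogram of relative point positions, one per point."""
    pts = np.asarray(points, float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]           # pairwise offsets
    r = np.linalg.norm(diff, axis=2)
    theta = np.arctan2(diff[..., 1], diff[..., 0])     # angles in [-pi, pi]
    r_mean = r[r > 0].mean()
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * r_mean
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
    hists = np.zeros((n, n_r, n_theta))
    for i in range(n):
        m = np.arange(n) != i                          # exclude self
        h, _, _ = np.histogram2d(r[i, m], theta[i, m],
                                 bins=[r_edges, t_edges])
        hists[i] = h                 # points beyond the outer ring are ignored
    return hists
```

Two shapes are then compared by matching points whose histograms are most similar, typically with a chi-squared cost.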
Real interpolation method for transfer function approximation of distributed ... (TELKOMNIKA JOURNAL)
A distributed parameter system (DPS) is one of the most complex systems in control theory. The transfer function of a DPS may contain rational, nonlinear, and irrational components, which makes studying it difficult in both the time domain and the frequency domain. In this paper, a systematic approach is proposed for linearizing a DPS. The approach is based on the real interpolation method (RIM), which approximates the transfer function of the DPS by a rational-order transfer function. Numerical examples show that the method is simple, computationally efficient, and flexible.
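The idea behind RIM is to match the transfer function at a set of real-axis interpolation nodes and solve a small linear system for the coefficients of the rational approximant. A toy sketch under those assumptions (the node choice, approximant order, and delay-plant example are ours):

```python
import numpy as np

def rim_fit(W, nodes):
    """Fit W(s) ~ (b0 + b1*s)/(1 + a1*s + a2*s^2) by matching the
    transfer function at four real interpolation nodes."""
    d = np.asarray(nodes, float)
    w = np.array([W(s) for s in d])
    # Row k:  b0 + b1*d - a1*w*d - a2*w*d^2 = w   (linear in unknowns)
    A = np.column_stack([np.ones_like(d), d, -w * d, -w * d**2])
    b0, b1, a1, a2 = np.linalg.solve(A, w)
    return b0, b1, a1, a2

# Example: an irrational (time-delay) plant W(s) = exp(-s)/(s + 1).
W = lambda s: np.exp(-s) / (s + 1)
b0, b1, a1, a2 = rim_fit(W, [0.1, 0.5, 1.0, 2.0])
approx = lambda s: (b0 + b1 * s) / (1 + a1 * s + a2 * s**2)
print(W(0.3), approx(0.3))   # values should be close on the real axis
```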
Performance and analysis of improved unsharp masking algorithm for image (IAEME Publication)
This document presents a study on improving an unsharp masking algorithm for image enhancement. It proposes using an exploratory data analysis model that decomposes an image into a model component and a residual component. The proposed algorithm then individually processes these components to increase contrast and sharpness while reducing halo effects and out-of-range issues. It defines new log-ratio operations for a generalized linear system using concepts from vector spaces and Bregman divergence to provide a theoretical basis for the algorithm. Experimental results showed the proposed algorithm enhanced contrast and sharpness better than previous methods.
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation (IJECEIAES)
Some image regions carry unbalanced information, such as blurred contours, shadows, and uneven brightness; these are called ambiguous regions. Ambiguous regions cause problems during the region-merging stage of interactive image segmentation because they carry double information, appearing both as object and as background. We propose a new region-merging strategy using fuzzy similarity measurement for image segmentation. The proposed method has four steps: initial segmentation using the mean-shift algorithm; manually placing markers to indicate the object and background regions; determining the fuzzy (ambiguous) regions in the image; and merging the fuzzy regions using fuzzy similarity measurement. Experimental results demonstrate that the proposed method successfully segments natural images and dental panoramic images, with average misclassification error (ME) values of 1.96% and 5.47%, respectively.
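One simple way to realize a fuzzy similarity measurement between an ambiguous region and the marked classes is a fuzzy Jaccard ratio over normalized feature histograms; the sketch below is our illustration (histogram features and the specific measure are assumptions, not necessarily the paper's):

```python
import numpy as np

def fuzzy_similarity(h1, h2):
    """Fuzzy set similarity of two normalized histograms:
    |min(A,B)| / |max(A,B)| (a fuzzy Jaccard measure)."""
    return np.minimum(h1, h2).sum() / np.maximum(h1, h2).sum()

def assign_ambiguous(region_hist, object_hist, background_hist):
    """Merge an ambiguous region into whichever marked class
    it is more similar to."""
    s_obj = fuzzy_similarity(region_hist, object_hist)
    s_bg = fuzzy_similarity(region_hist, background_hist)
    return ("object" if s_obj >= s_bg else "background"), (s_obj, s_bg)

# Toy 8-bin intensity histograms (normalized).
obj = np.array([0, 0, .1, .2, .4, .2, .1, 0])
bg  = np.array([.4, .3, .2, .1, 0, 0, 0, 0])
amb = np.array([.05, .05, .1, .2, .3, .2, .1, 0])
print(assign_ambiguous(amb, obj, bg))   # ('object', ...)
```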
FACE RECOGNITION USING DIFFERENT LOCAL FEATURES WITH DIFFERENT DISTANCE TECHN... (IJCSEIT Journal)
A face recognition system using different local features with different distance measures is proposed in this paper. The proposed method is fast and gives accurate detection. The feature vector is based on eigenvalues, eigenvectors, and diagonal vectors of sub-images: images are partitioned into sub-images to capture local features, the sub-partitions are rearranged into vertical and horizontal matrices, and eigenvalues, eigenvectors, and diagonal vectors are computed for these matrices. A global feature vector is then assembled for face recognition. Experiments are performed on the benchmark YALE face database. Results indicate that the proposed method gives better recognition performance, in terms of average recognition rate and retrieval time, than existing methods.
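A minimal sketch of the block-wise eigen-feature idea (one plausible reading of the construction; the block size and the use of eigenvalue magnitudes are our assumptions):

```python
import numpy as np

def local_eigen_features(img, block=8):
    """Split an image into block x block sub-images and collect
    eigenvalue magnitudes and diagonals as a global feature vector."""
    h, w = img.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            sub = img[r:r + block, c:c + block].astype(float)
            vals = np.linalg.eigvals(sub)       # eigenvalues of the block
            feats.extend(np.abs(vals))          # magnitudes (may be complex)
            feats.extend(np.diag(sub))          # main diagonal
    return np.asarray(feats)

x = np.random.rand(32, 32)
print(local_eigen_features(x).shape)   # (256,) = 16 blocks * (8 + 8)
```

Recognition then reduces to comparing these global vectors under the chosen distance measure (Euclidean, city-block, etc.).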
Quality Management of Bathymetric Surface Modelling (Ririn Indahyani)
The aim of this work is to build a bathymetric surface model from spot-depth data and to improve quality management of those data. The output of the bathymetric surface model is a Digital Surface Model (DSM). Building a DSM requires good-quality spot-depth data, so quality control is needed first. The quality of a spot depth includes its horizontal and vertical position: Total Vertical Uncertainty (TVU) is used for the vertical position and Total Horizontal Uncertainty (THU) for the horizontal position. Both are compared with the standard deviation; a standard deviation smaller than the THU or TVU indicates good quality. After quality control, contour interpolation is needed to build the DSM. There are many interpolation methods, for example kriging and natural neighbour. The output of this work is a good bathymetric surface model built using natural neighbour interpolation.
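A tiny sketch of the pass/fail check described above, using IHO S-44 Order 1a formulas as illustrative uncertainty limits (the coefficients follow that standard, but the function and variable names are ours):

```python
import math

def passes_quality(sd_horizontal, sd_vertical, thu, tvu):
    """A sounding passes QC when its standard deviations are smaller
    than the allowed horizontal (THU) and vertical (TVU) uncertainties."""
    return sd_horizontal < thu and sd_vertical < tvu

# IHO S-44 Order 1a example: TVU = sqrt(a^2 + (b*depth)^2),
# THU = 5 m + 5% of depth.
depth = 20.0
a, b = 0.5, 0.013
tvu = math.sqrt(a**2 + (b * depth)**2)
thu = 5.0 + 0.05 * depth
print(passes_quality(0.3, 0.4, thu=thu, tvu=tvu))   # True
```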
Local Phase Oriented Structure Tensor To Segment Texture Images With Intensit... (CSCJournals)
This paper proposes an active-contour-based texture image segmentation scheme using the linear structure tensor and a tensor-oriented steerable quadrature filter. The linear structure tensor (LST) is a popular tool for unsupervised texture segmentation, but it contains only horizontal and vertical orientation information and lacks both other orientations and the image intensity information on which active contours depend. In this paper the LST is therefore modified by adding intensity information from the tensor-oriented structure tensor to enrich the orientation information. In the proposed model, these phase-oriented features are used as an external force in a region-based active contour model (ACM) to segment texture images with intensity inhomogeneity as well as noisy images. To validate the proposed model, a quantitative analysis of accuracy on the Berkeley image database is also presented.
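For orientation, a minimal NumPy/SciPy sketch of the plain linear structure tensor that the paper starts from (the Gaussian integration scale sigma is an assumed parameter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_structure_tensor(img, sigma=2.0):
    """Gaussian-smoothed outer product of image gradients:
    J = G_sigma * (grad I . grad I^T)."""
    gy, gx = np.gradient(img.astype(float))
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    return jxx, jxy, jyy

img = np.random.rand(64, 64)
jxx, jxy, jyy = linear_structure_tensor(img)
# Eigen-decomposition of [[jxx, jxy], [jxy, jyy]] at each pixel gives the
# local orientation and coherence features that the paper augments.
```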
In this paper we introduce a system for automatic recognition of Amazigh characters, based on the Random Forest method, in unconstrained images captured by mobile-phone terminals. After some preprocessing of the image, the text is segmented into lines and then into characters. In the feature extraction stage, the input is represented by a vector of primitives from zoning, diagonal and horizontal features, Gabor filters, and Zernike moments. These features are linked to pixel densities and are extracted from binary images. In the classification stage, we examine four classification schemes built on two classifier types, namely support vector machines (SVM) and Random Forests. In validation tests, the learning and recognition system based on the Random Forest showed good performance on a base of 100 image models.
A plethora of algorithms exists for image segmentation, and their execution time raises several issues. Image segmentation can be cast as a labeling problem under a probabilistic framework. To estimate the label configuration, an iterative optimization scheme alternates between maximum a posteriori (MAP) estimation and maximum likelihood (ML) estimation. In this paper the technique is modified so that it completes segmentation within a stipulated time period. Extensive experiments show that the results are comparable with existing algorithms, while execution is faster, giving automatic segmentation without any human intervention and results that match image edges very close to human perception.
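A compact sketch of the alternating MAP-ML idea: per-label Gaussian parameters are re-estimated (the ML step) and the label field is updated against likelihood plus a Potts smoothness prior (an ICM-style MAP step). The initialization, beta, and toy image are our assumptions:

```python
import numpy as np

def map_ml_segment(img, beta=1.0, n_iter=10):
    """Alternate ML estimation of per-label Gaussian parameters with
    an ICM-style MAP update of a two-label field (Potts prior)."""
    k = 2
    labels = (img > np.median(img)).astype(int)       # rough initial labels
    for _ in range(n_iter):
        # ML step: per-label mean and variance.
        mu = np.array([img[labels == c].mean() for c in range(k)])
        var = np.array([img[labels == c].var() + 1e-6 for c in range(k)])
        # MAP step: negative log-likelihood minus neighbour agreement.
        cost = (img[..., None] - mu)**2 / (2 * var) + 0.5 * np.log(var)
        for c in range(k):
            same = sum(np.roll(labels, s, axis=a) == c
                       for a in (0, 1) for s in (-1, 1))
            cost[..., c] -= beta * same               # 4-connected prior
        labels = cost.argmin(axis=-1)
    return labels

img = np.concatenate([np.random.rand(32, 64) * .3,
                      np.random.rand(32, 64) * .3 + .7])
print(np.unique(map_ml_segment(img)))   # [0 1]
```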
This document presents a new approach for fingerprint matching called the Minutia Cylindrical Code (MCC). It involves extracting minutia points from fingerprint images, then generating a code for each fingerprint based on the local structure and spatial relationships of minutia points within a cylindrical neighborhood. MCC codes make the fingerprints invariant to scale and rotation. The approach is tested on a database of 200 fingerprints and achieves false acceptance rates between 6% and 13% and false rejection rates below 0.12%, depending on the threshold used. The MCC approach performs fingerprint matching efficiently while maintaining accuracy even when fingerprints are rotated or scaled.
This document presents a new color image segmentation approach based on overlap wavelet transform (OWT). OWT extracts wavelet features to better separate different patterns in an image. The proposed method also uses morphological operators and 2D histogram clustering for effective segmentation. It is concluded that the proposed OWT method improves segmentation quality, is reliable, fast and computationally less complex than direct histogram clustering. When tested on various color spaces, the proposed segmentation scheme produced better results in RGB color space compared to others. The main advantages are its use of a single parameter and faster speed.
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES (cscpconf)
In the first study [1], a combination of K-means, the watershed segmentation method, and a Difference In Strength (DIS) map was used to perform image segmentation and edge detection. An initial segmentation is obtained with the K-means clustering technique. Starting from this, two techniques are used: the first is the watershed technique with new merging procedures, based on mean intensity value, to segment the image regions and detect their boundaries; the second is an edge-strength technique that obtains accurate edge maps without using the watershed method. This technique solves the problem of the undesirable over-segmentation produced when the watershed algorithm is used directly on raw image data, and the resulting edge maps have no broken lines across the entire image. In the second study, level-set methods are used to implement curve/interface evolution under various forces. In the third study, the main idea is to detect region (object) boundaries so as to isolate and extract individual components from a medical image. This is done with active contours, based on techniques of curve evolution, the Mumford–Shah functional for segmentation, and level sets. The images are first classified into different intensity regions using a Markov random field; regions whose boundaries are not necessarily defined by gradient are then detected by minimizing a Mumford–Shah segmentation energy, which in the level-set formulation becomes a mean-curvature flow that stops on the desired boundary. The stopping term does not depend on the image gradient, as it does in classical active contours; the initial level-set curve can be anywhere in the image, and interior contours are detected automatically. The final segmentation is one closed boundary per actual region in the image.
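A rough sketch of the K-means-then-watershed pipeline from the first study, assuming SciPy and scikit-image are available (the marker construction by erosion of cluster masks is our simplification, not the paper's merging procedure):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def kmeans_watershed(img, k=2):
    """Cluster intensities with a tiny 1-D K-means, then grow regions
    from eroded cluster cores with the watershed over the gradient map."""
    centers = np.quantile(img, np.linspace(0.1, 0.9, k))
    for _ in range(10):                                  # 1-D K-means
        assign = np.abs(img[..., None] - centers).argmin(-1)
        centers = np.array([img[assign == c].mean() for c in range(k)])
    markers = np.zeros(img.shape, dtype=int)
    for c in range(k):
        core = ndi.binary_erosion(assign == c, iterations=2)
        markers[core] = c + 1                 # seeds, away from boundaries
    return watershed(sobel(img), markers)     # regions stop at strong edges

img = np.concatenate([np.full((20, 40), .1), np.full((20, 40), .8)])
img += np.random.rand(40, 40) * .05
print(np.unique(kmeans_watershed(img)))       # [1 2]
```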
This paper proposes a novel technique for detecting point landmarks in 3D medical images based on phase congruency (PC). A bank of 3D log-Gabor filters is used to compute energy maps from the images. These energy maps are combined to form the PC measure, which is invariant to intensity variations and provides good feature localization. Significant 3D point landmarks are detected by analyzing the eigenvectors of PC moments computed at each point. The method is demonstrated on head and neck images for radiation therapy planning.
Two-dimensional Block of Spatial Convolution Algorithm and Simulation (CSCJournals)
This paper proposes an algorithm based on a sub-image segmentation strategy. The proposed scheme divides a grayscale image into overlapped 6×6 blocks, each of which is segmented into four small 3×3 non-overlapped sub-images. A new spatial approach is presented for efficiently computing the 2-dimensional linear convolution or cross-correlation between suitably flipped and fixed filter coefficients (a sub-image, for cross-correlation) and the corresponding input sub-image. The convolution computation is iterated vertically and horizontally for each of the four input sub-images, and the convolution outputs of the four sub-images are converted from 6×6 arrays to 4×4 arrays so that the core of the original image is reproduced. The algorithm offers a simplified processing technique based on a particular arrangement of the input samples, spatial filtering, and small sub-images, which reduces computational complexity compared with well-known FFT-based techniques. The algorithm lends itself to partitioned small sub-images, local spatial filtering, and noise reduction. Its effectiveness is demonstrated through simulation examples.
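To illustrate the block-processing idea (not the paper's exact data arrangement), here is a sketch in which each overlapped 6×6 tile contributes a 4×4 "valid" convolution core, and the cores tile the full result:

```python
import numpy as np
from scipy.signal import convolve2d

def blocked_convolve(img, kernel, block=6, step=4):
    """2-D convolution computed tile-by-tile: each overlapped 6x6 block
    yields a 4x4 'valid' core; the cores tile the full valid-mode result."""
    kh = kernel.shape[0]                     # 3 for a 3x3 kernel
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kh + 1
    out = np.zeros((out_h, out_w))
    for r in range(0, out_h, step):
        for c in range(0, out_w, step):
            tile = img[r:r + block, c:c + block]
            core = convolve2d(tile, kernel, mode="valid")   # 4x4 core
            out[r:r + core.shape[0], c:c + core.shape[1]] = core
    return out

img = np.random.rand(18, 18)
k = np.random.rand(3, 3)
ref = convolve2d(img, k, mode="valid")
print(np.allclose(blocked_convolve(img, k), ref))   # True
```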
Farsi character recognition using new hybrid feature extraction methods (ijcseit)
Identification of visual words and writing has long been one of the most essential and attractive tasks in image processing; it has been studied for decades and touches security, traffic control, psychology, medicine, engineering, and other fields. Previous techniques for identifying visual writing are largely similar in their analysis, differing mainly in the feature extraction chosen for the needs of the operational field. Changes in writing style and font, and the rotation of words, are among the challenges of character identification. In this study, a Persian character identification system uses independent orthogonal moments, namely the Zernike moment and the Fourier-Mellin moment, as the feature extraction technique. The magnitudes of Zernike moments, being rotation-invariant, have been used for classification in the past, while the individual real and imaginary components have been neglected, since together with the phase coefficients each of them changes under rotation. In this study, Zernike and Fourier-Mellin moments are investigated for detecting Persian characters in noisy and noise-free images. An improvement on the k-Nearest Neighbor (k-NN) classifier is also proposed for character recognition. Comparing the proposed method with salient current methods such as Back Propagation (BP) and Radial Basis Function (RBF) neural networks in terms of feature extraction on words shows that, on the Hoda database, the proposed method reaches an acceptable detection rate (96.5%).
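The abstract does not spell out the k-NN improvement; one common refinement, shown below purely as an illustration, is inverse-distance-weighted voting in place of plain majority voting over the moment feature vectors:

```python
import numpy as np

def weighted_knn(train_X, train_y, x, k=5):
    """k-NN with inverse-distance-weighted voting, one simple way to
    refine plain majority voting."""
    d = np.linalg.norm(train_X - x, axis=1)
    idx = np.argsort(d)[:k]
    votes = {}
    for i in idx:
        votes[train_y[i]] = votes.get(train_y[i], 0) + 1.0 / (d[i] + 1e-9)
    return max(votes, key=votes.get)

# Toy 2-D feature vectors standing in for moment features.
X = np.array([[0, 0], [0, 1], [5, 5], [6, 5]], float)
y = ["alef", "alef", "be", "be"]
print(weighted_knn(X, y, np.array([5.5, 5.0]), k=3))   # 'be'
```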
A NOVEL PROBABILISTIC BASED IMAGE SEGMENTATION MODEL FOR REALTIME HUMAN ACTIV... (sipij)
Automatic human activity detection is one of the difficult tasks in image segmentation because of variations in the size, type, shape, and location of objects. In traditional probabilistic graphical segmentation models, intra- and inter-region segments may affect overall segmentation accuracy; moreover, both directed and undirected graphical models, such as Markov models and conditional random fields, have limitations for human activity prediction and heterogeneous relationships. In this paper, we study and propose a natural solution for automatic human activity segmentation using an enhanced probabilistic chain graphical model. The system has three main phases: activity pre-processing, iterative threshold-based image enhancement, and a chain-graph segmentation algorithm. Experimental results show that the proposed system efficiently detects human activities at different levels of the action datasets.
Image restoration based on morphological operations (ijcseit)
This document discusses image restoration using morphological operations. It begins with an abstract describing mathematical morphology and its applications to tasks like noise suppression, feature extraction, and image restoration. It then covers six morphological operations (erosion, dilation, opening, closing, boundary extraction, and region filling) and provides mathematical definitions and illustrations of their effects. Examples of applying these operations to grayscale images with different structuring element shapes are shown. The document concludes that morphological operations are effective for image restoration: applying dilation and erosion with the same factor removes noise while retaining object shapes.
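A minimal sketch of the restoration idea using SciPy's grayscale morphology (the 3×3 flat structuring element and salt-and-pepper test image are our choices):

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_restore(img, size=(3, 3)):
    """Suppress impulsive noise with an opening (removes bright specks)
    followed by a closing (removes dark specks), same structuring element."""
    return grey_closing(grey_opening(img, size=size), size=size)

img = np.full((64, 64), 128.0)
rng = np.random.default_rng(0)
noisy = img.copy()
idx = rng.integers(0, 64, (200, 2))
noisy[idx[:, 0], idx[:, 1]] = rng.choice([0.0, 255.0], 200)  # salt & pepper
restored = morphological_restore(noisy)
print(abs(restored - img).mean() < abs(noisy - img).mean())  # True
```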
This document describes a finite element analysis project involving the development of a finite element code. It summarizes the course content, describes the coding process, presents results from analyzing a plate with a circular hole using different mesh densities, and compares the accuracy of stress predictions across the meshes. Key results include the strategic mesh achieving similar stress prediction accuracy to the densest mesh while using only 1/4 as many elements. The project improved the author's coding and finite element analysis skills.
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysis (sipij)
Texture analysis, such as segmentation and classification, plays a vital role in computer vision and pattern recognition and is widely applied in areas such as industrial automation, biomedical image processing, and remote sensing. In this paper, we first extend the well-known Gabor filters to color images using a specific form of hypercomplex numbers known as quaternions. These filters are constructed as windowed basis functions of the quaternion Fourier transform, also known as the hypercomplex Fourier transform. Building on this extension, the paper applies the new quaternionic Gabor filters to colour texture image segmentation. Experimental results on two colour texture images are presented, and the robustness of the technique is tested by adding Gaussian noise to the texture images. The results indicate that the proposed method gives better segmentation even in the presence of strong noise.
Template matching is a technique used in computer vision to find sub-images in a target image that match a template image. It involves moving the template over the target image and calculating a measure of similarity at each position. This is computationally expensive. Template matching can be done at the pixel level or on higher-level features and regions. Various measures are used to quantify the similarity or dissimilarity between images during the matching process. Template matching has applications in areas like object detection but faces challenges with noise, occlusions, and variations in scale and rotation.
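As a concrete instance of the similarity computation described above, a small sketch of normalized cross-correlation scored at every valid offset (pure NumPy, brute force for clarity):

```python
import numpy as np

def ncc_match(image, template):
    """Normalized cross-correlation score at every valid offset;
    the maximum marks the best template position."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    H, W = image.shape[0] - th + 1, image.shape[1] - tw + 1
    scores = np.empty((H, W))
    for r in range(H):
        for c in range(W):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w**2).sum()) * t_norm
            scores[r, c] = (w * t).sum() / denom if denom else 0.0
    return scores

img = np.random.rand(40, 40)
tpl = img[10:18, 22:30].copy()
scores = ncc_match(img, tpl)
print(np.unravel_index(scores.argmax(), scores.shape))  # (10, 22)
```

Mean subtraction and normalization make the score robust to uniform brightness and contrast changes, which is why NCC is a common choice among the similarity measures mentioned.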
This document discusses techniques for representing digital circuit partitioning problems using graph representations. It presents three encoding techniques to map graph partitions to the problem domain: 1) a binary string where each bit represents a cell and its partition, 2) a string with two regions to represent vertices and edge crossings, and 3) a string with regions for vertices and edges. The techniques are evaluated in terms of suitability, with the second approach more suitable for dense circuits. Net cut evaluation is also described to analyze partitioning solutions.
This document summarizes a research paper that proposes a novel approach for enhancing digital images using morphological operators. The approach aims to improve contrast in images with poor lighting conditions. It uses two morphological operators based on Weber's law - the first employs blocked analysis while the second uses opening by reconstruction to define a multi-background. The performance of the proposed operators is evaluated on images with various backgrounds and lighting conditions. Key steps include dividing images into blocks, estimating minimum/maximum intensities in each block to determine background criteria, and applying contrast enhancement transformations based on the criteria. Opening by reconstruction is also used to approximate image background without modifying structures. Experimental results demonstrate the approach enhances images with poor lighting.
Review and comparison of tasks scheduling in cloud computing (ijfcstjournal)
Recently there has been a dramatic increase in the popularity of cloud computing systems that rent computing resources on demand, bill on a pay-as-you-go basis, and multiplex many users on the same physical infrastructure. The cloud is a virtual pool of resources provided to users via the Internet; it gives users virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. One of the goals is to use the resources efficiently and gain maximum profit. Scheduling is a critical problem in cloud computing because a provider has to serve many users, so scheduling is a major issue in establishing cloud computing systems. Scheduling algorithms should order the jobs so as to balance improved performance and quality of service against efficiency and fairness among the jobs. This paper introduces and explores some of the scheduling methods proposed for cloud computing; finally, the waiting time and execution time of some of the proposed algorithms are evaluated.
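To make the waiting-time comparison concrete, a minimal sketch contrasting first-come-first-served order with shortest-job-first order on a single machine (a generic illustration, not one of the surveyed algorithms):

```python
def average_waiting_time(burst_times):
    """Mean waiting time when jobs run in the given order (FCFS)."""
    waits, elapsed = [], 0
    for b in burst_times:
        waits.append(elapsed)    # each job waits for everything before it
        elapsed += b
    return sum(waits) / len(waits)

jobs = [8, 4, 9, 5]                       # task lengths on one VM
print(average_waiting_time(jobs))         # FCFS order: 10.25
print(average_waiting_time(sorted(jobs))) # SJF order:  7.5
```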
This document provides instructions for a project on conflict resolution consisting of several parts: 1) identify 10 everyday conflict situations, 2) suggest solutions for each situation, 3) create rules of coexistence, 4) act out a conflict situation and its solution, and 5) present a recording of item 4. Items 1-3 are to be submitted on USB, and item 4 will be presented and recorded for upload to Edmodo.
Dilli Babu G.B. is a retail operations coordinator with over 5 years of experience in retail store management, customer relationship management, and team supervision. He has a Bachelor's degree in Hotel Management and is proficient in MS Office, ERP systems, and customer service. Currently residing in Dubai, UAE, his career includes positions managing operations at Brother Gas, Mother Dairy, and Domino's Pizza in India, where he oversaw sales reporting, inventory, customer relations and training junior staff. He is seeking new opportunities utilizing his strong communication skills and experience building trust with customers.
The document is a monograph on the complications of surgical treatment of endometriosis. The author carried out a literature review on the subject, finding 40 studies that mainly addressed the surgical treatment of superficial and deep endometriosis. The aim of the study was to evaluate the complications of surgical treatment of endometriosis, which are directly linked to the surgeon's skill. Videolaparoscopy is the standard procedure for diagnosis and staging, with risks of complications
The document describes the basic concepts of human reproduction, the stages of the reproductive process, sterility and infertility, and their causes and diagnosis. It explains the factors that influence fertility in men and women and the treatments for infertility, including pharmacological and surgical treatments and artificial insemination.
This document summarizes key concepts about e-learning design from the book E-Learning by Design by William Horton. It defines e-learning as using technology for learning experiences. There are varieties of e-learning like standalone courses, simulations, mobile learning, and social learning. Design involves planning instruction, while development is implementation. Instructional design is the process of planning learning by applying learning principles. Various design perspectives and influences are discussed, along with aligning learning goals, objectives, sequences, and activities to create effective e-learning.
What is fixation?
Fixation in orthopedics is the process by which an injury is rendered immobile. This may be accomplished by internal fixation, or by external fixation.
What is internal fixation?
Internal fixation is an operation in orthopedics that involves the surgical implementation of implants for the purpose of repairing a bone.
What is osteosynthesis?
Osteosynthesis is the reduction and internal fixation of a bone fracture with implantable devices that are usually made of metal. It is a surgical procedure with an open or percutaneous approach to the fractured bone. Osteosynthesis aims to bring the fractured bone ends together and immobilize the fracture site while healing takes place. In a fracture that is rigidly immobilized, the fracture heals by the process of intramembranous ossification.
INDICATIONS for internal fixation
History of Fracture Treatment and Development Of Modern Osteosynthesis
In the pre-antibiotic era, closed reduction of fractures was understandably the rule for most fractures. However, when closed reduction was insufficient, external fixation appliances served to maintain skeletal units in position, frequently without the need for MMF (maxillo-mandibular fixation). Following the development of antibiotics, open treatment of fractures began to be used on a more frequent basis.
Rigid internal fixation (RIF) is “Any form of fixation applied directly to the bones which is strong enough to permit active use of the skeletal structure during the healing phase and also helps in healing”.
Bone fractures have been treated with various conservative techniques for centuries and it was not until the eighteenth century that internal fixation was first documented.
Icart, a French surgeon in Castres, performed ligature fixation with brass wire on a young man with a humeral fracture.
In 1886, Hansmann of Hamburg published a technique using retrievable metal bone plates with transcutaneous screws.
Soon after, a Belgian surgeon, Albin Lambotte, improved these techniques and coined the term internal fixation.
Lambotte developed and manufactured a variety of bone plates and screws, and much of his armamentarium remained in use until the 1950s.
In the twentieth century, Sherman improved on Lambotte's designs and created parallel, threaded, fine-pitched, self-tapping screws. This hardware was made of corrosion-resistant vanadium steel, which was a strength improvement over silver and ivory fixation materials.
BIOLOGY OF BONE AND BONE HEALING
Bone is a complex and ever-evolving connective tissue and serves multiple purposes. Besides being the main constituent of the human skeletal system, bone is highly metabolically active and essential for the regulation of serum electrolytes—namely, calcium and phosphate.
Marrow cavities are filled with hematopoietic elements necessary to manufacture and maintain blood components and regulate the immune system. Bone is comprised
Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi... (ijcsit)
A fundamental problem in machine learning is identifying the most representative subset of features from
which we can construct a predictive model for a classification task. This paper aims to present a validation
study of dimensionality reduction effect on the classification accuracy of mammographic images. The
studied dimensionality reduction methods were: locality-preserving projection (LPP), locally linear
embedding (LLE), Isometric Mapping (ISOMAP) and spectral regression (SR). We achieved high classification rates; in some combinations the classification rate was 100%, but in most cases it is about 95%. It was also found that the classification rate increases with the size of the reduced space and that the optimal value of the space dimension is 60. We proceeded to validate the obtained results by measuring validation indices such as the Xie-Beni index, the Dunn index and the Alternative Dunn index. The measurement of these indices confirms that the optimal value of the reduced space dimension is d=60.
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVESZac Darcy
This document summarizes and compares three techniques for polygonal approximation of digital planar curves:
1) Masood's technique which iteratively deletes redundant points and uses a stabilization process to optimize point locations.
2) Carmona's technique which suppresses redundant points using a breakpoint suppression algorithm and threshold.
3) Tanvir's adaptive optimization algorithm which focuses on high curvature points and applies an optimization procedure.
The techniques are evaluated on standard shapes using measures like number of points, compression ratio, error, and weighted error. Masood's technique generally had lower error while Tanvir's often achieved the highest compression.
This document summarizes an international journal article that proposes a two-phase algorithm for face recognition in the frequency domain using discrete cosine transform (DCT) and discrete Fourier transform (DFT). The algorithm works in two phases: the first phase uses Euclidean distance to determine the K nearest neighbor training samples of a test sample. The second phase represents the test sample as a linear combination of the K nearest neighbors and classifies the sample based on which class representation has the smallest deviation from the test sample. Experimental results on FERET and ORL face databases show the two-phase algorithm based on DCT and DFT outperforms other methods like two-phase sparse representation and PCA/LDA in terms of classification accuracy.
The document summarizes research on using the Expectation Maximization (EM) algorithm for LiDAR point cloud classification. It discusses how the EM algorithm works and related work applying it for point cloud classification. The author proposes improvements to the basic EM algorithm by: 1) Splitting the point cloud vertically to reduce computation time, 2) Initializing model parameters, and 3) Using a scheduling parameter to speed convergence. The proposed algorithm is tested on a LiDAR dataset from Vietnam, achieving over 92% accuracy and faster runtime than the original EM algorithm.
LIDAR POINT CLOUD CLASSIFICATION USING EXPECTATION MAXIMIZATION ALGORITHMijnlc
The EM algorithm is a common algorithm in data mining techniques. By iterating its two steps, E and M, the algorithm creates a model that can assign class labels to data points. In addition, EM not only optimizes the parameters of the model but can also predict missing data during the iterations. Therefore, the paper focuses on researching and improving the EM algorithm to suit LiDAR point cloud classification, based on the idea of partitioning the point cloud and using a scheduling parameter in the E step to help the algorithm converge faster with a shorter run time. The proposed algorithm is tested on a measurement data set from Nghe An province, Vietnam, achieving more than 92% accuracy and a faster runtime than the original EM algorithm.
Method of optimization of the fundamental matrix by technique speeded up rob...IJECEIAES
The purpose of determining the fundamental matrix (F) is to define the epipolar geometry and to relate two 2D images of the same scene or video series to find the 3D scenes. The problem we address in this work is the estimation of the localization error and the processing time. We start by comparing the following feature extraction techniques: Harris, features from accelerated segment test (FAST), scale invariant feature transform (SIFT) and speeded-up robust features (SURF), with respect to the number of detected points and correct matches under different changes in the images. Then we merged the best techniques chosen by the objective function, which groups the descriptors by different regions in order to calculate F. Next, we applied the normalized eight-point algorithm, which also automatically eliminates the outliers, to find the optimal solution F. Our optimization approach is tested on real images with different scene variations. Our simulation results provided good accuracy, the computation time of F does not exceed 900 ms, and the projection error is at most 1 pixel, regardless of the modification.
The document discusses improving the accuracy of digital terrain models (DTMs). It compares different algorithms for generating DTMs from point data, including inverse distance weighting, spline interpolation, Voronoi diagrams, Delaunay triangulation, and kriging. The author tests these algorithms on real elevation data from Oradea, Romania. Results show Delaunay triangulation produces the most accurate surface, followed by kriging and Shepard interpolation. Higher point density leads to greater DTM accuracy than the interpolation method used. The quality of DTMs can be improved by using Delaunay triangulation with a dense point distribution.
Development of stereo matching algorithm based on sum of absolute RGB color d...IJECEIAES
This article presents a local-based stereo matching algorithm which comprises the development of an algorithm using block matching and two edge preserving filters in the framework. Fundamentally, the matching process consists of several stages which produce the disparity or depth map. The most challenging part of the matching process is to get accurate corresponding points between two images. Hence, this article proposes an algorithm for stereo matching using an improved Sum of Absolute RGB Differences (SAD), gradient matching and edge preserving filters; the Bilateral Filter (BF) is used to boost the accuracy. SAD and gradient matching are applied at the first stage to get the preliminary corresponding result, then a BF works as an edge-preserving filter to remove the noise from the first stage. A second BF is used at the last stage to improve the final disparity map and enhance the object boundaries. The experimental analysis and validation use the Middlebury standard benchmarking evaluation system. Based on the results, the proposed work is capable of increasing the accuracy and preserving the object edges. To make the proposed work more reliable against currently available methods, a quantitative comparison has been made with other existing methods, and it shows that the proposed work performs much better.
This document discusses the optimal synthesis of four-bar linkages to generate a desired path. It describes using the gradient descent optimization algorithm to minimize the sum of squared errors between target precision points and points reached by the coupler link. The key constraints considered are Grashof's criterion, input link angle order, and transmission angle. A computer program implements gradient descent in MATLAB to determine the optimal four-bar linkage dimensions.
Multimodal Biometrics Recognition by Dimensionality Diminution MethodIJERA Editor
A multimodal biometric system utilizes two or more biometric modalities, e.g., face, ear, fingerprint, signature and palmprint, to improve the recognition accuracy of conventional unimodal methods. We propose a new dimensionality reduction method called Dimension Diminish Projection (DDP) in this paper. DDP can not only preserve local information by capturing the intra-modal geometry, but also effectively extract between-class relevant structures for classification. Experimental results show that our proposed method performs better than other algorithms including PCA, LDA and MFA.
An efficient hardware logarithm generator with modified quasi-symmetrical app...IJECEIAES
This paper presents a low-error, low-area FPGA-based hardware logarithm generator for digital signal processing systems which require high-speed, real time logarithm operations. The proposed logarithm generator employs the modified quasi-symmetrical approach for an efficient hardware implementation. The error analysis and implementation results are also presented and discussed. The achieved results show that the proposed approach can reduce the approximation error and hardware area compared with traditional methods.
The document describes two methods for reducing the order of linear time-invariant systems: Routh approximation and particle swarm optimization (PSO). Routh approximation determines the denominator of the reduced order model using a Routh array, while retaining time moments or Markov parameters to determine the numerator. PSO reduces order by minimizing the integral squared error between responses of the original and reduced models, adjusting numerator and denominator coefficients. The methods are illustrated on examples, with Routh approximation providing stability guarantees when applied to stable systems.
Matching algorithm performance analysis for autocalibration method of stereo ...TELKOMNIKA JOURNAL
Stereo vision is one of the interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, resulting in the depth estimation. Camera calibration is the most important step in stereo vision. The calibration step is used to generate an intrinsic parameter of each camera to get a better disparity map. In general, the calibration process is done manually by using a chessboard pattern, but this process is an exhausting task. Self-calibration is an important ability required to overcome this problem. Self-calibration required a robust and good matching algorithm to find the key feature between images as reference. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process. The matching algorithms used in this research are SIFT, SURF, and ORB. The result shows that SIFT performs better than other methods.
Texture classification of fabric defects using machine learning IJECEIAES
In this paper, a novel algorithm for automatic fabric defect classification is proposed, based on the combination of a texture analysis method and a support vector machine (SVM). Three texture methods were used and compared: GLCM, LBP, and LPQ. They were combined with an SVM classifier. The system has been tested using the TILDA database. A comparative study of the performance and the running time of the three methods was carried out. The obtained results are interesting and show that LBP is the best method for recognition and classification, and they confirm that the SVM is a suitable classifier for such problems. We demonstrate that some defects are easier to classify than others.
Backtracking based integer factorisation, primality testing and square root c...csandit
Breaking a big integer into two factors is a famous problem in the fields of mathematics and cryptography. Many crypto-systems use such a big number as their key, or as part of a key, on the assumption that it is so big that the fastest factorisation algorithms running on the fastest computers could not factorise it within a practical period of time. Hence, many efforts have been made over the decades to break those crypto-systems by finding two factors of an integer. In this paper, a new factorisation technique is proposed which is based on the concept of backtracking. Binary bit-by-bit operations are performed to find two factors of a given integer. This proposed solution can be applied to computing square roots, primality testing, finding prime factors of integer numbers, etc. If the proposed solution is proven to be efficient enough, it may break the security of many crypto-systems. Implementation and performance comparison of the technique are kept for future research.
Web image annotation by diffusion maps manifold learning algorithmijfcstjournal
Automatic image annotation is one of the most challenging problems in machine vision. The goal of this task is to predict a number of keywords automatically for images captured in real data. Many methods are based on visual features in order to calculate similarities between image samples, but the computation cost of these approaches is very high, and they require many training samples to be stored in memory. To lessen this burden, a number of techniques have been developed to reduce the number of features in a dataset. Manifold learning is a popular approach to nonlinear dimensionality reduction. In this paper, we investigate the diffusion maps manifold learning method for the web image auto-annotation task. The diffusion maps method is used to reduce the dimension of some visual features. Extensive experiments and analysis on the NUS-WIDE-LITE web image dataset with different visual features show how this manifold learning dimensionality reduction method can be applied effectively to image annotation.
Stereo matching based on absolute differences for multiple objects detectionTELKOMNIKA JOURNAL
This article presents a new algorithm for object detection using a stereo camera system. The problem in getting accurate object detection using a stereo camera is the imprecision of the matching process between two scenes with the same viewpoint. Hence, this article aims to reduce the incorrect matching of pixels with four stages. This new algorithm is a combination of the continuous processes of matching cost computation, aggregation, optimization and filtering. The first stage is matching cost computation to acquire a preliminary result using an absolute differences method. Then the second stage, known as the aggregation step, uses a guided filter with a fixed window support size. After that, the optimization stage uses the winner-takes-all (WTA) approach, which selects the smallest matching difference value and normalizes it to the disparity level. The last stage in the framework uses a bilateral filter, which effectively further decreases the error on the disparity map containing the information of object detection and locations. The proposed work produces low errors (i.e., 12.11% and 14.01% nonocc and all errors) based on the KITTI dataset and is capable of performing much better than before the proposed framework was applied, and competitively with some newly available methods.
Study on a Hybrid Segmentation Approach for Handwritten Numeral Strings in Fo...inventionjournals
This paper presents a hybrid approach to segment single- or multiple-touching handwritten numeral strings in form document, the core of which is the combined use of foreground, background and recognition analysis. The algorithm first located some feature points on both the foreground and background skeleton images containing connected numeral strings in form document. Possible segmentation paths were then constructed by matching these feature points, with an unexpected benefit of removing useless strokes. Subsequently, all these segmentation paths were validated and ranked by a recognition-based analysis, where a well-trained two-stage classifier was applied to each separated digit image to obtain its reliability. Finally, by introducing a locally optimal strategy to accelerate the recognition process, the top ranked segmentation path survived to help make a decision on whether to accept or not. Experimental results show that the proposed method can achieve a correct segmentation rate of 96.2 percent on a large dataset collected by our own.
This document compares the accuracy of determining volumes using close range photogrammetry versus traditional methods. It presents a case study where the volume of a test field was calculated using both approaches. Using traditional methods with 425 control points, the volume was calculated as 221475.14 m3 using trapezoidal rules, 221424.52 m3 using Simpson's rule, and 221484.05 m3 using Simpson's 3/8 rule. Using close range photogrammetry with 42 control points and 574 generated points, the volume was calculated as 215310.60 m3 using trapezoidal rules, 215300.43 m3 using Simpson's rule, and 215304.12 m3 using Simpson's 3/8 rule.
In this paper a novel edge detection method has been proposed which outperforms the Otsu method [1]. The proposed detection algorithm has been devised using the concept of genetic algorithms in the spatial domain. The key to edge detection is the choice of threshold, which determines the results of edge detection. A GA has been used to determine an optimal threshold over the image. Results are compared with the existing Otsu technique and show better performance.
Assessing Error Bound For Dominant Point Detection
International Journal of Image Processing (IJIP), Volume (6) : Issue (5) : 2012
Dilip K. Prasad dilipprasad@gmail.com
School of Computer Engineering
Nanyang Technological University
Singapore, 639798
Abstract
This paper compares the error bounds of two classes of dominant point detection methods, viz. methods based on reducing a distance metric and methods based on digital straight segments. Using two methods from each class, we highlight that the error bounds within the same class may vary depending upon the fine details and control parameters of the methods, but that the essential error bound can be determined analytically for each class. The assessment presented in this paper will enable the users of dominant point detection methods to understand the nature of error in the method of their choice and help them make better decisions about the methods and their control parameters.
Keywords: Dominant point detection, digital straight segment, error bound, comparison, discrete
geometry.
1 : INTRODUCTION
Dominant point detection in digital curves is a preliminary but important step in various image processing applications like shape extraction, object detection, etc. [1-13]. Despite being a very old problem of interest, it still attracts significant attention in the research community. Some of the recent polygonal approximation (PA) methods are proposed by Masood [14, 15], Carmona-Poyato [16], Wu [17], Kolesnikov [18, 19], Chung [20], Nguyen [21], and Bhowmick [22], while a few older ones are found in [23-33]. These algorithms can generally be classified by the approach they take. Often, the algorithmic approach is used for classification. For example, some use dynamic programming [18, 19, 23], while others use splitting [27-29], merging [24], tree search [21, 22, 34], suppression of break points [14-16, 35, 36], etc. However, this is not the focus of the current work.
The focus of the current work is the basic discrete geometry approach used in the methods, since the geometric concept underlying a method determines its achievable accuracy, i.e., the inherent error bound of the dominant point detection method. In the sense of the geometric approach, there are three major categories: methods based on minimization of a distance metric (like maximum deviation, integral square error, etc.) [14-20, 27-33, 37], methods based on the concept of digital straight segments [21, 22, 38], and methods based on curvature and convexity [17, 20, 25, 26, 30-32]. We highlight that the methods based on curvature and convexity often use a k-cosine term for studying the convexity and curvature changes, and the decisive factor in the selection of dominant points is often based on one distance metric or another. Thus, the error analysis of methods based on curvature and convexity is considered redundant with the error analysis of methods based on distance metrics presented in this paper.
The outline of the paper is as follows. The error bound for the methods based on distance metrics
is presented in section 2. The error bound for the methods based on digital straight segments is
presented in section 3. The paper is concluded in section 4.
2 : METHODS BASED ON MINIMIZING THE MAXIMUM DEVIATION
The two most famous and classic methods from this class of dominant point detection methods are considered here: the Ramer-Douglas-Peucker (RDP) method [28, 29] and Lowe's method [27].
These classical methods split the digital curve based on the maximum deviation. They laid the foundation for dominant point detection in terms of the algorithm, the idea of using distance measures, and the support region of the dominant points. Thus, they have been the basis of several later works on dominant point detection. Notwithstanding the later work, their computational efficiency and their effectiveness in representing digital curves have ensured the continued popularity and use of these methods in several image processing applications. These methods and their error metrics are discussed in the subsequent sub-sections.
2.1 Ramer-Douglas-Peucker method
The Ramer-Douglas-Peucker (RDP) method was first proposed in [28, 29]. The method is briefly described as follows. Consider a digital curve $S = \{P_1, P_2, \ldots, P_N\}$, where $P_i$ is the $i$th edge pixel in the digital curve. By default, the start and end points $P_1$ and $P_N$ are included in the list of dominant points. If the digital curve is a closed loop, then only one of them is included. For the straight line joining the points $P_1$ and $P_N$, the deviation $d_i$ of each pixel $P_i(x_i, y_i) \in S$ is computed. Accordingly, the pixel with maximum deviation (MD) can be found; let it be denoted as $P_{\max}$. Then, considering the pairs $(P_1, P_{\max})$ and $(P_{\max}, P_N)$, two new pixels of maximum deviation are found from $S$. This process is repeated, and the algorithm terminates when the condition in inequality (1) is satisfied by all the line segments:

$$\max(d_i) \le d_{\text{tol}}, \quad (1)$$

where $d_{\text{tol}}$ is the chosen threshold and its value is typically a few pixels.
Thus, the error bound of the RDP method is determined by the theoretical value of the maximum deviation $\max(d_i)$ of the pixels from the polygon formed by the dominant points, or by the control parameter $d_{\text{tol}}$. In general, researchers heuristically choose the value of $d_{\text{tol}}$ in the range $[1, 2]$.
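To make the recursive splitting concrete, a minimal Python sketch is given below; the list-of-pixel-tuples representation and the function names are illustrative assumptions on our part, not part of the original formulation in [28, 29].

```python
import math

def deviation(p, a, b):
    """Perpendicular distance of pixel p from the straight line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    seg_len = math.hypot(dx, dy)
    if seg_len == 0:                       # degenerate segment
        return math.hypot(x - x1, y - y1)
    return abs(dy * (x - x1) - dx * (y - y1)) / seg_len

def rdp(points, d_tol=1.5):
    """Recursive RDP splitting: keep the end points and split at the pixel of
    maximum deviation until max(d_i) <= d_tol holds for every segment."""
    if len(points) < 3:
        return list(points)
    d_max, i_max = 0.0, 0
    for i in range(1, len(points) - 1):
        d = deviation(points[i], points[0], points[-1])
        if d > d_max:
            d_max, i_max = d, i
    if d_max <= d_tol:                     # termination condition, eq. (1)
        return [points[0], points[-1]]
    left = rdp(points[:i_max + 1], d_tol)
    right = rdp(points[i_max:], d_tol)
    return left[:-1] + right               # avoid duplicating the split pixel
```

The default `d_tol=1.5` simply reflects the heuristic range $[1, 2]$ mentioned above.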
It is notable that the maximum deviation is used as the optimization goal or termination condition in several methods [14, 15, 20, 36, 37]. Further, several other methods use the integral square error (ISE) as the optimization goal or termination condition [18-20, 32, 33]. Since the maximum value of the integral square error is upper bounded by $N(\max(d_i))^2$, where $N$ is the number of pixels, $\max(d_i)$ serves as the indicator of the upper bound for these methods as well.
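Spelling this step out (our restatement of the bound used above):

$$\mathrm{ISE} = \sum_{i=1}^{N} d_i^2 \;\le\; \sum_{i=1}^{N} \left(\max_i d_i\right)^2 \;=\; N \left(\max_i d_i\right)^2.$$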
2.2 Lowe’s method
While the algorithmic structure of Lowe's method proposed in [27] is essentially similar to RDP, its termination condition is different from RDP and more useful than using only the maximum deviation. Lowe considers two distances: first, the maximum deviation $\max(d_i)$ of the pixels in the digital curve spanned by two consecutive dominant points (just like the RDP method); second, the distance between the two dominant points (say $s$). He defines the significance ratio:

$$r = s / \max(d_i), \quad (2)$$

and uses it as the basis for the decision to retain a dominant point. For three consecutive dominant points $DP_{j-1}$, $DP_j$, and $DP_{j+1}$, the dominant point $DP_j$ is retained if $\max(r_{j-1,j}, r_{j,j+1}) > r_{j-1,j+1}$. This is done for all the dominant points except the start and end points. Then the points with $r_j < 4$ are also deleted.
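A sketch of this retention rule is shown below, reusing the `deviation` helper from the RDP sketch above; the traversal over the original triples and the handling of zero deviation are our assumptions, and the final deletion of points with $r_j < 4$ would be an analogous filtering pass.

```python
import math  # deviation() is reused from the RDP sketch above

def significance(points, i, j):
    """Significance ratio r = s / max(d_i) over the sub-curve points[i..j]."""
    a, b = points[i], points[j]
    s = math.hypot(b[0] - a[0], b[1] - a[1])
    d_max = max((deviation(points[k], a, b) for k in range(i + 1, j)),
                default=0.0)
    return math.inf if d_max == 0 else s / d_max

def retain_lowe(points, dp_idx):
    """Keep an interior dominant point j when splitting at j is more
    significant than the direct segment spanning its neighbours."""
    kept = [dp_idx[0]]
    for i, j, k in zip(dp_idx, dp_idx[1:], dp_idx[2:]):
        if max(significance(points, i, j),
               significance(points, j, k)) > significance(points, i, k):
            kept.append(j)
    kept.append(dp_idx[-1])
    return kept
```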
It is notable that Lowe assumes that the maximum deviation is always at least 2 pixels. It was shown in [37, 39] that the maximum deviation is less than 2 pixels in most cases. Further, it is evident that the error bound of Lowe's method is determined by the inverse of the significance ratio, $\max(d_i)/s$.
It is also notable that similar significance ratios were used by several subsequent researchers as well [11, 16, 17, 20, 31-35], sometimes as a constraint and at other times as the decision determinant. Most notably, such a ratio has served as the criterion for the support region of the dominant points.
2.3 Comparison of the methods based on the error bound of the maximum deviation
It was shown in [37, 39, 40] that if a continuous line segment is digitized, the maximum distance $d_{\max}$ of the pixels of the digital line from the continuous line segment is given by a closed-form expression (eq. (3)) in the length $s$ of the continuous line segment, the slope $\tan\theta$ of the line segment, and an auxiliary quantity $t_{\max}$, which is itself a function of $s$ and $\theta$ (eq. (4)). While the error bound $d_{\max}$ directly applies to the methods based on maximum deviation and integral square error, such as those discussed in section 2.1, the ratio $d_{\max}/s$, obtained by dividing eq. (3) by $s$, applies to the methods based on the significance ratio, such as those discussed in section 2.2 (eq. (5)).
Figure 1: Illustration of the error bounds of the distance based methods. (a) Plot of $d_{\max}$. (b) Plot of $d_{\max}/s$.
These two error bounds are plotted in Figure 1. Figure 1(a) shows the theoretical error bound for the RDP method. Based on the analysis, it is seen that the typical range $d_{\text{tol}} \in [1, 2]$ used by most researchers incorporates the theoretical error bound. This is also the basis of the non-parametric framework presented in [37]. Figure 1(a) also indicates that Lowe's assumption that the maximum deviation is at least two pixels is incorrect; the maximum deviation is typically less than two pixels. Figure 1(b) shows the theoretical error bound for Lowe's method. It is seen that the maximum value of $d_{\max}/s$ is close to 0.5, which indicates that the constraint on the value of the significance ratio can be $r \ge 2$, as opposed to Lowe's criterion $r \ge 4$. A similar analysis can be made regarding the error bounds of other distance based methods using eq. (3).
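Because the closed form in eqs. (3)-(5) is easy to misread, a quick empirical illustration of the digitization bound is sketched below: it digitizes a segment by rounding its samples to pixel centres, takes the chord between the digitized end points, and records the worst pixel deviation. This is an illustrative experiment of ours, not the derivation of [37, 39, 40].

```python
import math

def max_digitization_deviation(s, theta, offsets=21):
    """Empirical max deviation of digitized pixels from the chord joining the
    digitized end points, for a segment of length s and slope angle theta."""
    worst = 0.0
    for k in range(offsets):                  # sub-pixel start offsets
        ox = oy = k / offsets
        n = max(2, int(s))                    # roughly one sample per pixel
        pts = [(round(ox + t * s * math.cos(theta)),
                round(oy + t * s * math.sin(theta)))
               for t in (i / (n - 1) for i in range(n))]
        (x1, y1), (x2, y2) = pts[0], pts[-1]
        dx, dy = x2 - x1, y2 - y1
        ln = math.hypot(dx, dy)
        if ln == 0:
            continue
        for (x, y) in pts[1:-1]:
            worst = max(worst, abs(dy * (x - x1) - dx * (y - y1)) / ln)
    return worst

# Sweeping slopes in [0, 45] degrees, the observed maxima stay below
# two pixels, consistent with the discussion of Figure 1(a).
print(max(max_digitization_deviation(20, math.radians(t)) for t in range(46)))
```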
3 : METHODS BASED ON DIGITAL STRAIGHT SEGMENTS
The concept of digital straight segments (DSS) is a mathematical concept of discrete geometry [41] which specifically relates a continuous line segment to a digital line segment and discusses their properties. Evidently, it should serve as an important concept for dominant point detection methods. However, the simplicity and effectiveness of the already popular distance based methods, together with the mathematical rigor of the concepts of DSS, have restricted researchers' interest in using DSS for dominant point detection. Nevertheless, DSS based methods are a very important class of dominant point detection methods, especially for extremely noisy digital curves, for which the distance based metrics force over-fitting of the dominant points. Here, we consider two recent methods based on DSS: Nguyen's method [21] and Bhowmick's method [22].
3.1 Nguyen’s method of blurred digital straight segments
Nguyen [21] uses the concept of maximally blurred segments for determining the dominant points on a noisy digital curve. The concept of blurred segments is in turn based upon the concept of DSS, which is presented here briefly. A digital curve is called a digital straight segment $D(a, b, \mu, \omega)$, with $a, b, \mu, \omega \in \mathbb{Z}$, if the points $(x, y)$ on the digital curve satisfy the inequality below:

$$\mu \le ax - by < \mu + \omega \quad (6)$$

The digital straight segment is called maximal if $\omega = a + b$, and a blurred segment of width $v$ if $(\omega - 1)/\max(a, b) \le v$ [21]. Thus, the error bound of Nguyen's method is determined by the value of the control parameter $v$. The value of $v$ used in [21] varies from 0.7 for noiseless digital curves to 9 for noisy digital curves.
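As a small illustration of definition (6), the following checks whether a set of pixels satisfies the DSS inequality for given parameters and evaluates the blurred-segment width; the function names and the example line are our illustrative choices, and Nguyen's actual algorithm recognizes maximal blurred segments incrementally, which is not shown here.

```python
def is_dss(pixels, a, b, mu, omega):
    """Check mu <= a*x - b*y < mu + omega for every pixel (eq. (6))."""
    return all(mu <= a * x - b * y < mu + omega for (x, y) in pixels)

def blurred_width(a, b, omega):
    """Width of the blurred segment: (omega - 1) / max(a, b)."""
    return (omega - 1) / max(a, b)

# Pixels of the digital line y = round(2x/5), a DSS with a = 2, b = 5:
pixels = [(x, round(x * 2 / 5)) for x in range(11)]
assert is_dss(pixels, a=2, b=5, mu=-2, omega=5)
print(blurred_width(2, 5, 5))   # (5 - 1) / 5 = 0.8
```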
3.2 Bhowmick’s method of approximate digital straight segments
Bhowmick's method [22] is based on approximate digital straight segments (ADSS), which are also based upon the concept of DSS. However, as compared to several usual works on DSS, Bhowmick uses the properties of the DSS derived from the Freeman chain code in [41]. Out of the four properties of DSS (R1-R4 in [22]), only two (R1 and R2) are used for defining ADSS, and two additional conditions (c1 and c2 in [22]) are imposed for the digital curve to be concluded to be an ADSS. It is highlighted that the isothetic error bound of the ADSS was presented in [22] but is reconsidered here for comparison with other methods. According to [22], the isothetic error bound is a decreasing function of $p$, the minimum intermediate run-length in the Freeman chain code of the digital curve (see [22] for details), and its maximum value is 1.5. However, the error bound of the polygonal approximation in [22] is the product of the isothetic error and a control parameter $\tau$ selected heuristically. Thus, the net error bound of Bhowmick's method is given by $1.5\tau$. The value of $\tau$ in [22] varies from 1 to 14.
3.3 Comparison of the methods based on digital straight segments
The methods based on DSS can be compared with each other directly based on the maximum isothetic distance (the vertical or horizontal distance of the pixels from the continuous line segment) of the digital curve from the line segments formed by the dominant points. Thus, from the control parameters' values and the error bounds of Nguyen's and Bhowmick's methods, it can be seen that the maximum values of the upper error bounds of these methods are 9 and 21 respectively, while the minimum values are 0.7 and 1.5 respectively.
If the DSS based methods are to be compared with the distance based methods, the maximum perpendicular distance of the pixels from the digital curves should be determined for the DSS based methods as well. Such a distance can be easily computed by taking the projection of the maximum isothetic distance on the direction normal to the line segments. If the isothetic distance is denoted by $d_{\text{iso}}$, then the desired distance for comparison is computed as $d_{\text{iso}}\cos\theta$, where $\tan\theta$ is the slope of the line segment joining the dominant points. It is notable that in both [21] and [22] it is assumed that $0 \le a \le b$, which implies that $\tan\theta \in [0, 1]$.
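The conversion from an isothetic bound to a perpendicular one is a one-liner; the sketch below applies it to the extreme parameter values quoted above (the packaging and names are ours).

```python
import math

def perpendicular_bound(d_iso, a, b):
    """Project an isothetic bound onto the segment normal: d_iso * cos(theta),
    with tan(theta) = a/b and 0 <= a <= b."""
    return d_iso * math.cos(math.atan2(a, b))

# Extremes of the upper error bounds: v in [0.7, 9] and 1.5*tau, tau in [1, 14]
for name, d_iso in [("Nguyen, v = 0.7", 0.7), ("Nguyen, v = 9", 9.0),
                    ("Bhowmick, tau = 1", 1.5), ("Bhowmick, tau = 14", 21.0)]:
    bounds = [perpendicular_bound(d_iso, a, 100) for a in range(101)]
    print(name, round(min(bounds), 3), "to", round(max(bounds), 3))
```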
Accordingly, the upper bound of the maximum deviation of Nguyen's method (for width $v \in [0.7, 9]$) varies as shown in Figure 2(a), and that of Bhowmick's method (for $\tau \in [1, 14]$) varies as shown in Figure 2(b). From both the upper bounds, it is seen that the DSS based methods allow for a large value of maximum deviation, which is especially suitable for extremely noisy curves. As a trade-off, the quality of fitting in the DSS based methods is severely dependent upon the choice of the control parameters.
Figure 2: Illustration of the error bounds of the maximum deviations of Nguyen's and Bhowmick's methods. (a) Maximum deviation for Nguyen's method for $v \in [0.7, 9]$. (b) Maximum deviation for Bhowmick's method for $\tau \in [1, 14]$.
It is also worth considering the error bound when the blurred segments of Nguyen [21] and the ADSS of Bhowmick [22] are both forced to be maximal straight segments, which is a well-defined, control-parameter-independent setting. In this situation, consider the equation of a line in eqn. (8):

$$ax - by = c \quad (8)$$

where $a$ and $b$ correspond to a maximal digital line segment $D(a, b, \mu)$, while the points $P(x, y)$ belong to the continuous two-dimensional space. For the pixels $P(x_i, y_i)$ belonging to the digital straight segment $D(a, b, \mu)$, if they are to satisfy eqn. (8), then $c$ has to satisfy inequality (9):

$$\mu \le c \le \mu + a + b \quad (9)$$

Using the above, the perpendicular distance (deviation) of the pixels in the maximal DSS from the line given in eqn. (8) satisfies eqn. (10) below:

$$0 \le d_i \le \sin\theta + \cos\theta \quad (10)$$

where $\theta = \tan^{-1}(a/b)$. This error bound, $d_{\text{DSS}} = \sin\theta + \cos\theta$, is plotted in Figure 3. It can be seen that the maximum deviation for the DSS based methods (assuming no blurring or approximation of DSS) is about $\sqrt{2}$ pixels.
Figure 3: Illustration of the error bounds of the methods based on digital straight segments.
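A short check of this bound over the admissible slopes (an illustrative script of ours):

```python
import math

# d_DSS = sin(theta) + cos(theta) for tan(theta) in [0, 1]:
thetas = [math.atan(t / 100) for t in range(101)]
print(max(math.sin(th) + math.cos(th) for th in thetas))   # ~1.414 = sqrt(2)
```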
4 : CONCLUSION
The error bounds of various methods falling into two categories of dominant point detection methods are assessed and compared in this paper. It is shown that the analytical bound on the maximum deviation can be computed for both distance based and DSS based dominant point detection methods. It is observed in each analysis that the error bound depends upon the orientation of the line segment and the control parameter of the algorithm. The assessment also gives clues for assessing the error bounds of other methods. Thus, this work shall help researchers in studying the effect of the control parameters and the error bounds of dominant point detection methods. A well-understood choice of dominant point detection method shall in turn result in better performance for the higher level applications as well [1-13, 42, 43].
REFERENCES
[1] S. Lavallee and R. Szeliski, "Recovering the position and orientation of free-form objects
from image contours using 3D distance maps," IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 17, pp. 378-390, 1995.
[2] D. K. Prasad and M. K. H. Leung, "A hybrid approach for ellipse detection in real images,"
in 2nd International Conference on Digital Image Processing, Singapore, 2010, pp. 75460I-
6.
[3] J. H. Elder and R. M. Goldberg, "Image editing in the contour domain," IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 23, pp. 291-296, 2001.
[4] D. K. Prasad and M. K. H. Leung, "Reliability/Precision Uncertainty in Shape Fitting
Problems," in IEEE International Conference on Image Processing, Hong Kong, 2010, pp.
4277-4280.
[5] D. Brunner and P. Soille, "Iterative area filtering of multichannel images," Image and Vision
Computing, vol. 25, pp. 1352-1364, 2007.
[6] D. K. Prasad and M. K. H. Leung, "Error analysis of geometric ellipse detection methods
due to quantization," in Fourth Pacific-Rim Symposium on Image and Video Technology
(PSIVT 2010), Singapore, 2010, pp. 58 - 63.
[7] D. K. Prasad and M. K. H. Leung, "An ellipse detection method for real images," in 25th
International Conference of Image and Vision Computing New Zealand (IVCNZ 2010),
Queenstown, New Zealand, 2010, pp. 1-8.
[8] R. Yang and Z. Zhang, "Eye gaze correction with stereovision for video-teleconferencing,"
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 956-960,
2004.
[9] D. K. Prasad and M. K. H. Leung, "Methods for ellipse detection from edge maps of real
images," in Machine Vision - Applications and Systems, F. Solari, M. Chessa, and S.
Sabatini, Eds., ed: InTech, 2012, pp. 135-162.
[10] D. K. Prasad, C. Quek, and M. K. H. Leung, "Fast segmentation of sub-cellular organelles,"
International Journal of Image Processing, vol. 6, 2012.
[11] A. Kolesnikov and P. Fränti, "Data reduction of large vector graphics," Pattern Recognition,
vol. 38, pp. 381-394, 2005.
[12] F. Mokhtarian and A. Mackworth, "Scale-based description and recognition of planar
curves and two-dimensional shapes," IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. PAMI-8, pp. 34-43, 1986.
[13] D. K. Prasad, M. K. H. Leung, and S. Y. Cho, "Edge curvature and convexity based ellipse
detection method," Pattern Recognition, vol. 45, pp. 3204-3221, 2012.
[14] A. Masood, "Dominant point detection by reverse polygonization of digital curves," Image
and Vision Computing, vol. 26, pp. 702-715, 2008.
[15] A. Masood and S. A. Haq, "A novel approach to polygonal approximation of digital curves,"
Journal of Visual Communication and Image Representation, vol. 18, pp. 264-274, 2007.
[16] A. Carmona-Poyato, F. J. Madrid-Cuevas, R. Medina-Carnicer, and R. Muñoz-Salinas,
"Polygonal approximation of digital planar curves through break point suppression," Pattern
Recognition, vol. 43, pp. 14-25, 2010.
[17] W. Y. Wu, "An adaptive method for detecting dominant points," Pattern Recognition, vol.
36, pp. 2231-2237, 2003.
[18] A. Kolesnikov and P. Fränti, "Reduced-search dynamic programming for approximation of
polygonal curves," Pattern Recognition Letters, vol. 24, pp. 2243-2254, 2003.
[19] A. Kolesnikov and P. Fränti, "Polygonal approximation of closed discrete curves," Pattern
Recognition, vol. 40, pp. 1282-1293, 2007.
[20] K. L. Chung, P. H. Liao, and J. M. Chang, "Novel efficient two-pass algorithm for closed
polygonal approximation based on LISE and curvature constraint criteria," Journal of Visual
Communication and Image Representation, vol. 19, pp. 219-230, 2008.
[21] T. P. Nguyen and I. Debled-Rennesson, "A discrete geometry approach for dominant point
detection," Pattern Recognition, vol. 44, pp. 32-44, 2011.
[22] P. Bhowmick and B. B. Bhattacharya, "Fast polygonal approximation of digital curves using
relaxed straightness properties," IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 29, pp. 1590-1602, 2007.
[23] J. C. Perez and E. Vidal, "Optimum polygonal approximation of digitized curves," Pattern
Recognition Letters, vol. 15, pp. 743-750, 1994.
[24] L. J. Latecki and R. Lakämper, "Convexity Rule for Shape Decomposition Based on
Discrete Contour Evolution," Computer Vision and Image Understanding, vol. 73, pp. 441-
454, 1999.
[25] B. K. Ray and K. S. Ray, "An algorithm for detection of dominant points and polygonal
approximation of digitized curves," Pattern Recognition Letters, vol. 13, pp. 849-856, 1992.
[26] P. V. Sankar and C. U. Sharma, "A parallel procedure for the detection of dominant points
on a digital curve," Computer Graphics and Image Processing, vol. 7, pp. 403-412, 1978.
[27] D. G. Lowe, "Three-dimensional object recognition from single two-dimensional images,"
Artificial Intelligence, vol. 31, pp. 355-395, 1987.
[28] U. Ramer, "An iterative procedure for the polygonal approximation of plane curves,"
Computer Graphics and Image Processing, vol. 1, pp. 244-256, 1972.
[29] D. H. Douglas and T. K. Peucker, "Algorithms for the reduction of the number of points
required to represent a digitized line or its caricature," Cartographica: The International
Journal for Geographic Information and Geovisualization, vol. 10, pp. 112-122, 1973.
[30] C.-H. Teh and R. T. Chin, "On the detection of dominant points on digital curves," IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 11, pp. 859-872, 1989.
[31] N. Ansari and K. W. Huang, "Non-parametric dominant point detection," Pattern
Recognition, vol. 24, pp. 849-862, 1991.
[32] T. M. Cronin, "A boundary concavity code to support dominant point detection," Pattern
Recognition Letters, vol. 20, pp. 617-634, 1999.
[33] M. Salotti, "Optimal polygonal approximation of digitized curves using the sum of square
deviations criterion," Pattern Recognition, vol. 35, pp. 435-443, 2002.
[34] B. Sarkar, S. Roy, and D. Sarkar, "Hierarchical representation of digitized curves through
dominant point detection," Pattern Recognition Letters, vol. 24, pp. 2869-2882, 2003.
[35] M. Marji and P. Siy, "Polygonal representation of digital planar curves through dominant
point detection - A nonparametric algorithm," Pattern Recognition, vol. 37, pp. 2113-2130,
2004.
[36] D. K. Prasad, C. Quek, and M. K. H. Leung, "A non-heuristic dominant point detection
based on suppression of break points," in Image Analysis and Recognition. vol. 7324, A.
Campilho and M. Kamel, Eds., ed Aveiro, Portugal: Springer Berlin Heidelberg, 2012, pp.
269-276.
[37] D. K. Prasad, M. K. H. Leung, C. Quek, and S.-Y. Cho, "A novel framework for making
dominant point detection methods non-parametric," Image and Vision Computing, 2012.
[38] G. Damiand and D. Coeurjolly, "A generic and parallel algorithm for 2D digital curve
polygonal approximation," Journal of Real-Time Image Processing, vol. 6, pp. 145-157,
2011.
[39] D. K. Prasad, C. Quek, M. K. H. Leung, and S. Y. Cho, "A parameter independent line
fitting method," in Asian Conference on Pattern Recognition (ACPR), Beijing, China, 2011,
pp. 441-445.
[40] D. K. Prasad and M. K. H. Leung, "Polygonal representation of digital curves," in Digital
Image Processing, S. G. Stanciu, Ed., ed: InTech, 2012, pp. 71-90.
[41] A. Rosenfeld, "Digital straight line segments," IEEE Transactions on Computers, vol. C-23,
pp. 1264-1269, 1974.
[42] D. K. Prasad, "Adaptive traffic signal control system with cloud computing based online
learning," in 8th International Conference on Information, Communications, and Signal
Processing (ICICS 2011), Singapore, 2011.
[43] D. K. Prasad, R. K. Gupta, and M. K. H. Leung, "An Error Bounded Tangent Estimator for
Digitized Elliptic Curves," in Discrete Geometry for Computer Imagery. vol. 6607, ed:
Springer Berlin / Heidelberg, 2011, pp. 272-283.