Crest lines convey the inherent features of a shape. Mathematically, crest lines are defined as the extrema of the surface principal curvatures along their corresponding lines of curvature. In this study we use an automatic threshold estimation technique to detect crest lines. We first compute the principal curvatures and their corresponding directions at each vertex of the mesh; we then compute a saliency value as a linear combination of the maximal absolute curvature and the absolute curvature difference; finally, we automatically determine a threshold on the saliency value to detect the crest lines. For illustration, we demonstrate the method on several examples.
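The pipeline above can be sketched as follows. The saliency combination follows the abstract; the weights `alpha`/`beta` and the use of Otsu's method for the automatic threshold are assumptions, since the abstract does not specify the estimation rule.

```python
import numpy as np

def crest_saliency(k1, k2, alpha=1.0, beta=1.0):
    """Per-vertex saliency: a linear combination of the maximal absolute
    principal curvature and the absolute curvature difference.
    (alpha and beta are illustrative weights, not the paper's values.)"""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    k_max = np.maximum(np.abs(k1), np.abs(k2))
    k_diff = np.abs(np.abs(k1) - np.abs(k2))
    return alpha * k_max + beta * k_diff

def otsu_threshold(values, bins=64):
    """Automatic threshold via Otsu's method, used here as a stand-in
    for the paper's unspecified threshold-estimation technique."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0   # mean below the split
        m1 = (p[i:] * centers[i:]).sum() / w1   # mean above the split
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

# Vertices whose saliency exceeds the threshold are crest-line candidates.
```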
3D Mesh Segmentation Using Local Geometry (ijcga)

We develop an algorithm to segment a 3D surface mesh into visually meaningful regions based on analyzing the local geometry of vertices. We begin with a novel characterization of mesh vertices as convex, concave, or hyperbolic depending upon their discrete local geometry. Hyperbolic and concave vertices are considered potential feature-region boundaries. We propose a new region-growing technique starting from these boundary vertices, leading to a segmentation of the surface that is subsequently simplified by a region-merging method. Experiments indicate that our algorithm segments a broad range of 3D models at a quality comparable to existing algorithms. Its use of methods that belong naturally to discretized surfaces and its ease of implementation make it an appealing alternative in various applications.
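The convex/concave/hyperbolic characterization can be illustrated with discrete curvature signs. The abstract does not give the exact discrete criterion, so this sketch assumes the standard classification by the signs of discrete Gaussian curvature K (angle deficit) and mean curvature H:

```python
def classify_vertex(K, H, eps=1e-8):
    """Classify a mesh vertex from discrete curvature estimates.

    K : discrete Gaussian curvature (e.g. angle deficit over vertex area)
    H : discrete mean curvature, signed w.r.t. the outward normal
    This uses the textbook curvature-sign rule as a stand-in for the
    paper's (unspecified) discrete criterion.
    """
    if K > eps:                          # elliptic vertex
        return "convex" if H > 0 else "concave"
    if K < -eps:                         # saddle-shaped vertex
        return "hyperbolic"
    return "flat"
```

Under this rule, hyperbolic and concave vertices would then seed the boundary set from which region growing starts.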
2D Shape Reconstruction Based on Combined Skeleton-Boundary Features (CSCJournals)
Reconstructing a shape into a meaningful representation plays a strong role in shape-related applications. Our approach is motivated by recent studies in human visual perception discussing the importance of certain shape-boundary features as well as features of the shape area; it utilizes properties of the shape skeleton, based on symmetry axes, combined with curvature-based boundary features to determine protrusion strength. The main contribution of this paper is the combination of skeleton and boundary information by deploying the symmetry-curvature duality method to simulate human perception, building on results from research in visual perception. The experiments directly compare our algorithm with experiments on human subjects and show that the proposed approach matches human perceptual intuition. In comparison to existing methods, our method gives a perceptually more reasonable and stable result. Furthermore, reconstruction of noisy shapes demonstrates the robustness of our method, and experiments on different data sets confirm the invariance of the combined skeleton-boundary representation.
Image Segmentation Based on Chan-Vese Active Contours Using Finite Difference... (ijsrd.com)
Many image segmentation techniques try to differentiate between background and object pixels, but many of them fail to discriminate between different objects that are close to each other; for example, low contrast between foreground and background regions increases the difficulty of segmenting images. We therefore introduce the Chan-Vese active contour model for image segmentation to detect the objects in a given image; it is built on techniques of curve evolution and the level set method. The Chan-Vese model is a special case of the Mumford-Shah functional for segmentation and level sets. It differs from other active contour models in that it is not edge dependent, and it is therefore more capable of detecting objects whose boundaries are not defined by a gradient. Finally, we developed code in Matlab 7.8 to solve the resulting partial differential equation numerically with finite difference schemes on a pixel-by-pixel domain.
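For reference, the standard Chan-Vese functional minimized by this model is, for an image $u_0$, an evolving contour $C$, and region averages $c_1$ (inside) and $c_2$ (outside):

```latex
E(c_1, c_2, C) = \mu\,\mathrm{Length}(C)
  + \nu\,\mathrm{Area}\bigl(\mathrm{inside}(C)\bigr)
  + \lambda_1 \int_{\mathrm{inside}(C)} \lvert u_0(x,y) - c_1 \rvert^2 \,dx\,dy
  + \lambda_2 \int_{\mathrm{outside}(C)} \lvert u_0(x,y) - c_2 \rvert^2 \,dx\,dy
```

Because the fitting terms compare intensities to the region averages rather than to an edge map, the model detects boundaries that have no strong gradient.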
A Method to Determine End-Points of Straight Lines Detected Using the Hough Tr... (IJERA Editor)
The Hough transform is often used to detect lines in images, yielding the equations of the lines found. It works by transforming a line in a given image to a point in a new transform image while accumulating a measure of the likelihood that a point in the new image corresponds to a line in the original image. The resulting equation describes a line of unspecified length, with no information about the end-points of the actual lines in the image that informed the detection. This paper presents a method to determine the end-points of the actual lines in the image. The method tracks the points in the original image whose transforms provided evidence of lines in the transform image. Consecutive points are then grouped into sub-lines: a group is accepted when it contains enough points to constitute a significant sub-line, and when all points in the group are far enough from any other points along the same line that those other points should not be considered part of the same sub-line. Sample results are shown.
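The grouping step can be sketched as follows, for points already projected to 1-D coordinates along one detected line. The two thresholds (`max_gap`, `min_points`) are illustrative assumptions; the paper's actual criteria are not given in the abstract.

```python
def group_subline_points(positions, max_gap, min_points):
    """Group points detected along one Hough line into sub-lines.

    positions : sorted 1-D coordinates of contributing points along the line
    max_gap   : start a new group when consecutive points are farther apart
    min_points: discard groups too small to be a significant sub-line
    Returns (start, end) coordinate pairs, one per accepted sub-line.
    """
    sublines, group = [], []
    for p in positions:
        if group and p - group[-1] > max_gap:
            if len(group) >= min_points:
                sublines.append((group[0], group[-1]))
            group = []
        group.append(p)
    if len(group) >= min_points:            # flush the final group
        sublines.append((group[0], group[-1]))
    return sublines
```

The start and end of each accepted group give the end-points of one actual line segment in the image.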
Nose Tip Detection Using Shape Index and Energy Effective for 3D Face Recogni... (IJMER)
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum for scholarly research related to engineering and science education.
Solving the Pose Ambiguity via a Simple Concentric Circle Constraint (Dr. Amarjeet Singh)
Estimating the pose of objects with circle features from images is a basic and important problem in the computer vision community. This paper focuses on the ambiguity problem in pose estimation from a circle feature, and a new method is proposed based on a concentric-circle constraint. The pose of a single circle feature can, in general, be determined from its projection onto the image plane with a pre-calibrated camera; however, there are generally two possible sets of pose parameters. By introducing the concentric-circle constraint, interference from the false solution can be excluded. On the basis of elements at infinity in projective geometry and the Euclidean distance invariant, the cases in which the concentric circles are coplanar and non-coplanar are discussed respectively. Experiments on both cases are performed to validate the proposed method.
Since the skeleton represents the topological structure of both the query sketch and the 2D views of a 3D model, this paper proposes a novel sketch-based 3D model retrieval algorithm that uses skeleton characteristics as the features describing object shape. First, we propose the advanced skeleton strength map (ASSM) algorithm to create the skeleton: it computes the skeleton strength map by isotropic diffusion on the gradient vector field, selects critical points from the map, and connects them with Kruskal's algorithm. Then, we propose a histogram feature comparison algorithm that uses the radii of the disks at skeleton points and the lengths of skeleton branches to build the histogram feature, and compares the similarity between two skeletons using the histogram feature matrix of skeleton endpoints. Experimental results demonstrate that our approach, which combines these two algorithms, significantly outperforms several leading sketch-based retrieval approaches.
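The histogram feature can be illustrated with a simplified sketch: normalized histograms of disk radii and branch lengths, compared with an L1 distance. The bin counts, value ranges, and the reduction of the endpoint feature matrix to a single vector are all assumptions for illustration.

```python
import numpy as np

def skeleton_histogram(radii, branch_lengths, bins=16, r_max=1.0, l_max=1.0):
    """Concatenated, normalized histograms of the disk radii at skeleton
    points and of the skeleton branch lengths (ranges are assumptions)."""
    h_r, _ = np.histogram(radii, bins=bins, range=(0.0, r_max))
    h_l, _ = np.histogram(branch_lengths, bins=bins, range=(0.0, l_max))
    h = np.concatenate([h_r, h_l]).astype(float)
    return h / max(h.sum(), 1.0)

def skeleton_distance(h1, h2):
    """L1 distance between two skeleton histogram features; the paper's
    endpoint-matrix matching is reduced here to a single-vector compare."""
    return float(np.abs(h1 - h2).sum())
```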
Automatic rectification of perspective distortion from a single image using p... (ijcsa)
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in the field of computer vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and of making a perspective correction that is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image. A method based on plane homography and transformation is used to perform the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters; the frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
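The plane-homography step can be sketched with the direct linear transform (DLT): four detected corners and their rectified targets determine the 3x3 homography. This is a minimal sketch of that one step, not the paper's full corner-detection and segmentation pipeline.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 plane homography mapping src -> dst from four or
    more point correspondences via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)        # null vector = homography up to scale
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2-D point (homogeneous normalization)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Warping every pixel of the distorted image through the estimated `H` produces the rectified image.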
EFFECTIVE INTEREST REGION ESTIMATION MODEL TO REPRESENT CORNERS FOR IMAGE (sipij)
One of the most important steps in describing local features is to estimate the interest region around the feature location, so as to achieve invariance against image transformations. The pixels inside the interest region are used to build the descriptor that represents the feature. Estimating the interest region around a corner location is a fundamental step in describing the corner feature, but the process is challenging under varying image conditions. Most corner detectors derive appropriate scales to estimate the region used to build descriptors. In our approach, we propose a new local-maxima-based interest region detection method. This region estimation method can be used to build descriptors that represent corners. We performed a comparative analysis of feature-point matching using recent corner detectors, and the results show that our method achieves better precision and recall than existing methods.
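The local-maxima idea can be illustrated on a corner-response map: keep positions that are strict maxima of their neighborhood and exceed a threshold. The 3x3 neighborhood and the threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def local_maxima(response, threshold=0.0):
    """Return (row, col) positions that are strict 3x3 local maxima of a
    corner-response map and exceed a threshold; a minimal stand-in for
    the local-maxima-based region detection described above."""
    R = np.asarray(response, float)
    peaks = []
    for i in range(1, R.shape[0] - 1):
        for j in range(1, R.shape[1] - 1):
            patch = R[i - 1:i + 2, j - 1:j + 2]
            # Strict maximum: no other cell in the patch ties the center.
            if R[i, j] > threshold and R[i, j] == patch.max() \
                    and (patch == R[i, j]).sum() == 1:
                peaks.append((i, j))
    return peaks
```

Each detected peak would then anchor an interest region whose pixels feed the descriptor.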
Analytic parametric equations of log-aesthetic curves in terms of incomplete ... (Rushan Ziatdinov)
Rushan Ziatdinov, Norimasa Yoshida, Tae-wan Kim, 2012. Analytic parametric equations of log-aesthetic curves in terms of incomplete gamma functions, Computer Aided Geometric Design 29(2), 129-140.
Log-aesthetic curves (LACs) have recently been developed to meet the requirements of industrial design for visually pleasing shapes. LACs are defined in terms of definite integrals, and adaptive Gaussian quadrature can be used to obtain curve segments. To date, these integrals have only been evaluated analytically for restricted values (0, 1, 2) of the shape parameter α. We present parametric equations expressed in terms of incomplete gamma functions, which allow us to find an exact analytic representation of a curve segment for any real value of α. The computation time for generating an LAC segment using the incomplete gamma functions is up to 13 times faster than direct numerical integration. Our equations are generalizations of the well-known Cornu, Nielsen, and logarithmic spirals, and of involutes of a circle.
GRAPH PARTITIONING FOR IMAGE SEGMENTATION USING ISOPERIMETRIC APPROACH: A REVIEW (Drm Kapoor)
Graph cut is a fast method for performing binary segmentation. Graph cuts have proved to be a useful multidimensional optimization tool that can enforce piecewise smoothness while preserving relevant sharp discontinuities. This paper is mainly intended as an application of the isoperimetric algorithm from graph theory to image segmentation, together with an analysis of the parameters used in the algorithm, such as the weight generation, the parameters regulating the execution, the connectivity parameter, the cutoff, and the number of recursions. We present some basic background on graph cuts and discuss major theoretical results, which help to reveal both the strengths and the limitations of this surprisingly versatile combinatorial algorithm.
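The core of the isoperimetric algorithm (in the Grady-Schwartz formulation) reduces to one sparse linear solve: ground a node, solve the reduced Laplacian system, and threshold the resulting potentials. A small dense-matrix sketch, with a median threshold used as a simplifying assumption in place of the ratio-minimizing cut:

```python
import numpy as np

def isoperimetric_partition(W, ground=0):
    """Isoperimetric graph partition sketch: ground one node, solve the
    reduced Laplacian system L0 x0 = d0, and threshold the potentials
    at their median (the full algorithm sweeps thresholds to minimize
    the isoperimetric ratio)."""
    W = np.asarray(W, float)
    d = W.sum(axis=1)                      # node degrees
    L = np.diag(d) - W                     # graph Laplacian
    keep = [i for i in range(len(d)) if i != ground]
    L0 = L[np.ix_(keep, keep)]             # Laplacian with ground removed
    x0 = np.linalg.solve(L0, d[keep])
    x = np.zeros(len(d))
    x[keep] = x0                           # ground node keeps potential 0
    return x < np.median(x)                # True = ground-side segment
```

On a graph made of two dense clusters joined by one weak edge, nodes in the ground's cluster receive low potentials and end up on the ground side of the cut.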
Method of Fracture Surface Matching Based on Mathematical Statistics (IJRESJOURNAL)
ABSTRACT: Fracture surface matching is an important part of point cloud registration. In this paper, a method of fracture surface matching based on mathematical statistics is proposed. We reconstruct a coordinate system for the fracture-surface points and analyze the characteristics of the point cloud in the new coordinate system using the theory of mathematical statistics. The general distribution of the points is determined. The method can establish the matching relation between point clouds.
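One common way to "reconstruct a coordinate system" for a patch of surface points is a principal-axes frame from the covariance of the points; the abstract does not give the paper's exact construction, so this is an illustrative stand-in:

```python
import numpy as np

def local_frame(points):
    """Build a coordinate frame for surface points from the eigenvectors
    of their covariance matrix (PCA): the smallest-variance axis
    approximates the surface normal, the others span the surface."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    cov = np.cov((P - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    # Column 0: normal direction (smallest variance); columns 1-2: in-plane axes.
    return centroid, eigvecs
```

Expressing both fracture surfaces in such frames makes their point distributions directly comparable.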
ROLE OF HYBRID LEVEL SET IN FETAL CONTOUR EXTRACTION (sipij)
Image processing technologies may be employed for quicker and more accurate diagnosis in the analysis and feature extraction of medical images. Here, an existing level set algorithm is modified and employed for extracting the contour of a fetus in an image. In the traditional approach, fetal parameters are extracted manually from ultrasound images; an automatic technique is highly desirable for obtaining fetal biometric measurements because of problems with the traditional approach such as lack of consistency and accuracy. The proposed approach utilizes global and local region information for fetal contour extraction from ultrasonic images. The main goal of this research is to develop a new methodology to aid the analysis and feature extraction.
Using Generic Image Processing Operations to Detect a Calibration Grid (Jan Wedekind)
Camera calibration is an important problem in 3D computer vision. The problem of determining the camera parameters has been studied extensively. However, the algorithms for determining the required correspondences are either semi-automatic (i.e. they require user interaction) or they involve custom algorithms that are difficult to implement.
We present a robust algorithm for detecting the corners of a calibration grid and assigning the correct correspondences for calibration. The solution is based on generic image processing operations so that it can be implemented quickly. The algorithm is limited to distortion-free cameras, but it could potentially be extended to deal with camera distortion as well. We also present a corner detector based on steerable filters that is particularly suited to detecting the corners of a calibration grid.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
The International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
REGION CLASSIFICATION BASED IMAGE DENOISING USING SHEARLET AND WAVELET TRANSF... (csandit)
This paper proposes a neural-network-based region classification technique that classifies regions in an image into two classes: textures and homogeneous regions. The classification is based on training a neural network with statistical parameters of the regions of interest. This classification method is applied to image denoising by applying different transforms to the two classes: texture is denoised by shearlets, while homogeneous regions are denoised by wavelets. The denoised results show better performance than either transform applied independently. The proposed algorithm successfully reduces the mean square error of the denoised result and provides perceptually good results.
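Both the wavelet and the shearlet branches of such a pipeline share the same coefficient-shrinkage step; the transforms themselves are omitted here, and soft thresholding is assumed as the shrinkage rule for illustration:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold transform coefficients: shrink magnitudes toward
    zero by t and zero out anything smaller; the denoising step applied
    to wavelet or shearlet coefficients before inverse transforming."""
    c = np.asarray(coeffs, float)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```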
Similar to Estimating the Crest Lines on Polygonal Mesh Models by an Automatic Threshold
In the era of data-driven warfare, the integration of big data and machine learning (ML) techniques has
become paramount for enhancing defence capabilities. This research report delves into the applications of
big data and ML in the defence sector, exploring their potential to revolutionize intelligence gathering,
strategic decision-making, and operational efficiency. By leveraging vast amounts of data and advanced
algorithms, these technologies offer unprecedented opportunities for threat detection, predictive analysis,
and optimized resource allocation. However, their adoption also raises critical concerns regarding data
privacy, ethical implications, and the potential for misuse. This report aims to provide a comprehensive
understanding of the current state of big data and ML in defence, while examining the challenges and
ethical considerations that must be addressed to ensure responsible and effective implementation.
Cloud computing, one of the most recent innovative developments in the IT world, has been instrumental not only to the success of SMEs but, through their productivity and innovative contribution to the economy, has also made a remarkable contribution to the economic growth of the United States. To this end, this study focuses on how cloud computing technology has impacted economic growth through
SMEs in the United States. Relevant literature connected to the variables of interest in this study was
reviewed, and secondary data was generated and utilized in the analysis section of this paper. The findings
of this paper revealed that there have been meaningful contributions that the usage of virtualization has
made in the commercial dealings of small firms in the United States, and this has also been reflected in the
economic growth of the country. This paper further revealed that as important as cloud-based software is,
some SMEs are still skeptical about how it can help improve their business and increase their bottom line
and hence have failed to adopt it. Apart from the SMEs, some notable large firms in different industries,
including information and educational services, have adopted cloud computing technology and hence
contributed to the economic growth of the United States. Lastly, findings from our inferential statistics
revealed that no discernible change has occurred in innovation between small and big businesses in the
adoption of cloud computing. Both categories of businesses adopt cloud computing in the same way, and
their contribution to the American economy has no significant difference in the usage of virtualization.
Energy-constrained Wireless Sensor Networks (WSNs) have garnered significant research interest in
recent years. Multiple-Input Multiple-Output (MIMO), or Cooperative MIMO, represents a specialized
application of MIMO technology within WSNs. This approach operates effectively, especially in
challenging and resource-constrained environments. By facilitating collaboration among sensor nodes,
Cooperative MIMO enhances reliability, coverage, and energy efficiency in WSN deployments.
Consequently, MIMO finds application in diverse WSN scenarios, spanning environmental monitoring,
industrial automation, and healthcare applications.
AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to the fields of Computer Science and Information Systems. IJCSIT is an open-access, peer-reviewed scientific journal published in electronic as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge among its readers and to be a reference publication. IJCSIT publishes original research papers and review papers, as well as auxiliary material such as case studies and technical reports.
Demand for car parking grows with the number of car users, and with the increased use of smartphones and their applications, users prefer mobile-phone-based solutions. This paper proposes a Smart Parking Management System (SPMS) built on Arduino components, an Android application, and IoT. It gives the client the ability to check available parking spaces and reserve a parking spot. IR sensors are used to determine whether a parking space is available; this occupancy data is transmitted via a Wi-Fi module to the server and retrieved by the mobile application, which offers many options attractively and at no cost to users, and lets the user check reservation details. With IoT technology, the smart parking system can be connected wirelessly to easily track available locations.
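The server-side reservation flow described above can be sketched as a small state model; the slot identifiers and the three-state model (free / reserved / occupied) are illustrative assumptions, not the SPMS design:

```python
class ParkingLot:
    """Server-side model of the check-availability / reserve / sensor-update
    flow described above (slot IDs and states are illustrative)."""

    def __init__(self, slots):
        self.state = {s: "free" for s in slots}

    def available(self):
        """Slots the mobile app would list as reservable."""
        return [s for s, st in self.state.items() if st == "free"]

    def reserve(self, slot):
        """Reserve a slot; fails if it is occupied, reserved, or unknown."""
        if self.state.get(slot) != "free":
            return False
        self.state[slot] = "reserved"
        return True

    def sensor_update(self, slot, occupied):
        """Apply an IR-sensor reading relayed over Wi-Fi."""
        self.state[slot] = "occupied" if occupied else "free"
```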
Welcome to AIRCC's International Journal of Computer Science and Information Technology (IJCSIT), your gateway to the latest advancements in the dynamic fields of Computer Science and Information Systems.
Computer-Assisted Language Learning (CALL) systems are computer-based tutoring systems that address linguistic skills. Adding intelligence to such systems is mainly based on using Natural Language Processing (NLP) tools to diagnose student errors, especially in grammar. However, most such systems do not model student competence in linguistic skills, especially for the Arabic language. In this paper, we deal with the basic grammar concepts of the Arabic language taught in the fourth grade of elementary school in Egypt, through the Arabic Grammar Trainer (AGTrainer), an intelligent CALL system. The implemented system trains students through questions that address the different concepts and have different difficulty levels. The constraint-based student modeling (CBSM) technique is used as a short-term student model: CBSM defines the different grammar skills at a fine-grained level through the defined skill structures. The main contribution of this paper is the hierarchical representation of the system's basic grammar skills as domain knowledge. That representation is used as a mechanism for efficiently checking constraints in order to model the student's knowledge, diagnose errors, and identify their causes. In addition, the satisfied constraints, the number of trials the student takes to answer each question, and a fuzzy-logic decision system are used to determine the student's learning level for each lesson as a long-term model. The evaluation results showed the system's effectiveness for learning, as well as the satisfaction of students and teachers with its features and capabilities.
In the realm of computer security, the importance of efficient and reliable user authentication methods has
become increasingly critical. This paper examines the potential of mouse movement dynamics as a
consistent metric for continuous authentication. By analysing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge," we investigate the distinctive
behavioral patterns inherent in high-intensity and low-intensity UI interactions. The study extends beyond
conventional methodologies by employing a range of machine learning models. These models are carefully
selected to assess their effectiveness in capturing and interpreting the subtleties of user behavior as
reflected in their mouse movements. This multifaceted approach allows for a more nuanced and
comprehensive understanding of user interaction patterns. Our findings reveal that mouse movement
dynamics can serve as a reliable indicator for continuous user authentication. The diverse machine
learning models employed in this study demonstrate competent performance in user verification, marking
an improvement over previous methods used in this field. This research contributes to the ongoing efforts to
enhance computer security and highlights the potential of leveraging user behavior, specifically mouse
dynamics, in developing robust authentication systems.
This research aims to further understanding in the field of continuous authentication using behavioural
biometrics. We are contributing a novel dataset that encompasses the gesture data of 15 users playing
Minecraft with a Samsung Tablet, each for a duration of 15 minutes. Utilizing this dataset, we employed
machine learning (ML) binary classifiers, namely Random Forest (RF), K-Nearest Neighbors (KNN), and
Support Vector Classifier (SVC), to determine the authenticity of specific user actions. Our most robust
model was SVC, which achieved an average accuracy of approximately 90%, demonstrating that touch
dynamics can effectively distinguish users. However, further studies are needed to make it a viable option
for authentication systems. You can access our dataset at the following
link: https://github.com/AuthenTech2023/authentech-repo
This paper discusses the capabilities and limitations of GPT-3, a state-of-the-art language model, in the
context of text understanding. We begin by describing the architecture and training process of GPT-3, and
provide an overview of its impressive performance across a wide range of natural language processing
tasks, such as language translation, question-answering, and text completion. Throughout this research
project, a summarizing tool was also created to help us retrieve content from any type of document,
specifically IELTS Reading Test data in this project. We also aimed to improve the accuracy of the
summarizing, as well as question-answering capabilities of GPT-3 via long text
Image segmentation and classification tasks in computer vision have proven to be highly effective using neural networks, specifically Convolutional Neural Networks (CNNs). These tasks have numerous
practical applications, such as in medical imaging, autonomous driving, and surveillance. CNNs are capable
of learning complex features directly from images and achieving outstanding performance across several
datasets. In this work, we have utilized three different datasets to investigate the efficacy of various pre-processing and classification techniques in accurately segmenting and classifying different structures
within MRI and natural images. We have utilized both sample gradient and Canny Edge Detection
methods for pre-processing, and K-means clustering has been applied to segment the images. Image
augmentation improves the size and diversity of datasets for training models for image classification.
This work highlights transfer learning's effectiveness in image classification using CNNs and VGG16, which
provides insights into the selection of pre-trained models and hyperparameters for optimal performance.
We have proposed a comprehensive approach for image segmentation and classification, incorporating pre-processing techniques, the K-means algorithm for segmentation, and deep learning models such
as CNN and VGG16 for classification.
The security of Electric Vehicle (EV) charging has gained momentum after the increase in the EV adoption
in the past few years. Mobile applications have been integrated into EV charging systems that mainly use a
cloud-based platform to host their services and data. Like many complex systems, cloud systems are
susceptible to cyberattacks if proper measures are not taken by the organization to secure them. In this
paper, we explore the security of key components in the EV charging infrastructure, including the mobile
application and its cloud service. We conducted an experiment that initiated a Man in the Middle attack
between an EV app and its cloud services. Our results showed that it is possible to launch attacks against
the connected infrastructure by taking advantage of vulnerabilities that may have substantial economic and
operational ramifications on the EV charging ecosystem. We conclude by providing mitigation suggestions
and future research directions.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to fields of Computer Science and Information Systems. The IJCSIT is a open access peer-reviewed scientific journal published in electronic form as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
This paper describes the outcome of an attempt to implement the same transitive closure (TC) algorithm
for Apache MapReduce running on different Apache Hadoop distributions. Apache MapReduce is a
software framework used with Apache Hadoop, which has become the de facto standard platform for
processing and storing large amounts of data in a distributed computing environment. The research
presented here focuses on the variations observed among the results of an efficient iterative transitive
closure algorithm when run against different distributed environments. The results from these comparisons
were validated against the benchmark results from OYSTER, an open source Entity Resolution system. The
experiment results highlighted the inconsistencies that can occur when using the same codebase with
different implementations of MapReduce.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, we celebrate 13 years since the group was created, with articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim in its adoption of Digital Transformation in the Water Industry.
Final project report on grocery store management system.pdf - Kamal Acharya
In today's fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, especially if your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing the various products available, enables registered users to purchase desired products instantly using the Paytm or UPI payment processor (Instant Pay), and also lets them place orders using the Cash on Delivery (Pay Later) option. The project gives Administrators and Managers easy access to view orders placed using the Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart, and also to learn about the technologies used to develop such a website.
This document will discuss each of the underlying technologies used to create and implement an e-commerce website.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on the Binary Heap data structure. It is similar to selection sort: we first find the minimum element and place it at the beginning, then repeat the same process for the remaining elements.
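The heapify and build-heap steps described above can be sketched in Python. This is a minimal illustrative version that uses a max-heap, so the maximum is repeatedly moved to the end of the array rather than the minimum to the front:

```python
def heapify(a, n, i):
    # Sift the element at index i down so the subtree rooted at i
    # satisfies the max-heap property (a[parent] >= a[children]).
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    # Build a max-heap from the unsorted array (bottom-up).
    for i in range(n // 2 - 1, -1, -1):
        heapify(a, n, i)
    # Repeatedly swap the current maximum to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        heapify(a, end, 0)
    return a
```

Building the heap bottom-up costs O(n), and each of the n extractions costs O(log n), giving the usual O(n log n) total.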
Hierarchical Digital Twin of a Naval Power System - Kerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS - veerababupersonal22
It covers CW radar and FMCW radar, range measurement, the IF amplifier and the FMCW altimeter. The CW radar operates using continuous-wave transmission, while the FMCW radar employs frequency-modulated continuous-wave technology. Range measurement is a crucial aspect of radar systems, providing information about the distance to a target. The IF amplifier plays a key role in signal processing, amplifying intermediate-frequency signals for further analysis. The FMCW altimeter utilizes frequency-modulated continuous-wave technology to accurately measure altitude above a reference point.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Estimating the Crest Lines on Polygonal Mesh Models by an Automatic Threshold
International Journal of Computer Science & Information Technology (IJCSIT) Vol 11, No 5, October 2019
DOI: 10.5121/ijcsit.2019.11504
ESTIMATING THE CREST LINES ON POLYGONAL
MESH MODELS BY AN AUTOMATIC THRESHOLD
Zhihong Mao, Ruichao Wang, Fan Liu and Yulin Zhou
Division of Intelligent Manufacturing, Wuyi University, Jiangmen529020, China
ABSTRACT
Crest lines convey the inherent features of a shape. Mathematically, crest lines are described via
extrema of the surface principal curvatures along their corresponding lines of curvature. In this study we
use an automatic threshold estimation technique to extract crest lines. We first compute the principal
curvature and corresponding direction for each vertex in the mesh; then we compute a saliency value as
a linear combination of the maximal absolute curvature and the absolute curvature difference; finally, we
automatically determine the threshold to detect the crest lines according to the saliency value. For
illustrative purposes, we demonstrate our method with several examples.
KEY WORDS
Crest Lines, Polygonal Mesh Models, Curvature
1. INTRODUCTION
Digital geometry processing (DGP) is a new branch of computer graphics that analyses and
processes polygonal mesh models. Estimating crest lines is an important area of DGP.
Many applications in mesh processing and analysis require the detection of crest lines, such as
simplification of 3D models, shape matching and shape editing of 3D models, 3D animation, and
medical imaging. Crest lines convey the inherent features of a shape. Mathematically, crest
lines are described via extrema of the surface principal curvatures along their corresponding
lines of curvature, which are traced by using high-order curvature derivatives.
Recent research in 3D computer graphics has focused on estimating crest lines over triangle
mesh models [1-6]. The basic idea of these papers is first to robustly estimate the surface principal
curvatures and then to identify the crest lines as lines of curvature extremes. However, most 3D
models are collections of discrete triangular facets, which are not continuously parameterized.
Moreover, natural objects have various morphologies, so it is very difficult to find efficient
functions to describe these shapes, which makes it difficult to detect crest lines on a discrete
triangular mesh.
We present an automatic threshold estimation technique for robustly extracting crest lines on
polygonal mesh models, based on the observation that polygonal surface models with features
usually contain many flat regions and a few sharp edges. First, our method computes the
principal curvature and corresponding direction for each vertex of the triangular mesh; then
the method computes the saliency value [7, 8] as a linear combination of the maximal absolute
curvature and the absolute curvature difference, which can identify regions that are different from
their surrounding context and reduce ambiguity in noisy circumstances; finally, the method automatically
determines the threshold to detect the crest lines according to the saliency value.
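The saliency and thresholding stages of this pipeline can be sketched as follows. This is an illustrative skeleton, not the paper's implementation: the per-vertex principal curvatures and the curvature-difference term are assumed precomputed (by any discrete curvature estimator), and a simple percentile cut stands in for the paper's automatic threshold estimate:

```python
def saliency_threshold_detect(k_max, k_min, var, w1=0.4, w2=0.6, top_pct=10):
    """Blend curvature and curvature difference into a saliency value,
    then keep the most salient vertices.

    k_max, k_min: per-vertex principal curvatures (assumed precomputed).
    var: per-vertex absolute curvature difference from the neighbourhood.
    """
    # Saliency = linear combination of the maximal absolute curvature
    # and the absolute curvature difference.
    curv = [max(abs(a), abs(b)) for a, b in zip(k_max, k_min)]
    saliency = [w1 * c + w2 * v for c, v in zip(curv, var)]
    # Automatic threshold placeholder: keep roughly the top_pct% most
    # salient vertices, found by sorting the saliency values.
    cut = sorted(saliency)[max(0, int(len(saliency) * (100 - top_pct) / 100) - 1)]
    return [i for i, s in enumerate(saliency) if s > cut]
```

The weights 0.4 and 0.6 follow the values used later in the paper; the percentile parameter is a hypothetical stand-in for the automatic estimate.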
2. RELATED WORK
Crest lines, sometimes called ridges and valleys or feature lines, are invariant under translation,
rotation and scale. They are the basis of mesh shape processing tasks such as shape recognition,
matching and segmentation. With the continuous improvement of 3D scanning technology,
models become more and more detailed, making crest line extraction techniques increasingly
popular in computer graphics [16, 17, 23]. Lee presented an image-based method to extract crest
lines on a rendered projection image using an edge detection algorithm from image processing [16].
Although this algorithm produces visually pleasing results with lower computational complexity, the accuracy
of the pixel-based representation is lower. Model-based methods extract lines directly on
polygonal mesh models based on curvature, the differential geometric property of the surface.
Briefly, curvature estimation methods can be categorized as follows: 1) normal curvature
approximation methods, where the Weingarten matrix is used to estimate the principal curvatures
and directions [9, 10]; 2) surface (curve) fitting methods, where a polynomial surface (curve) is
fitted to points in a local region [11, 12]; 3) discrete differential geometry methods [13, 14], where
discrete versions of differential geometry theorems, such as the Laplace-Beltrami operator and the
Gauss-Bonnet theorem, are developed and applied to the neighbourhood of each vertex.
Model-based methods now dominate the field of 3D computer graphics. There are a number of line types
that serve to extract the geometric linear features of a surface. The first class is view-dependent
curves. Silhouettes (contours) are curves that are only defined with respect to a viewing
screen. Suggestive contours are contours that would first appear with a minimal change in
viewpoint [18]. Apparent ridges are defined as the ridges of view-dependent curvature, the
variation of the surface normal with respect to a viewing screen plane [6]. View-dependent curves
depend on the viewing direction and produce visually pleasing line drawings, so they are usually
used in non-photorealistic rendering. The second class is view-independent curves, which
depend only on the differential geometric properties of the surface. Many researchers have tried
to depict the shape of a 3D model with high-quality crest lines, but the features are traced
using high-order curvature derivatives and are not easy to detect on a discrete surface. The
most common curves are ridges and valleys, which occur at points of extremal principal
curvature. This paper focuses on the problem of accurately detecting crest lines on surfaces.
Ohtake et al. [2] proposed a method for detecting ridge-valley lines by combining multi-level
implicit surface fitting and finite difference approximations. Because the method involves
computing curvature tensors and their derivatives at each vertex, it is time-consuming. Yoshizawa
[3] detected the crest lines based on a modification of Ohtake's method; it reduces
computation time by estimating the surface derivatives via local polynomial fitting.
Soo-Kyun Kim [4] found ridges and valleys in a discrete surface using a modified MLS
approximation. The algorithm is fast because the modified MLS approximation exploits locality.
Stylianou and Farin [5] presented a reliable, automatic method for extracting crest lines from
triangulated meshes. The algorithm identifies every crest point and then joins them using
region growing and skeletonization.
However, a purely curvature-based metric may not necessarily be a good metric for crest line
extraction. Non-uniform sampling, noise and irregularities of the triangulation make
curvature-based methods produce false or incomplete crest lines. The saliency map [15], which assigns
a saliency value to each image pixel, has done excellent work in image processing: by the saliency
value, salient image edges become distinct from their surroundings. Encouraged by the success of
this method on 2D problems, Lee [7] introduced the idea of mesh saliency as a measure of regional
importance for graphics meshes, and discussed how to apply the saliency value to graphics
applications such as mesh simplification and viewpoint selection. Ran Gal [8] defined salient
geometric features for partial shape matching and similarity by the salient parts of an object. By
the "salience of a part", he captured the object's shape with a small number of parts. He proposed
that the salience of a part depends on (at least) two factors: its size relative to the whole object,
and the number and strength of its curvature changes. Similar to these two methods, we compute a
saliency value for each mesh point in our automatic algorithm.
Other types of curves are not defined as maxima of curvature, but as zero
crossings of some function of curvature. Demarcating curves [19] are the loci of points for which
there is a zero crossing of the curvature in the curvature gradient direction. Demarcating curves
can be viewed as the curves that typically separate ridges and valleys on a 3D surface. Relief edges
[20] are defined as the zero crossings of the normal curvature in the direction perpendicular to the
edge. Compared to view-dependent curves, view-independent curves are locked to the object
surface and do not slide when the viewpoint changes. Moreover, they are purely geometrical and
convey prominent and meaningful information about the shape. Our approach is closely related to
traditional ridges and valleys.
3. PRELIMINARIES
A good introduction to differential geometry can be found in [21]. Here we recall only the notions of
differential geometry useful for introducing the extraction of crest lines on 3D meshes.
Figure 1. Normal section of surface S.
Differential geometry of a 2D manifold embedded in 3D is the study of the intrinsic properties of
the surface. We recall the basic notions of differential geometry required in this paper. The
unit normal N of a surface S at p is the vector perpendicular to S, i.e., to the tangent plane of S
at p. A normal section curve at p is constructed by intersecting S with a plane normal to it, i.e.,
a plane that contains N and a tangent direction T. The curvature of this curve is the normal
curvature of S in the direction T (see Fig. 1).
Figure 2. Comparison of the detection algorithm using saliency values with the algorithm using only
principal curvatures. Left: feature lines detected only by principal curvatures; Right: feature lines
detected by saliency values, a measure of regional importance.
For a smooth surface, the normal curvature in direction V is k(V) = V^T II V, where the symmetric
matrix II is the second fundamental form. The eigenvalues of II are the principal curvature
values (k_max, k_min). The eigenvectors of II are the principal curvature
directions (t_max, t_min). Let e_max and e_min be the derivatives of the principal curvatures k_max, k_min
along their corresponding curvature directions t_max, t_min. Mathematically, crest lines are
described via extrema of the surface principal curvatures along their corresponding lines of
curvature:

e_max = 0, ∂e_max/∂t_max < 0, k_max > |k_min|   (Ridges)   (1)

e_min = 0, ∂e_min/∂t_min > 0, k_min < -|k_max|   (Valleys)   (2)
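On a sampled line of curvature, the ridge condition (1) reduces to locating a local maximum of k_max, i.e., a sign change of its derivative e_max along the line, together with the test k_max > |k_min|. A minimal 1D sketch, using plain finite differences rather than the high-order derivative estimates the paper relies on:

```python
def ridge_indices(k_max_along_line, k_min_along_line):
    # A sample i is a ridge candidate where e_max = dk_max/dt changes
    # sign from positive to negative (a local maximum of k_max, so the
    # second-derivative condition of (1) holds), and k_max > |k_min|.
    idx = []
    k = k_max_along_line
    for i in range(1, len(k) - 1):
        e_before = k[i] - k[i - 1]   # forward difference before i
        e_after = k[i + 1] - k[i]    # forward difference after i
        if e_before > 0 and e_after < 0 and k[i] > abs(k_min_along_line[i]):
            idx.append(i)
    return idx
```

Valleys (condition (2)) would be detected symmetrically, looking for local minima of k_min with k_min < -|k_max|.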
Figure 3. The saliency values capture the interesting features at all perceptually meaningful scales and
reveal the difference between a vertex and its surrounding context. Left: the saliency value of
each mesh point visualized with colour. Right: the maximal principal curvature of each mesh point
visualized with colour.
4. AUTOMATIC METHOD FOR FEATURE DETECTION
4.1 Detect the Crest Points Via Local Polynomial Fitting
In order to achieve an accurate estimation of the crest lines, we use Stylianou's [5] crest point
classification method to determine whether the principal curvature k_max (k_min) at a mesh point
reaches a maximum (minimum) value along its line of curvature.
1. Calculate the intersection of the normal cross section, containing t_max and N, with the
neighborhood polygon of the target point p. Assuming that they intersect at the points a and
b, find the two endpoints of the edge on which each intersection point lies; for example, the
two endpoints, p_i and p_{i+1}, of the edge containing the point a.
2. The principal curvatures at the intersection points a and b are calculated by linear
interpolation. For example, the principal curvature k_max(a) at the point a is obtained from
k_max(p_i) and k_max(p_{i+1}) at the two endpoints p_i and p_{i+1}.
3. If k_max(p)^2 - k_max(a)^2 > 0, k_max(p)^2 - k_max(b)^2 > 0 and k_max(p) > T_max, then p
is a crest point. Here T_max > 0 is the threshold.
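Steps 2 and 3 above can be sketched directly. This is an illustrative sketch that assumes the squared-curvature comparison reconstructed in step 3; the edge parameter t locating the intersection point is supplied by the caller (from step 1's geometric intersection, which is omitted here):

```python
def interp_curvature(k_pi, k_pi1, t):
    # Step 2: linearly interpolate the principal curvature at an
    # intersection point, where t in [0, 1] is its position along
    # the edge (p_i, p_{i+1}).
    return (1.0 - t) * k_pi + t * k_pi1

def is_crest_point(k_p, k_a, k_b, t_max):
    # Step 3: p is a crest point if its squared principal curvature
    # exceeds the interpolated values at both intersection points a, b
    # (a local extremum along the curvature line) and its curvature
    # passes the threshold T_max > 0.
    return (k_p ** 2 - k_a ** 2 > 0
            and k_p ** 2 - k_b ** 2 > 0
            and k_p > t_max)
```

The squaring makes the extremum test apply uniformly to maxima of k_max and minima of k_min.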
Computing the crest points involves the estimation of high-order surface derivatives, so these
surface features are very sensitive to noise and to irregularities of the triangulation (see Fig. 2).
4.2 Modify the Detection Algorithm by Saliency Values
Mesh saliency, a measure of regional importance, can identify regions that are different from their
surrounding context and reduce ambiguity in noisy circumstances. In this paper, the computation
of the saliency value builds on the salience of visual parts proposed by Ran Gal
[8] and the mesh saliency proposed by Lee [7]. Differing from salient parts, we compute a
saliency value for each mesh point to identify mesh points that are different from their surrounding
context. We have modified their algorithms slightly to define a saliency value S as a linear
combination of two terms:
)()( pp VarWCurvWS 21 (5)
Where )(pCurv is the maximal absolute curvature of a point p and )(pVar is the absolute
curvature difference. The first term of equation (5) expresses the saliency of the mesh point. The
second term expresses the degree of the difference between the point and its surrounding context.
We use 0.4 for 1W and 0.6 for 2W . Let )(pk denote the maximal absolute value of the principal
curvatures and ))(( pkG denote the Gaussian-weighted average of )(pk ,
G(k(p), σ) = Σ_{x ∈ neighbor(p)} k(x)·exp[−‖x − p‖²/(2σ²)] / Σ_{x ∈ neighbor(p)} exp[−‖x − p‖²/(2σ²)]    (6)
International Journal of Computer Science & Information Technology (IJCSIT) Vol 11, No 5, October 2019
where σ is a scale factor estimated from e, the average length of the edges of the mesh. We compute the multi-scale difference φK(p) of a vertex p as the absolute difference between the Gaussian-weighted averages G(k(p), ·) computed at two neighboring scales:

φK(p) = |G(k(p), K+1) − G(k(p), K)|    (7)
Var(p) is then accumulated over multiple scales:

Var(p) = Σ_{K=1}^{n} φK(p)    (8)
where n is set to 3 or 4. We define a salient geometric feature point through this measure of regional importance: a point that is salient and interesting compared to its neighborhood. The top-ranked points define the salient geometric features of a given shape, which makes this approach more robust than extracting the crest lines from curvature alone (see Fig.2 and Fig.3).
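The saliency computation described above, a Gaussian-weighted curvature average compared across scales and then blended with the point's own curvature, can be sketched as follows. This is a simplified sketch under our own assumptions: the neighborhood is treated as a fixed point set rather than growing rings, and scale K is read as the multiple K·σ; all names are ours.

```python
import numpy as np

def gaussian_average(k_vals, pts, p, sigma):
    """Gaussian-weighted average G(k(p), sigma) of the maximal absolute
    curvatures k_vals at the neighborhood points pts around vertex p."""
    d2 = np.sum((pts - p) ** 2, axis=1)          # squared distances to p
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian weights
    return float(np.dot(w, k_vals) / np.sum(w))

def saliency(curv_p, k_vals, pts, p, sigma, n=3, w1=0.4, w2=0.6):
    """S(p) = w1*Curv(p) + w2*Var(p), with Var(p) accumulating the
    between-scale differences |G(., (K+1)*sigma) - G(., K*sigma)|
    over K = 1..n."""
    var = sum(abs(gaussian_average(k_vals, pts, p, (K + 1) * sigma)
                  - gaussian_average(k_vals, pts, p, K * sigma))
              for K in range(1, n + 1))
    return w1 * curv_p + w2 * var
```

On a degenerate one-point neighborhood the multi-scale term vanishes and S reduces to w1 times the point's own curvature, which matches the intent of the first term.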
Figure 4. Comparison of the saliency algorithm with different thresholds. Left: top 30% for ridge points, bottom 30% for valley points; Middle: top 20% for ridge points, bottom 25% for valley points; Right: top 8% for ridge points, bottom 15% for valley points.
4.3 An Automatic Crest Point Detection Method
Suppose that the surface points are composed of feature points and flat points. One obvious way to extract the feature points on a mesh is to select a threshold T that separates these mesh points. Because of its intuitive behavior and simplicity of implementation, thresholding enjoys a central position in the extraction of surface feature points. Unfortunately, it is difficult for a user to set the threshold without any a priori knowledge (see Fig.4 and Fig.6).
Our automatic algorithm is based on the observation that polygonal surface models with features usually contain many flat regions and few sharp edges. Sorting the mesh saliency values of all points in ascending order, we find that the values first increase slowly, so that the plot almost follows a flat line; after a certain value, the plot rises steeply. Obviously, the flat regions correspond to the flat part of the plot and the sharp edges correspond to the steep
line part of the plot. So this value is the optimal threshold to extract the feature points (see Fig.5).
Finally, we summarize the automatic crest line extraction algorithm as follows:
1. Estimate the necessary surface curvatures and their derivatives via local polynomial fitting.
2. Compute the saliency values as a measure of regional importance for graphics meshes.
3. Seek the optimal threshold by the analysis mentioned above.
4. Extract the feature points by the threshold.
For step 3, we give the pseudocode as follows:
Procedure SeekThreshold(saliency)
1. Quicksort(saliency)  /* Sort the mesh saliency values of all points. */
2. for i = vertex_num/2 to vertex_num step k
   /* vertex_num is the number of mesh vertices. We first search at a coarse scale, usually with k = vertex_num/200. */
   2.1 m_i = saliency[i] − saliency[i−1];  m_{i−1} = saliency[i−1] − saliency[i−2]
   2.2 if m_i > 1.3·m_{i−1}, then shrink the search range and repeat step 2 at a finer scale until the threshold is found. /* From Fig.5 the plot rises steeply; we used 1.3 for the coefficient. */
4.4 Generation of Crest Lines
Once the crest points have been detected, we need to connect them together. We follow the procedure proposed in [4], with an addition that reduces the fragmentation of the crest lines:
1) Get the optimal threshold T from section 4.3. Flag the vertices whose saliency value is greater than T as the feature point set M1, and the vertices whose saliency value is less than T but greater than 0.8·T as the weak feature point set M2. Define the k-neighborhood of a point p as N(p, k).
2) For a feature point p, examine the points q in N(p, 1). If exactly one point q ∈ M1, connect it to p.
3) If there are two or more points qi ∈ M1, i = 1, ..., n, n ≥ 2, connect p to the vertex qi whose edge forms the smaller dihedral angle with the orientation of the principal curvature.
4) If no point of N(p, 1) is in M1 but at least one point q ∈ M2, and there exists a point k ∈ M1 with k ∈ N(q, 1), then connect p to q, following the rules in steps 2 and 3.
Repeat steps 2 to 4.
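The connection procedure above can be sketched as follows. The data layout (adjacency dictionary, per-vertex principal directions) is our assumption, and we stand in for the smallest-dihedral-angle rule of [4] by picking the candidate edge best aligned with the principal direction at p; the real rule would use the mesh's face normals.

```python
import numpy as np

def connect_crest_points(verts, adjacency, M1, M2, t_max):
    """verts: (n,3) vertex array; adjacency[i]: 1-ring neighbors of i;
    M1/M2: strong/weak feature vertex sets; t_max[i]: unit principal
    direction at vertex i.  Returns a set of undirected crest edges."""
    edges = set()
    for p in M1:
        strong = [q for q in adjacency[p] if q in M1]
        if len(strong) == 1:                       # step 2: unique candidate
            edges.add(tuple(sorted((p, strong[0]))))
        elif len(strong) >= 2:                     # step 3: pick best-aligned
            def alignment(q):
                e = verts[q] - verts[p]
                return abs(np.dot(e / (np.linalg.norm(e) + 1e-12), t_max[p]))
            edges.add(tuple(sorted((p, max(strong, key=alignment)))))
        else:                                      # step 4: bridge via M2
            for q in adjacency[p]:
                if q in M2 and any(k in M1 for k in adjacency[q]):
                    edges.add(tuple(sorted((p, q))))
                    break
    return edges
```

Bridging through weak feature points in step 4 is what reduces fragmentation: a gap of one sub-threshold vertex no longer splits a crest line in two.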
Figure 5. Sorting the mesh saliency values of all points in ascending order gives a plot that first almost follows a flat line and then, after a certain value, rises steeply. Left: 3D model; Right: the plot of the mesh saliency values in ascending order.
5. RESULTS
To evaluate the effectiveness of our automatic mesh saliency method, this section shows results of our algorithm and compares it to the user-specified threshold algorithm. All of our tests were run on a PC with an Intel® Core™ 2 1.73 GHz processor and 1.0 GB of main memory.
Figure 6. Comparison of the saliency algorithm with different thresholds and our automatic detection algorithm. Upper Left: top 25% for ridge points, bottom 30% for valley points; Upper Right: top 15% for ridge points, bottom 20% for valley points; Bottom Left: top 5% for ridge points, bottom 5% for valley points; Bottom Right: the automatic detection algorithm.
Fig.2 and Fig.3 show some results. Fig.2 compares the detection algorithm based on saliency values with the algorithm based on principal curvatures alone. Owing to its locality, the curvature-only method is sensitive to noise and irregularities of the triangulation and usually produces spurious crest lines; the result is shown on the left of Fig.2. Mesh saliency, which measures regional importance at multiple scales in the neighborhood, can reveal the difference between a vertex and its surrounding context, so it extracts the most salient feature points robustly. The result on the right shows that the lines are clearly detected by saliency values. In Fig.3, we
visualize the magnitude of the principal curvature and the saliency value of each point with color on a bunny model. Warm colors correspond to sharp features and cool colors to flat regions. We find that the saliency values differentiate the neck and the legs from their surroundings.
Fig.4 and Fig.6 compare different thresholds. The surface points are composed of feature points and flat points. One obvious way to extract the feature points is to select a threshold T; for example, Top 20% means that the feature points are the 20% of points with the highest saliency values. But an improper threshold produces many spurious crest lines. Polygonal surface models can be roughly divided into two classes: CAD-like models, shown in Fig.6, which usually have large flat regions, and non-CAD-like models, shown in Fig.4, which have many fine details. No unified threshold works for both, so deciding the threshold is a problem for user-specified threshold methods.
Fig.5 analyzes the optimal threshold. On the left is the 3D model; on the right is the corresponding plot of the saliency values in ascending order. The plot first rises slowly, almost following a flat line; after a certain value (flagged with a red circle), it rises steeply. The value at the red circle is the threshold we need. Fig.6 shows results on the fandisk CAD model, on which our automatic method works well. Moreover, our method does not need a user to select the threshold, which is hard to do without any a priori knowledge. Ohtake's method [2, 22] reliably detects ridge-valley structures on surfaces approximated by dense triangle meshes and has become a representative crest-line detection algorithm. Fig.7 shows that our automatic method achieves almost the same precision as Ohtake's method.
6. CONCLUSION
This paper has presented an automatic algorithm for the detection of crest lines on triangular meshes based on the concept of salient geometric features. The utility of saliency values for robustly detecting crest lines on surfaces has been demonstrated. The results show that saliency values effectively capture important shape features.
Figure 7. Our automatic algorithm vs. Ohtake's method. Left: our automatic algorithm; Right: Ohtake's method.
Our algorithm is fully automatic: it selects a good threshold without any user intervention. The results show that it can find an optimal threshold for detecting the crest lines on 3D meshes. In the future we intend to bring data clustering methods into the field of crest line detection.
ACKNOWLEDGEMENT
This study was supported by the Innovation projects of Department of Education of Guangdong
Province, China (NO.2017KTSCX183) and the Funding for Introduced Innovative R&D Team
Program of Jiangmen (Grant No.2018630100090019844).
REFERENCES
[1] Forrester Cole & Kevin Sanik, (2009) “How Well Do Line Drawings Depict Shape?”, ACM
Transaction on Graphics, Vol. 28, No.3, pp43-51.
[2] Ohtake Y. & Belyaev A., (2004) “Ridge-valley Lines on Meshes via Implicit Surface Fitting”, ACM
Transactions on Graphics, Vol. 23, No. 3, pp609-612.
[3] Shin Yoshizawa & Alexander Belyaev, (2005) “Fast and Robust Detection of Crest Lines on
Meshes”, Symposium on Solid and Physical Modeling’05, pp227-232.
[4] Soo-Kyun Kim & Chang-Hun Kim, (2006) “Finding Ridges and Valleys in A Discrete Surface
Using A Modified MLS Approximation”, Computer-Aided Design, Vol. 38, No.2, pp173-180.
[5] Georgios Stylianou & Gerald Farin, (2004) “Crest Lines for Surface Segmentation and Flattening”,
IEEE Transaction on Visualization and Computer Graphics, Vol. 10, No. 5, pp536-543.
[6] Tilke Judd & Fredo Durand, (2007) “Apparent ridges for line drawing”, ACM Transactions on
Graphics, Vol. 26, No. 3, pp19-26.
[7] Chang Ha Lee & Amitabh Varshney, (2005) “Mesh Saliency”. Proceedings of ACM Siggraph’05,
pp659-666.
[8] Ran Gal & Daniel Cohen-Or, (2006) “Salient Geometric Features for Partial Shape Matching and
Similarity”, ACM Transactions on Graphics, Vol. 25, No. 1, pp130-150.
[9] Taubin G., (1995) "Estimating the Tensor of Curvature of a Surface from a Polyhedral Approximation", In Proceedings of the Fifth International Conference on Computer Vision'95, pp902-907.
[10] Sachin Nigam & Vandana Agrawal, (2013) “A Review: Curvature approximation on triangular
meshes”, Int. J. of Engineering science and Innovative Technology, Vol. 2, No. 3, pp330-339
[11] Xunnian Yang & Jiamin Zheng, (2013) “Curvature tensor computation by piecewise surface
interpolation”, Computer Aided Design, Vol. 45, No. 12, pp1639-1650.
[12] Gady Agam & Xiaojing Tang, (2005) "A Sampling Framework for Accurate Curvature Estimation in Discrete Surfaces", IEEE Transactions on Visualization and Computer Graphics, Vol. 11, No. 5, pp573-582.
[13] Meyer M. & Desbrun M., (2003) “Discrete Differential-geometry Operators for Triangulated 2-
manifolds”, In Visualization and Mathematics III’03, pp35-57.
[14] Stupariu & Mihai-Sorin, (2016) “An application of triangle mesh models in detecting patterns of
vegetation”, WSCG’2016, pp87-90.
[15] Chen L. & Xie X., (2003) “A visual attention model for adapting images on small displays”, ACM
Multimedia Systems Journal, Vol. 9, No. 4, pp353-364.
[16] Lee, Y. & Markosian, L., (2007) “Line drawings via abstracted shading”, ACM Transactions on
Graphics, Vol. 26, No. 3, pp1-9.
[17] Chen, J. S.-S. and H.-Y. Feng, (2017) “Idealization of scanning-derived triangle mesh models of
prismatic engineering parts”, International Journal on Interactive Design and Manufacturing, Vol.
11, No. 2, pp205-221.
[18] Decarlo D. & Finkelstein A., (2003) “Suggestive Contours for Conveying Shape”, ACM
Transactions on Graphics, Vol.22, No. 3, pp848-855.
[19] M. Kolomenkin & I. Shimshoni, (2008) “Demarcating curves for shape illustration”, ACM
Transactions on Graphics, Vol.27, No.5, pp157-166.
[20] M. Kolomenkin & I. Shimshoni, (2009) “On Edge Detection on Surface”, 2012 IEEE Conference on
Computer Vision and Pattern Recognition, pp2767-2774.
[21] M. P. Do Carmo (2004) Differential geometry of curves and surfaces, Book, China Machine Press.
[22] A. Belyaev & P.-A. Fayolle, (2013) “Signed Lp-distance fields”, CAD, Vol.45, No. 2, pp523-528.
[23] Y Zhang & G Geng, (2016) “A statistical approach for extraction of feature lines from point clouds”,
Computers & Graphics, Vol. 56, No. 3, pp31-45.
AUTHORS:
Zhihong Mao received his PhD degree at Department of Computer Science and
Engineering, Shanghai Jiao Tong University. He is now a teacher at Division of
Intelligent Manufacturing, Wuyi University, China. His research interests include
computer aided design and Digital Geometry Processing.
Ruichao Wang received his PhD degree at Department of Mechanical and
Automotive Engineering, South China University of Technology. He is now a
teacher at Division of Intelligent Manufacturing, Wuyi University, China. His
research interests include intelligent manufacturing and industrial robot technology, precision numerical control technology, and digital power inverter technology.
Fan Liu was born in 1996. He is now a Master Candidate at Division of
Intelligent Manufacturing, Wuyi University, China. His research interests include
reverse engineering.
Yulin Zhou was born in 1995. She is now a Master Candidate at Division of
Intelligent Manufacturing, Wuyi University, China. Her research interests include
computer aided design.