Since the skeleton represents the topological structure of both the query sketch and the 2D views of a 3D model, this paper proposes a novel sketch-based 3D model retrieval algorithm which utilizes skeleton characteristics as the features to describe the object shape. First, we propose the advanced skeleton strength map (ASSM) algorithm to create the skeleton: it computes the skeleton strength map by isotropic diffusion on the gradient vector field, selects critical points from the skeleton strength map, and connects them by Kruskal's algorithm. Then, we propose a histogram feature comparison algorithm which adopts the radii of the disks at skeleton points and the lengths of skeleton branches to extract the histogram feature, and compares the similarity between two skeletons using the histogram feature matrix of skeleton endpoints. Experimental results demonstrate that our approach, which combines these two algorithms, significantly outperforms several leading sketch-based retrieval approaches.
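The last step of the ASSM pipeline, connecting selected critical points into a tree with Kruskal's algorithm, can be sketched as below. This is a minimal illustration that assumes plain Euclidean distance between critical points as the edge weight; the paper may instead derive weights from the skeleton strength map.

```python
import math

def kruskal_mst(points):
    """Connect 2D critical points into a tree with Kruskal's algorithm.
    Edge weights are Euclidean distances (an assumption made for this
    sketch; the ASSM paper may weight edges differently)."""
    n = len(points)
    # All candidate edges, sorted by increasing weight.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):  # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # joining two components never creates a cycle
            parent[ri] = rj
            mst.append((i, j, w))
    return mst

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
tree = kruskal_mst(pts)
print(len(tree))  # a spanning tree on 4 points has 3 edges
```

A spanning tree built this way is what lets the scattered critical points be read as a connected skeleton.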
GRAPH PARTITIONING FOR IMAGE SEGMENTATION USING ISOPERIMETRIC APPROACH: A REVIEW (Drm Kapoor)
Graph cut is a fast method for performing binary segmentation. Graph cuts have proved to be a useful multidimensional optimization tool which can enforce piecewise smoothness while preserving relevant sharp discontinuities. This paper is mainly intended as an application of the isoperimetric algorithm of graph theory to image segmentation, together with an analysis of the parameters used in the algorithm: the weight-generating function, the parameter that regulates the execution, the connectivity parameter, the cutoff, and the number of recursions. We present some basic background information on graph cuts and discuss major theoretical results, which helped to reveal both the strengths and limitations of this surprisingly versatile combinatorial algorithm.
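The weight-generating step mentioned above typically maps intensity differences between neighbouring pixels to edge weights. A minimal sketch, assuming the common exponential weighting (the reviewed paper's exact function and its beta parameter are not specified here):

```python
import math

def grid_edge_weights(image, beta=1.0):
    """Weights for a 4-connected pixel graph: similar intensities get
    weights near 1, sharp discontinuities get weights near 0. The
    exponential form and beta are common choices, assumed for this
    sketch rather than taken from the reviewed paper."""
    h, w = len(image), len(image[0])
    weights = {}
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):      # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    diff = image[y][x] - image[ny][nx]
                    weights[((y, x), (ny, nx))] = math.exp(-beta * diff * diff)
    return weights

img = [[0.0, 0.0, 1.0],
       [0.0, 0.0, 1.0]]
w = grid_edge_weights(img, beta=5.0)
print(w[((0, 0), (0, 1))])  # identical pixels: weight 1.0
print(w[((0, 1), (0, 2))])  # strong edge: weight exp(-5), near 0
```

Cutting through low-weight edges is what lets the partitioning algorithms preserve sharp discontinuities.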
ICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATION (ijcga)
The task of creating detailed three-dimensional virtual worlds for interactive entertainment software can be simplified by using Constructive Solid Geometry (CSG) techniques. CSG allows artists to combine primitive shapes, visualized through polygons, into complex and believable scenery. Constructive Volume Geometry (CVG) is a superset of CSG that operates on volumetric data, which consists of values recorded at constant intervals in three dimensions of space. To allow volumetric data to be integrated into existing frameworks, indirect visualization is performed by constructing and visualizing polygon meshes corresponding to the implicit surfaces in the volumetric data. The Indirect CVG (ICVG) algebra is introduced, which provides constructive volume geometry operators appropriate to volumetric data that will be indirectly visualized. ICVG includes operations analogous to the union, difference, and intersection operators in the standard CVG algebra, as well as new operations. Additionally, a series of volumetric primitives well suited to indirect visualization is defined.
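In CVG-style algebras, union, intersection, and difference are often expressed as pointwise max/min combinations of scalar (e.g. opacity) fields. The sketch below uses that common formulation with a crisp 0/1 sphere field as the primitive; it illustrates the operator style only, not the ICVG algebra's actual definitions.

```python
# Constructive operators on scalar opacity fields in [0, 1]
# (max/min combination is one common CVG-style choice, assumed here).
def union(f, g):        return lambda p: max(f(p), g(p))
def intersection(f, g): return lambda p: min(f(p), g(p))
def difference(f, g):   return lambda p: min(f(p), 1.0 - g(p))

def sphere(cx, cy, cz, r):
    """Opacity field: 1 inside the sphere, 0 outside. A crisp example;
    real volumetric data would vary continuously."""
    def field(p):
        x, y, z = p
        return 1.0 if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r else 0.0
    return field

a = sphere(0, 0, 0, 1.0)
b = sphere(1, 0, 0, 1.0)
combined = difference(a, b)   # carve sphere b out of sphere a
print(combined((-0.9, 0, 0)))  # in a only: opacity 1.0
print(combined((0.5, 0, 0)))   # in both, so removed: opacity 0.0
```

An indirect visualizer would then extract a polygon mesh from the combined field's implicit surface rather than rendering the volume directly.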
Robust Adaptive Threshold Algorithm based on Kernel Fuzzy Clustering on Image... (cscpconf)
When a thresholding method is used to segment an image, a fixed threshold is not suitable if the background is rough. Here, we propose a new adaptive thresholding method using kernel fuzzy c-means (KFCM). The method requires only one parameter to be selected, and the adaptive threshold surface can be found automatically from the original image. The scheme combines adaptive tracking with morphological filtering, and the KFCM algorithm computes the fuzzy membership values for each pixel. Our method is good for detecting large and small features concurrently. It also denoises efficiently and enhances the responses of image regions with low local contrast so that they can be detected. The efficiency and accuracy of the algorithm are demonstrated by experiments on MR brain images.
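The per-pixel fuzzy memberships at the heart of KFCM can be sketched as follows, using a Gaussian kernel and the standard kernelized membership update. This is one common KFCM formulation under assumed parameters (m, sigma); the paper's exact kernel and update may differ.

```python
import math

def kfcm_memberships(x, centers, m=2.0, sigma=1.0):
    """Fuzzy membership of a scalar pixel value x to each cluster centre,
    using a Gaussian kernel K(a, b) = exp(-(a-b)^2 / sigma^2).
    Assumes x does not coincide exactly with a centre (which would make
    1 - K zero)."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / sigma ** 2)
    # Kernelized distance term, raised to the fuzzifier exponent.
    d = [(1.0 - k(x, v)) ** (-1.0 / (m - 1.0)) for v in centers]
    s = sum(d)
    return [di / s for di in d]

u = kfcm_memberships(0.2, centers=[0.0, 1.0])
print(u)  # closer to centre 0.0, so u[0] > u[1]; memberships sum to 1
```

Thresholding then follows from which cluster (object or background) each pixel belongs to most strongly.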
Hierarchical Vertebral Body Segmentation Using Graph Cuts and Statistical Sha... (IJTET Journal)
Abstract: Bone Mineral Density (BMD) estimation and fracture investigation of the spine bones are restricted to the vertebral bodies (VBs). A contemporary shape- and appearance-based method is proposed to segment VBs in clinical Computed Tomography (CT) images without any user intervention. The proposed approach depends on both image appearance and shape information. Shape knowledge is aggregated from a set of training shapes. Then shape variations are estimated using a statistical shape model which approximates the shape variations of the vertebral bodies and their background in the variability region. To segment a VB, the graph cut method is used to detect the VB region automatically. Detected contours are aligned and a mean shape model is created. The spatial interaction between neighboring pixels is identified. The statistical shape model is used to produce the deformable shape model, and all instances of the shape lie within the current estimate of the mean shape.
Efficient 3D stereo vision stabilization for multi-camera viewpoints (journalBEEI)
In this paper, an algorithm is developed in 3D stereo vision to improve the image stabilization process for multi-camera viewpoints. Accurate, unique matching key-points are found using the Harris-Laplace corner detection method under different photometric changes and geometric transformations in images. The connectivity of correct matching pairs is then improved by minimizing the global error using a spanning tree algorithm, which helps to stabilize randomly positioned camera viewpoints in linear order. With our method, the unique matching key-points are calculated only once. The calculated planar transformation is then applied for real-time video rendering. The proposed algorithm can process more than 200 camera viewpoints within two seconds.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Automatic rectification of perspective distortion from a single image using p... (ijcsa)
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in the field of computer vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and making a perspective correction which is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image. A method based on plane homography and transformation is used to make the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters. The frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
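The core of homography-based correction is mapping each pixel through a 3x3 matrix and dividing out the projective scale. A minimal sketch (the matrix below is illustrative; in the paper it would be estimated from the detected corners):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 plane homography H given as
    row-major nested lists. The example matrix is hypothetical."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w   # divide out the projective scale

# A pure translation expressed as a homography:
H = [[1, 0, 10],
     [0, 1, -5],
     [0, 0, 1]]
print(apply_homography(H, 3, 4))  # -> (13.0, -1.0)
```

Rectification amounts to choosing H so that the distorted quadrilateral of a planar region maps back to a rectangle.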
For the processing of data for applications such as 3D printing, Virtual Reality (VR) and Augmented Reality (AR), there is a need for technology which accurately and quickly analyzes three-dimensional structures, including complicated 3D forms. However, unlike in the 2D case, where there are few data points, there is not yet an established method for processing 3D forms quickly, because the objects that constitute them are complicated and because there are many data points within the space. Generally, when illustrating a complicated form, a method is used whereby an object with the complicated form is generated from several primitive shapes. This method is used in various 3D modelling software packages because the position of each object can be changed intuitively and freely, and because it can be implemented easily in DirectX, Java 3D, OpenGL, etc. In this thesis, it is shown that by using GPGPU (General-Purpose computing on Graphics Processing Units) with a solid-angle algorithm, the inside-outside judgement can be conducted quickly. Specifically, the inside-outside judgement processing time was measured for complicated shapes created from several primitive shapes, as well as for the individual primitive shapes.
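The solid-angle inside-outside judgement can be sketched on the CPU as follows: sum the signed solid angle of every mesh triangle as seen from the query point; for a closed, consistently oriented mesh the total is about 4 pi inside and about 0 outside. This uses the Van Oosterom and Strackee triangle formula; the thesis's GPGPU version would evaluate the same sum in parallel, one thread per triangle or per query point.

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def norm(a): return math.sqrt(dot(a, a))

def triangle_solid_angle(p, a, b, c):
    """Signed solid angle of triangle (a, b, c) seen from point p
    (Van Oosterom & Strackee formula)."""
    r1, r2, r3 = sub(a, p), sub(b, p), sub(c, p)
    n1, n2, n3 = norm(r1), norm(r2), norm(r3)
    num = dot(r1, cross(r2, r3))
    den = (n1 * n2 * n3 + dot(r1, r2) * n3
           + dot(r2, r3) * n1 + dot(r3, r1) * n2)
    return 2.0 * math.atan2(num, den)

def is_inside(p, triangles):
    """Inside a closed, consistently oriented mesh the solid angles sum
    to +/- 4 pi; outside they cancel to about 0."""
    total = sum(triangle_solid_angle(p, *t) for t in triangles)
    return abs(total) > 2.0 * math.pi   # threshold halfway between 0 and 4 pi

# Unit tetrahedron with outward-facing triangles:
v0, v1, v2, v3 = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tetra = [(v0, v2, v1), (v0, v1, v3), (v0, v3, v2), (v1, v2, v3)]
print(is_inside((0.2, 0.2, 0.2), tetra))  # True
print(is_inside((2.0, 2.0, 2.0), tetra))  # False
```

Per-triangle independence is what makes the computation map naturally onto a GPU.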
An iterative morphological decomposition algorithm for reduction of skeleton ... (ijcsit)
Shape representation is an important aspect of image processing and computer vision. There are several skeleton transforms that lead to morphological shape representation algorithms. One of the main problems with these algorithms is selecting the skeleton points that represent the shape component. If the number of skeleton subsets is reduced, then the reconstruction process becomes easier and less time consuming. The present paper proposes a skeleton scheme that selects skeleton points based on the largest shape element, by which the overall number of skeleton subsets is reduced. The present method is applied to various images and is compared with the generalized skeleton transform and the octagon-generating decomposition algorithm.
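The skeleton subsets discussed above follow the classic morphological (Lantuejoul) decomposition: the n-th subset is the n-fold erosion of the shape minus its opening, and dilating each subset n times and taking the union reconstructs the shape exactly. A small sketch on pixel sets with a cross structuring element (the paper's own selection scheme differs; this shows only the baseline decomposition it reduces):

```python
def erode(shape, se):
    return {p for p in shape
            if all((p[0] + dy, p[1] + dx) in shape for dy, dx in se)}

def dilate(shape, se):
    return {(p[0] + dy, p[1] + dx) for p in shape for dy, dx in se}

def skeleton_subsets(shape, se):
    """Lantuejoul decomposition: S_n = erode^n(X) minus its opening.
    The union of n-fold dilated subsets reconstructs X exactly."""
    subsets, current, n = [], set(shape), 0
    while current:
        eroded = erode(current, se)
        subsets.append((n, current - dilate(eroded, se)))
        current, n = eroded, n + 1
    return subsets

CROSS = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}
square = {(y, x) for y in range(3) for x in range(3)}
subsets = skeleton_subsets(square, CROSS)

# Exact reconstruction: dilate each subset n times, then take the union.
recon = set()
for n, s in subsets:
    for _ in range(n):
        s = dilate(s, CROSS)
    recon |= s
print(recon == square)  # True
```

Reducing the number of subsets, as the paper proposes, shrinks both the storage and the number of dilations needed for reconstruction.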
Enhanced target tracking based on mean shift algorithm for satellite imagery (eSAT Journals)
Abstract: Target tracking in high-resolution satellite images is a challenging task in the computer vision field. In this paper we propose a mean-shift-based enhanced target tracking system for high-resolution satellite imagery. In the proposed tracking algorithm, target modeling is done using spectral features of the target object, i.e. the mean and the energy density function. The feature vector space with minimum Euclidean distance is used for predicting the next possible position of the target object in consecutive frames. The proposed tracking algorithm has been tested using two high-resolution databases, i.e. harbor and airport region databases acquired by the WorldView-2 satellite at different times. Performance parameters such as recall, precision and F1 score are also calculated to show the tracking ability of the proposed method in real-time applications, and are compared with the results of the Regional Operator Design based tracking algorithm proposed in [1]. The results show that our proposed method gives relatively better performance than the other tracking algorithms used in satellite imagery. Keywords: Target tracking, Mean shift algorithm, Energy density function, Feature Vector Space, Frame
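The mean shift iteration underlying trackers like this repeatedly replaces the current estimate with the mean of the samples inside a window, converging to the local density mode. A 1D sketch with a flat kernel (real trackers work on 2D image coordinates with kernel-weighted histograms; start must lie within the bandwidth of some sample):

```python
def mean_shift_mode(points, start, bandwidth, iters=50):
    """Shift an estimate to the local density mode: repeatedly replace
    it by the mean of all samples within the bandwidth (flat kernel)."""
    x = start
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        x_new = sum(window) / len(window)   # assumes a non-empty window
        if abs(x_new - x) < 1e-9:           # converged to the mode
            break
        x = x_new
    return x

samples = [1.0, 1.1, 0.9, 1.05, 5.0, 5.2, 4.8]
print(round(mean_shift_mode(samples, start=1.3, bandwidth=0.5), 2))  # -> 1.01
```

In tracking, the "samples" are pixel similarity scores around the previous target position, so the mode found is the target's new location.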
Scalable and efficient cluster based framework for multidimensional indexing (eSAT Journals)
Abstract: Indexing high-dimensional data has utility in many real-world applications; in particular, the information retrieval process is dramatically improved. Existing techniques could overcome the "curse of dimensionality" of high-dimensional data sets by using a technique known as the Vector Approximation File (VA-File), which resulted in sub-optimal performance. Compared with the VA-File, clustering yields a more compact representation of the data set, as it uses inter-dimensional correlations. However, pruning of unwanted clusters is important. The existing pruning techniques, based on bounding rectangles and bounding hyperspheres, have problems in NN search. To overcome this problem, Ramaswamy and Rose proposed an approach known as adaptive cluster distance bounding for high-dimensional indexing, which also includes efficient spatial filtering. In this paper we implement this high-dimensional indexing approach. We built a prototype application for proof of concept. Experimental results are encouraging, and the prototype can be used in real-time applications. Index Terms: Clustering, high-dimensional indexing, similarity measures, and multimedia databases
Application of Multiple Kernel Support Vector Regression for Weld Bead Geomet... (IJECEIAES)
Modelling and prediction of weld bead geometry is an important issue in the robotic GMAW process. This process is a highly non-linear, coupled multivariable system, and the relationship between process parameters and weld bead geometry cannot be defined by an explicit mathematical expression. Therefore, the application of supervised learning algorithms can be useful for this purpose. The support vector machine is a very successful approach to supervised learning. In this approach, a higher degree of accuracy and generalization capability can be obtained by using the multiple kernel learning framework, which is considered a great advantage in the prediction of weld bead geometry due to the high degree of prediction accuracy required. In this paper, a novel approach for modelling and prediction of the weld bead geometry, based on multiple kernel support vector regression analysis, is proposed, which benefits from a high degree of accuracy and generalization capability. This model can be used for proper selection of welding parameters in order to obtain a desired weld bead geometry in the robotic GMAW process.
Fault diagnosis using genetic algorithms and principal curves (eSAT Journals)
Abstract: Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve is a generalization of linear principal component analysis (PCA), introduced by Hastie as a parametric curve that passes satisfactorily through the middle of the data. The existing principal curve algorithms employ the first component of the data as an initial estimate of the principal curve. However, the dependence on this initial line leads to a lack of flexibility, and the final curve is only satisfactory for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve; here, lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
Linearity of Feature Extraction Techniques for Medical Images by using Scale ... (ijtsrd)
In machine learning, pattern recognition and the field of image processing, feature extraction starts from an initial set of measured data and builds derived values intended to be informative and non-redundant, facilitating the subsequent learning and in some cases leading to better human interpretation. Feature extraction is a dimensionality reduction process, where an initial set of raw variables is reduced to more manageable groups. Many data analysis software packages provide for feature extraction and dimension reduction. Determining a subset of the initial features is known as feature selection. Common numerical programming environments are MATLAB, SciLab, NumPy, etc. Ramar S, Keerthiswaran V and Karthik Raj S S, "Linearity of Feature Extraction Techniques for Medical Images by using Scale Invariant Feature Transform", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-3, April 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30358.pdf Paper URL: https://www.ijtsrd.com/engineering/bio-mechanicaland-biomedical-engineering/30358/linearity-of-feature-extraction-techniques-for-medical-images-by-using-scale-invariant-feature-transform/ramar-s
Integration of poses to enhance the shape of the object tracking from a singl... (eSAT Journals)
Abstract: In computer vision, tracking human pose has received growing attention in recent years. The existing methods use multi-view videos and camera calibrations to enhance the shape of the object in the 3D view. In this paper, tracking and partial reconstruction of the shape of the object from a single-view video are addressed. The goal of the proposed integrated method is to detect the movement of a person more accurately in the 2D view. The integrated method is a combination of silhouette-based pose estimation and scene-flow-based pose estimation. The silhouette-based pose estimation is used to enhance the shape of the object for 3D reconstruction, and the scene-flow-based pose estimation is used to capture the size as well as the stability of the object. By integrating these two poses, the accurate shape of the object is calculated from a single-view video. Keywords: Pose Estimation, Optical Flow, Silhouette, Object Reconstruction, 3D Objects
Performance Analysis of CRT for Image Encryption (ijcisjournal)
With the fast advancement of information technology, securing image data transmitted or stored over the internet has become very difficult. An effective method of hiding the details is encryption, so that only authorized persons with the keys can decrypt the image. Because of the inherent features of digital images, such as high data capacity, large redundancy and strong similarities among pixels, conventional encryption algorithms such as AES, DES, 3DES and Blowfish are not applicable for real-time image encryption. This paper presents the performance of CRT for image encryption, to secure the storage and transmission of images over the internet.
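Assuming CRT here stands for the Chinese Remainder Theorem, one simple way it can be applied per pixel is to split each value into residues modulo pairwise-coprime keys and recover it with CRT reconstruction. The moduli below are illustrative keys, and this round-trip only shows the CRT mechanics, not the paper's full encryption scheme.

```python
def crt_encrypt(pixel, moduli):
    """Split a pixel value into residues modulo pairwise-coprime keys."""
    return [pixel % m for m in moduli]

def crt_decrypt(residues, moduli):
    """Recover the pixel with the Chinese Remainder Theorem."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

moduli = [7, 11, 13]   # illustrative coprime keys; product 1001 > 255
cipher = crt_encrypt(200, moduli)
print(cipher)                        # -> [4, 2, 5]
print(crt_decrypt(cipher, moduli))   # -> 200
```

Recovery is exact for any pixel value below the product of the moduli, which is why the keys are chosen so that their product exceeds 255.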
Solving the Pose Ambiguity via a Simple Concentric Circle Constraint (Dr. Amarjeet Singh)
Estimating the pose of objects with a circle feature from images is a basic and important problem in the computer vision community. This paper focuses on the ambiguity problem in pose estimation of a circle feature, and a new method is proposed based on the concentric circle constraint. The pose of a single circle feature can, in general, be determined from its projection in the image plane with a pre-calibrated camera. However, there are generally two possible sets of pose parameters. By introducing the concentric circle constraint, interference from the false solution can be excluded. On the basis of the element at infinity in projective geometry and the Euclidean distance invariant, the cases in which the concentric circles are coplanar and non-coplanar are discussed respectively. Experiments on these two cases are performed to validate the proposed method.
Copy Move Forgery Detection Using GLCM Based Statistical Features (ijcisjournal)
Gray Level Co-occurrence Matrix (GLCM) features have mostly been explored in face recognition and CBIR. Here, the GLCM technique is explored for copy-move forgery detection. GLCMs are extracted from all the images in the database, and statistics such as contrast, correlation, homogeneity and energy are derived. These statistics form the feature vector. A Support Vector Machine (SVM) is trained on all these features, and the authenticity of the image is decided by the SVM classifier. The proposed work is evaluated on the CoMoFoD database; in total, 1200 forged and processed images are tested. The performance of the present work is compared with recent methods.
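The GLCM and two of the statistics named above (contrast and energy) can be sketched directly; correlation and homogeneity follow the same pattern. This is a minimal single-offset version with an assumed horizontal neighbour offset, whereas practical pipelines aggregate several offsets and directions.

```python
def glcm(image, dy, dx, levels):
    """Normalised gray-level co-occurrence matrix for one pixel offset."""
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1   # count the pair
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    n = len(m)
    return sum(m[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(m):
    return sum(v * v for row in m for v in row)

img = [[0, 0, 1],
       [0, 0, 1],
       [2, 2, 2]]
g = glcm(img, 0, 1, levels=3)   # horizontal neighbour offset
print(round(contrast(g), 3))    # -> 0.333
print(round(energy(g), 3))      # -> 0.333
```

Such per-image statistics become the feature vector that the SVM classifier is trained on.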
Tissue Segmentation Methods using 2D Histogram Matching in a Sequence of MR ... (Vladimir Kanchev)
The methodology of the suggested method for tissue segmentation in MR brain images using 2D histogram matching is presented. Each algorithmic step is given in detail and analyzed.
IMPROVEMENTS OF THE ANALYSIS OF HUMAN ACTIVITY USING ACCELERATION RECORD OF E... (sipij)
The use of the Holter electrocardiograph (Holter ECG) is rapidly spreading. It is a wearable electrocardiograph that records 24-hour electrocardiograms in a built-in flash memory, making it possible to detect atrial fibrillation (AF) throughout all-day activities. It is also useful for screening for diseases other than atrial fibrillation and for improving health. It is said that more useful information can be obtained by combining electrocardiography with the analysis of physical activity. For that purpose, the Holter electrocardiograph is equipped with a heart rate sensor and acceleration sensors. If the acceleration data is analysed, we can estimate activities in daily life, such as getting up, eating, walking, using transportation, and sitting. In combination with such activity status, electrocardiographic data can be expected to be more useful.
ESTIMATING THE CREST LINES ON POLYGONAL MESH MODELS BY AN AUTOMATIC THRESHOLD (ijcsit)
Crest lines convey the inherent features of a shape. Mathematically, crest lines are described via the extremes of the surface's principal curvatures along their corresponding lines of curvature. In this study we use an automatic threshold estimation technique to estimate crest lines. We first compute the principal curvature and corresponding direction for each vertex in the mesh; then we compute a saliency value as a linear combination of the maximal absolute curvature and the absolute curvature difference; finally, we automatically determine the threshold for detecting the crest lines according to the saliency value. For illustrative purposes, we demonstrate our method with several examples.
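The saliency-and-threshold step can be sketched as follows. The combination weight alpha and the mean-plus-standard-deviation threshold are assumptions for illustration; the paper's automatic estimator is not specified here, and the principal curvatures would come from a mesh curvature computation.

```python
import math

def crest_saliency(kmax, kmin, alpha=0.5):
    """Per-vertex saliency: a linear combination of the maximal absolute
    principal curvature and the absolute curvature difference
    (alpha is an assumed weighting)."""
    return alpha * abs(kmax) + (1 - alpha) * abs(kmax - kmin)

def auto_threshold(values, k=1.0):
    """One simple automatic threshold: mean plus k standard deviations.
    (The paper's estimator may differ.)"""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return mean + k * std

# Hypothetical (kmax, kmin) principal curvature pairs per vertex:
curvatures = [(0.1, 0.05), (0.12, 0.1), (2.5, -2.0), (0.08, 0.02)]
sal = [crest_saliency(kmax, kmin) for kmax, kmin in curvatures]
crest = [i for i, s in enumerate(sal) if s > auto_threshold(sal)]
print(crest)  # only the high-curvature vertex is selected -> [2]
```

Vertices passing the threshold are then linked along the lines of curvature to form the crest lines.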
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA... (csandit)
Three-dimensional object modelling of real-world objects in a steady state, by means of multiple point cloud (PCL) depth scans taken with a sensing camera, and the application of a smoothing algorithm are suggested in this study. A polygon structure, constituted by the point cloud coordinates (x, y, z) corresponding to the position of the 3D model in space and obtained from nodal points connected by triangulation, is utilized for the demonstration of the 3D models. Gaussian smoothing and the developed methods are applied to the mesh consisting of the merged polygons, and a new mesh simplification and augmentation algorithm is suggested for the 3D modelling. The mesh consisting of merged polygons can be demonstrated in a more compact, smooth and fluent way. This study shows that the applied triangulation and smoothing method for 3D modelling yields fast and robust mesh structures compared to existing methods, and no remeshing is necessary for refinement and reduction.
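Mesh smoothing of this kind moves each vertex toward a weighted average of its neighbours. The sketch below uses uniform Laplacian smoothing as a simple stand-in for the Gaussian smoothing named in the abstract; the step size lam, iteration count, and the toy mesh are all assumptions for illustration.

```python
def smooth(vertices, neighbours, lam=0.5, iters=10):
    """Laplacian smoothing: move each vertex a fraction lam of the way
    toward the average of its neighbours, repeated iters times."""
    vs = [list(v) for v in vertices]
    for _ in range(iters):
        new = []
        for i, v in enumerate(vs):
            nbrs = neighbours[i]
            avg = [sum(vs[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
            new.append([v[k] + lam * (avg[k] - v[k]) for k in range(3)])
        vs = new   # update all vertices simultaneously
    return vs

# A noisy spike (vertex 2) surrounded by flat neighbours:
verts = [(0, 0, 0), (1, 0, 0), (0.5, 0.5, 1.0), (0, 1, 0), (1, 1, 0)]
nbrs = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 3, 4], 3: [0, 2, 4], 4: [1, 2, 3]}
out = smooth(verts, nbrs)
print(round(out[2][2], 3))  # the spike's height shrinks toward the flat plane
```

Because each update is a convex combination of existing positions, the smoothed mesh stays within the original bounding volume, which is part of why no remeshing is needed afterwards.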
Automatic rectification of perspective distortion from a single image using p...ijcsa
Perspective distortion occurs due to the perspective projection of 3D scene on a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging task in the field of Computer Vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and to make perspective correction which is both quantitatively accurate as well as visually pleasing. Corners are detected based on the orientation of the image. A method based on plane homography and transformation is used to make perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters. The frontiers are detected using geometric context based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
For the processing of data in applications such as 3D printing, Virtual Reality (VR) and Augmented Reality (AR), there is a need for technology that accurately and quickly analyzes three-dimensional structures, including complicated 3D forms. However, unlike in 2D situations, where there are few data points, there is not yet an established method for processing 3D forms quickly, because the objects constructing them are complicated and there are many data points within the space. Generally, when illustrating a complicated form, a method is used whereby an object with the complicated form is generated from several primitive shapes. This method is used in various 3D modelling software packages because the position of each object can be changed intuitively and freely, and because it can be easily written in DirectX, Java 3D, OpenGL, etc. In this thesis, it is shown that by using GPGPU (General-Purpose computing on Graphics Processing Units) for an algorithm based on solid angles, the inside-outside judgement can be conducted quickly. Specifically, the processing time of the inside-outside judgement was measured both for complicated shapes created from several primitive shapes and for the individual primitive shapes.
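A CPU-side sketch of the solid-angle inside-outside test is shown below, using the standard Van Oosterom-Strackee formula for the solid angle of a triangle; the thesis's GPGPU parallelization is omitted, and the tetrahedron is a stand-in for a shape built from primitives.

```python
import numpy as np

def triangle_solid_angle(a, b, c):
    """Signed solid angle subtended at the origin by triangle (a, b, c),
    via the Van Oosterom-Strackee formula."""
    la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    num = np.dot(a, np.cross(b, c))
    den = (la * lb * lc + np.dot(a, b) * lc
           + np.dot(b, c) * la + np.dot(c, a) * lb)
    return 2.0 * np.arctan2(num, den)

def is_inside(point, faces):
    """Point-in-solid test: the solid angles of a closed, consistently
    oriented triangle mesh sum to ~4*pi seen from inside, ~0 from outside."""
    total = sum(triangle_solid_angle(*(np.asarray(f, float) - point))
                for f in faces)
    return abs(total) > 2.0 * np.pi   # threshold halfway between 0 and 4*pi

# Unit tetrahedron with outward-facing triangles.
v = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
faces = [(v[0], v[2], v[1]), (v[0], v[1], v[3]),
         (v[0], v[3], v[2]), (v[1], v[2], v[3])]
```

Each face's solid angle is independent of the others, which is what makes the per-face work embarrassingly parallel and hence a natural fit for the GPGPU treatment the thesis measures.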
An iterative morphological decomposition algorithm for reduction of skeleton ...ijcsit
Shape representation is an important aspect of image processing and computer vision. There are several skeleton transforms that lead to morphological shape representation algorithms. One of the main problems with these algorithms lies in selecting the skeleton points that represent each shape component. If the number of skeleton subsets is reduced, the reconstruction process becomes easier and less time consuming. The present paper proposes a skeleton scheme that selects skeleton points based on the largest shape element; by this, the overall number of skeleton subsets is reduced. The present method is applied to various images and is compared with the generalized skeleton transform and the octagon-generating decomposition algorithm.
Enhanced target tracking based on mean shift algorithm for satellite imageryeSAT Journals
Abstract Target tracking in high-resolution satellite images is a challenging task in the computer vision field. In this paper we propose a mean shift algorithm based enhanced target tracking system for high-resolution satellite imagery. In the proposed tracking algorithm, target modeling is done using spectral features of the target object, i.e. mean and energy density function. The feature-vector space with minimum Euclidean distance is used for predicting the next possible position of the target object in consecutive frames. The proposed tracking algorithm has been tested on two high-resolution databases, i.e. harbor and airport region databases acquired by the WorldView-2 satellite at different times. Performance parameters such as recall, precision and F1 score are also calculated to show the tracking ability of the proposed method in real-time applications, and are compared with the results of the Regional Operator Design based tracking algorithm proposed in [1]. The results show that our proposed method gives relatively better performance than the other tracking algorithms used in satellite imagery. Keywords- Target tracking, Mean shift algorithm, Energy density function, Feature Vector Space, Frame
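Stripped of the spectral target model, the core mean-shift iteration reduces to mode seeking: repeatedly move the estimate to the mean of the samples inside a window around it. The 1D samples and flat-kernel bandwidth below are invented for illustration.

```python
import numpy as np

def mean_shift_1d(data, start, bandwidth, iterations=50):
    """Flat-kernel mean shift: shift the estimate to the mean of all
    samples within `bandwidth` of it, converging on a density mode."""
    x = float(start)
    for _ in range(iterations):
        window = data[np.abs(data - x) <= bandwidth]
        if window.size == 0:
            break
        x = window.mean()
    return x

# Samples clustered around 2.0 with one outlier; the shift ignores it.
samples = np.array([1.8, 1.9, 2.0, 2.1, 2.2, 9.0])
mode = mean_shift_1d(samples, start=1.5, bandwidth=0.5)
```

In the tracking setting the same idea runs over image coordinates weighted by feature similarity, so the window climbs toward the most target-like region of the next frame.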
Scalable and efficient cluster based framework for multidimensional indexingeSAT Journals
Abstract Indexing high-dimensional data has its utility in many real-world applications; in particular, the information retrieval process is dramatically improved. Existing techniques could overcome the "Curse of Dimensionality" of high-dimensional data sets by using the Vector Approximation File (VA-File), which resulted in sub-optimal performance. Compared with the VA-File, clustering yields a more compact data set as it exploits inter-dimensional correlations. However, pruning of unwanted clusters is important, and existing pruning techniques based on bounding rectangles or bounding hyperspheres have problems in NN search. To overcome this, Ramaswamy and Rose proposed adaptive cluster distance bounding for high-dimensional indexing, which also includes efficient spatial filtering. In this paper we implement this high-dimensional indexing approach and build a prototype application as a proof of concept. Experimental results are encouraging, and the prototype can be used in real-time applications. Index Terms–Clustering, high dimensional indexing, similarity measures, and multimedia databases
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Application of Multiple Kernel Support Vector Regression for Weld Bead Geomet...IJECEIAES
Modelling and prediction of weld bead geometry is an important issue in the robotic GMAW process. This process is a highly non-linear, coupled multivariable system, and the relationship between process parameters and weld bead geometry cannot be defined by an explicit mathematical expression. Therefore, supervised learning algorithms can be useful for this purpose. The support vector machine is a very successful approach to supervised learning. In this approach, a higher degree of accuracy and generalization capability can be obtained by using the multiple kernel learning framework, which is a great advantage in the prediction of weld bead geometry due to the high degree of prediction accuracy required. In this paper, a novel approach for modelling and prediction of weld bead geometry, based on multiple kernel support vector regression analysis, is proposed, which benefits from a high degree of accuracy and generalization capability. This model can be used for the proper selection of welding parameters in order to obtain a desired weld bead geometry in the robotic GMAW process.
Fault diagnosis using genetic algorithms and principal curveseSAT Journals
Abstract Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve is a generalization of the linear principal component (PCA), introduced by Hastie as a parametric curve that passes satisfactorily through the middle of the data. Existing principal-curve algorithms employ the first component of the data as an initial estimate of the principal curve; however, this dependence on the initial line leads to a lack of flexibility, and the final curve is only satisfactory for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve, in which lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
Linearity of Feature Extraction Techniques for Medical Images by using Scale ...ijtsrd
In Machine Learning, Pattern Recognition and the field of image processing, Feature Extraction starts from an initial set of measured data and builds derived values intended to be informative and non-redundant, facilitating the subsequent learning steps and in some cases leading to better human interpretation. Feature Extraction is a dimensionality reduction process, where an initial set of raw variables is reduced to more manageable groups. Many data analysis software packages provide for feature extraction and dimension reduction; determining a subset of the initial features is also known as feature selection. Common numerical programming environments are MATLAB, SciLab, NumPy, etc. Ramar S | Keerthiswaran V | Karthik Raj S S "Linearity of Feature Extraction Techniques for Medical Images by using Scale Invariant Feature Transform" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-3 , April 2020, URL: https://www.ijtsrd.com/papers/ijtsrd30358.pdf Paper Url :https://www.ijtsrd.com/engineering/bio-mechanicaland-biomedical-engineering/30358/linearity-of-feature-extraction-techniques-for-medical-images-by-using-scale-invariant-feature-transform/ramar-s
Integration of poses to enhance the shape of the object tracking from a singl...eSAT Journals
Abstract In computer vision, tracking human pose has received growing attention in recent years. Existing methods use multi-view videos and camera calibration to enhance the shape of the object in 3D view. In this paper, tracking and partial reconstruction of the shape of the object from a single-view video is addressed. The goal of the proposed integrated method is to detect the movement of a person more accurately in 2D view. The integrated method is a combination of silhouette-based pose estimation and scene-flow-based pose estimation: the silhouette-based pose estimation is used to enhance the shape of the object for 3D reconstruction, while the scene-flow-based pose estimation is used to capture the size as well as the stability of the object. By integrating these two poses, the accurate shape of the object is calculated from a single-view video. Keywords: Pose Estimation, Optical Flow, Silhouette, Object Reconstruction, 3D Objects
Performance Analysis of CRT for Image Encryption ijcisjournal
With the fast advancements of information technology, the security of image data transmitted or stored over
internet is become very difficult. To hide the details, an effective method is encryption, so that only
authorized persons can decrypt the image with the keys available. Since the default features of digital
image such as high capacity data, large redundancy and large similarities among pixels, the conventional
encryption algorithms such as AES, , DES, 3DES, and Blow Fish, are not applicable for real time image
encryption. This paper presents the performance of CRT for image encryption to secure storage and
transmission of image over internet.
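Assuming CRT here denotes the Chinese Remainder Theorem, the underlying mechanics can be sketched as splitting each pixel value into residues modulo pairwise-coprime keys and reconstructing it on decryption; the moduli below are hypothetical keys, and a real scheme would add diffusion and permutation stages on top.

```python
from math import prod

def crt_encrypt(value, moduli):
    """Split a value into residues modulo pairwise-coprime keys."""
    return [value % m for m in moduli]

def crt_decrypt(residues, moduli):
    """Reconstruct the value with the Chinese Remainder Theorem."""
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return total % M

# Pixel intensities fit in 0..255, so 16 * 17 = 272 > 255 gives enough range.
moduli = [16, 17]
cipher = [crt_encrypt(p, moduli) for p in [0, 37, 255]]
plain = [crt_decrypt(c, moduli) for c in cipher]
```

Because the moduli are coprime, reconstruction is exact for any value below their product, which is why pairs like (16, 17) suffice for 8-bit pixels.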
Solving the Pose Ambiguity via a Simple Concentric Circle ConstraintDr. Amarjeet Singh
Estimating the pose of objects with circle features from images is a basic and important question in the computer vision community. This paper focuses on the ambiguity problem in pose estimation of a circle feature, and a new method is proposed based on a concentric circle constraint. The pose of a single circle feature can, in general, be determined from its projection in the image plane with a pre-calibrated camera; however, there are generally two possible sets of pose parameters. By introducing the concentric circle constraint, interference from the false solution can be excluded. On the basis of the element at infinity in projective geometry and the Euclidean distance invariant, the cases in which the concentric circles are coplanar and non-coplanar are discussed respectively. Experiments on these two cases are performed to validate the proposed method.
Copy Move Forgery Detection Using GLCM Based Statistical Features ijcisjournal
Gray Level Co-occurrence Matrix (GLCM) features are mostly explored in Face Recognition and CBIR; the GLCM technique is explored here for Copy-Move Forgery Detection. GLCMs are extracted from all the images in the database, and statistics such as contrast, correlation, homogeneity and energy are derived. These statistics form the feature vector. A Support Vector Machine (SVM) is trained on all these features, and the authenticity of an image is decided by the SVM classifier. The proposed work is evaluated on the CoMoFoD database, in which a total of 1200 forged and processed images are tested. The performance of the present work is compared with recent methods.
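Most of the GLCM statistics named above (here contrast, energy and homogeneity; correlation is omitted) can be sketched in a few lines of plain NumPy; the single horizontal offset, the grey-level count and the toy image are illustrative choices, not the paper's configuration.

```python
import numpy as np

def glcm(img, levels, dy=0, dx=1):
    """Grey Level Co-occurrence Matrix for one pixel offset (dy, dx),
    normalised to a joint probability table."""
    m = np.zeros((levels, levels), float)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, energy and homogeneity statistics of a normalised GLCM."""
    idx = np.arange(p.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# Toy 4-level image made of four flat blocks.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
contrast, energy, homog = glcm_features(glcm(img, levels=4))
```

Concatenating these statistics over several offsets yields the feature vector on which the SVM classifier is trained.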
Tissue Segmentation Methods using 2D Histogram Matching in a Sequence of MR ...Vladimir Kanchev
The methodology of the suggested method for tissue segmentation in MR brain images using 2D histogram matching is presented. Each algorithmic step is given in detail and analyzed.
IMPROVEMENTS OF THE ANALYSIS OF HUMAN ACTIVITY USING ACCELERATION RECORD OF E...sipij
The use of the Holter electrocardiograph (Holter ECG) is rapidly spreading. It is a wearable electrocardiograph that records 24-hour electrocardiograms in a built-in flash memory, making it possible to detect atrial fibrillation (AF) throughout all-day activities. It is also useful for screening for diseases other than atrial fibrillation and for improving health. It is said that more useful information can be obtained by combining electrocardiography with the analysis of physical activity. For that purpose, the Holter electrocardiograph is equipped with a heart rate sensor and acceleration sensors. If the acceleration data is analysed, we can estimate activities in daily life, such as getting up, eating, walking, using transportation, and sitting. In combination with such activity status, electrocardiographic data can be expected to be even more useful.
ESTIMATING THE CREST LINES ON POLYGONAL MESH MODELS BY AN AUTOMATIC THRESHOLDijcsit
Crest lines convey the inherent features of a shape; mathematically, crest lines are described via the extremes of the surface principal curvatures along their corresponding lines of curvature. In this study we used an automatic threshold estimation technique to estimate crest lines. We first computed the principal curvature and corresponding direction for each vertex in the mesh; then we computed a saliency value as a linear combination of the maximal absolute curvature and the absolute curvature difference; finally, we automatically determined the threshold to detect the crest lines according to the saliency value. For illustrative purposes, we demonstrated our method with several examples.
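The saliency computation can be sketched as follows; the combination weights and the mean-plus-one-standard-deviation threshold rule are stand-ins, since the paper derives its own automatic threshold estimator.

```python
import numpy as np

def crest_saliency(k_max, k_min, a=1.0, b=1.0):
    """Per-vertex saliency as a linear combination of the maximal absolute
    principal curvature and the absolute curvature difference."""
    return a * np.abs(k_max) + b * np.abs(k_max - k_min)

def auto_threshold(saliency):
    """One simple automatic rule: mean plus one standard deviation.
    (A stand-in for the paper's own threshold estimator.)"""
    return saliency.mean() + saliency.std()

# Five vertices, two of which sit on a sharp ridge.
k_max = np.array([0.1, 0.2, 5.0, 0.15, 4.8])
k_min = np.array([0.0, 0.1, -4.0, 0.05, -3.9])
s = crest_saliency(k_max, k_min)
crest = s > auto_threshold(s)
```

Vertices whose saliency exceeds the automatic threshold are kept as crest-line candidates; here only the two ridge vertices survive.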
2D Shape Reconstruction Based on Combined Skeleton-Boundary FeaturesCSCJournals
Reconstructing a shape into a meaningful representation plays a strong role in shape-related applications. Motivated by recent studies in human visual perception discussing the importance of certain shape-boundary features as well as features of the shape area, the approach utilizes properties of the shape skeleton based on symmetry axes, combined with boundary features based on curvature, to determine protrusion strength. The main contribution of this paper is the combination of skeleton and boundary information by deploying the symmetry-curvature duality method to simulate human perception, building on results of research in visual perception. The experiments directly compare our algorithm with experiments on human subjects and show that the proposed approach meets human perceptual intuition. In comparison to existing methods, our method gives a perceptually more reasonable and stable result. Furthermore, noisy shape reconstruction demonstrates the robustness of our method, and experiments on different data sets demonstrate the invariance of the combined skeleton-boundary representation.
VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKINGcsandit
In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of its constituent parts. The proposed approach combines local information of individual body parts with spatial constraints imposed by neighboring parts. The movement of the relative parts of the articulated body is modeled with local displacement information from the Markov network and global information from neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked, the number of Monte Carlo cycles, etc.) on system accuracy and show that our variational Monte Carlo approach achieves better efficiency and effectiveness compared to other methods on a number of real-time video datasets containing single targets.
Classified 3D Model Retrieval Based on Cascaded Fusion of Local Descriptors ijcga
One of the core tasks in achieving fast and accurate retrieval results in a content-based 3D search and retrieval system is to determine an efficient and effective method for matching similarities between 3D models. In this paper, "cascaded fusion of local descriptors" is proposed for efficient retrieval of classified 3D models, based on a 2D coloured-logo retrieval methodology, suitably modified for the 3D search and retrieval tasks that are widely used in the augmented reality (AR) and virtual reality (VR) fields. Initially, features are extracted from key points using different state-of-the-art local descriptor algorithms and then joined to constitute the feature tuple for the respective key point. Additionally, a feature vocabulary is created for each descriptor, and the tuples are mapped to the respective vocabularies using distance functions applied among the newly created tuples of each point cloud. Subsequently, an inverted index table is formed that maps the 3D models to each tuple. Therefore, for every query 3D model only the corresponding 3D models are retrieved, as these were previously mapped in the inverted index table. Finally, by comparing the frequency of appearance of local features against the first vocabulary in the retrieved list, the final re-ranked list of the most similar 3D models is produced.
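The inverted-index retrieval step can be sketched with plain dictionaries: each quantised descriptor word maps to the models containing it, and a query model retrieves only the models it shares words with. The model names and vocabulary word ids below are hypothetical.

```python
from collections import defaultdict

def build_inverted_index(model_words):
    """Map each quantised descriptor word to the 3D models containing it."""
    index = defaultdict(set)
    for model, words in model_words.items():
        for w in words:
            index[w].add(model)
    return index

def retrieve(index, query_words):
    """Candidate models: any model sharing at least one word with the
    query, ranked by the number of shared words."""
    scores = defaultdict(int)
    for w in query_words:
        for model in index.get(w, ()):
            scores[model] += 1
    return sorted(scores, key=scores.get, reverse=True)

index = build_inverted_index({
    "chair": {3, 17, 42},
    "table": {17, 42, 99},
    "lamp": {5, 8},
})
ranked = retrieve(index, {17, 42, 99})
```

The point of the index is that models with no word in common with the query (here "lamp") are never even scored, which is what makes the cascaded retrieval fast.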
A Density Control Based Adaptive Hexahedral Mesh Generation Algorithmijeei-iaes
A density control based adaptive hexahedral mesh generation algorithm for three-dimensional models is presented in this paper. The first step of this algorithm is to identify the characteristic boundary of the solid model to be meshed. Secondly, the refinement fields are constructed and modified according to the conformal refinement templates, and used as a metric to generate an initial grid structure. Thirdly, a jagged core mesh is generated by removing all the elements exterior to the solid model. Fourthly, all of the surface nodes of the jagged core mesh are matched to the surfaces of the model through a node projection process. Finally, mesh quality attributes such as topology and shape are improved by using corresponding optimization techniques.
MESH SIMPLIFICATION VIA A VOLUME COST MEASUREijcga
We develop a polygonal mesh simplification algorithm based on a novel analysis of the mesh geometry. Particularly, we first propose a characterization of vertices as hyperbolic or non-hyperbolic depending upon their discrete local geometry. Subsequently, the simplification process computes a volume cost for each non-hyperbolic vertex, in analogy with spherical volume, to capture the loss of fidelity if that vertex is decimated. Vertices of least volume cost are then successively deleted and the resulting holes re-triangulated using a method based on a novel heuristic. Preliminary experiments indicate a performance comparable to that of the best known mesh simplification algorithms.
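The greedy decimation loop, repeatedly deleting the vertex of least volume cost, can be sketched with a priority queue; the costs are supplied directly here as stand-ins for the paper's spherical-volume measure, and the re-triangulation step is only indicated by a comment.

```python
import heapq

def simplify(costs, target):
    """Greedy decimation driver: repeatedly delete the vertex of least
    volume cost until only `target` vertices remain.

    costs : dict vertex -> volume cost (stand-in values; the paper
            derives these from the local spherical-volume analogy).
    """
    heap = [(c, v) for v, c in costs.items()]
    heapq.heapify(heap)
    alive = set(costs)
    while len(alive) > target and heap:
        cost, v = heapq.heappop(heap)
        if v in alive:           # skip stale heap entries
            alive.discard(v)
            # a full implementation would re-triangulate the hole here
            # and push updated costs for the neighbouring vertices
    return alive

kept = simplify({"a": 0.9, "b": 0.1, "c": 0.5, "d": 0.05}, target=2)
```

Lazy deletion (skipping stale heap entries rather than rebuilding the heap) is a common way to keep each decimation step logarithmic in the vertex count.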
In our homes and offices, security has always been a vital issue. Controlling a home security system remotely offers huge advantages, such as arming or disarming alarms, video monitoring, and energy management control, apart from safeguarding the home from intruders. Consider the oldest simple method of security, the mechanical lock that has a key as the authentication element, then its upgrade to a universal type, and now unique codes for the lock. Recent advancement in communication systems has brought tremendous application of communication gadgets into various areas of life. This work is a real-time smart doorbell notification system for home security, as opposed to traditional security methods. It is composed of a doorbell interfaced with a GSM module: pressing the doorbell triggers the GSM module to send an SMS to the house owner, who responds to the guest by pressing a button to open the door; otherwise, a message is displayed to the guest for appropriate action. A keypad is also provided so that an authorized person can enter a password for door unlocking; if multiple wrong password attempts are made, a message warning of a burglary attempt is sent to the house owner for prompt action. The main benefit of this system is the unique incorporation of the password and messaging systems, which denies access to any unauthorized person and keeps the owner aware.
Augmented reality, the new-age technology, has widespread applications in every field imaginable. This technology has proven to be an inflection point in numerous verticals, improving lives and performance. In this paper, we explore the various possible applications of Augmented Reality (AR) in the field of Medicine. The objective of using AR in medicine, or generally in any field, is that AR helps motivate the user, makes sessions interactive, and assists in faster learning. In this paper, we discuss the applicability of AR in the field of medical diagnosis. Augmented reality technology reinforces remote collaboration, allowing doctors to diagnose patients from a different locality. Additionally, we believe that a much more pronounced effect can be achieved by bringing together the cutting-edge technology of AR and the lifesaving field of medical sciences. AR is a mechanism that could be applied in the learning process too; similarly, virtual reality could be used in fields where more practical experience is needed, such as driving, sports, and neonatal care training.
Image fusion is a sub-field of image processing in which more than one image is fused to create an image where all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene: multi-sensor images of the same scene are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects in the scene which are closer to the camera are in focus and the farther objects are blurred; conversely, when the farther objects are focused, the closer objects are blurred. To achieve an image where all the objects are in focus, image fusion is performed either in the spatial domain or in a transformed domain. In recent times, the applications of image processing have grown immensely. Usually, due to the limited depth of field of optical lenses, especially those with greater focal length, it becomes impossible to obtain an image where all the objects are in focus. Fusion thus plays an important role in other image processing tasks such as image segmentation, edge detection, stereo matching and image enhancement. Hence, a novel feature-level multi-focus image fusion technique is proposed, and the results of extensive experimentation performed to highlight its efficiency and utility are presented. The proposed work further explores a comparison between fuzzy-based image fusion and a neuro-fuzzy fusion technique, along with quality evaluation indices.
Graphs have become the dominant life-form of many tasks, as they provide a structure to represent many tasks and the corresponding relations. A powerful role of networks/graphs is to bridge local features that exist in vertices as they blossom into patterns that help explain how nodal relations and their edges impact a complex effect that ripples through a graph. User clusters are formed as a result of interactions between entities. Many users can hardly categorize their contacts into groups such as "family", "friends", "colleagues", etc. Thus, the need to analyze a user's social graph via implicit clusters enables dynamism in contact management. This study seeks to implement this dynamism via a comparative study of a deep neural network and a friend-suggest algorithm. We analyze a user's implicit social graph and seek to automatically create custom contact groups using metrics that classify such contacts based on the user's affinity to them. Experimental results demonstrate the importance of both the implicit group relationships and the interaction-based affinity in suggesting friends.
In this paper the Gryllidae Optimization Algorithm (GOA) is applied to solve the optimal reactive power problem. The proposed GOA approach is based on the chirping characteristics of Gryllidae. Commonly, male Gryllidae chirp, though some female Gryllidae do as well. Male Gryllidae draw the females with the sound they produce; moreover, they warn the other Gryllidae against dangers with this sound. The hearing organs of Gryllidae are housed in an expansion of their forelegs, through which they attend to the produced fluttering sounds. The proposed Gryllidae Optimization Algorithm (GOA) has been tested on the standard IEEE 14 and 30 bus test systems, and simulation results show that the projected algorithm reduces real power loss considerably.
In the wake of the sudden replacement of wood and kerosene by gas cookers for several purposes in Nigeria, gas leakage has caused severe damage in our homes, laboratories and elsewhere. The installation of gas leakage detection devices was globally inspired to eliminate accidents related to gas leakage. We present an alternative approach to developing a device that can automatically detect and control gas leakages and also monitor temperature. The system detects the leakage of LPG (Liquefied Petroleum Gas) using a gas sensor and then triggers the control system response, which employs a ventilator system, a mobile phone alert and an alarm when the LPG concentration in the air exceeds a certain level. The performance of two gas sensors (MQ5 and MQ6) was tested for a guided decision. Also, when the temperature of the environment poses a danger, an LED indicator, a buzzer and an LCD (16x2) display are used to indicate the temperature and gas leakage status in degrees Celsius and PPM respectively. Attention was given to the response time of the control system, and it was ascertained that this system significantly increases the chances and efficiency of eliminating gas-leakage-related accidents.
The feature selection problem is one of the most important problems in the text and data mining domain. This paper presents a comparative study of feature selection methods for Arabic text classification. Five feature selection methods were selected: ICHI square, CHI square, Information Gain, Mutual Information and Wrapper. They were tested with five classification algorithms: Bayes Net, Naive Bayes, Random Forest, Decision Tree and Artificial Neural Networks. An Arabic data collection consisting of 9055 documents was used, and the methods were compared by four criteria: Precision, Recall, F-measure and time to build the model. The results showed that the improved ICHI feature selection achieved almost all the best results in comparison with the other methods.
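The plain CHI-square score used as a baseline above can be computed directly from a term's 2x2 term/class contingency table (the improved ICHI variant modifies this formula); the document counts below are invented.

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square score of a term for one class from the 2x2 table:
    n11 = docs in class containing term, n10 = docs outside class with term,
    n01 = docs in class without term,   n00 = the remainder."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0

# A term concentrated in the target class scores far higher than one
# spread evenly across classes (invented counts).
informative = chi_square(40, 5, 10, 45)
uninformative = chi_square(25, 25, 25, 25)
```

Terms are then ranked by their score per class, and only the top-scoring ones are kept as features for the classifier.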
In this paper the Gentoo Penguin Algorithm (GPA) is proposed to solve the optimal reactive power problem. In the preliminary population, Gentoo penguins possess heat radiation and attract each other through an absorption coefficient. Gentoo penguins move towards other penguins that possess lower cost (higher heat concentration) of absorption, where cost is defined by the heat concentration and the distance. The attraction value of a penguin is calculated from the amount of heat exchanged between two Gentoo penguins, and the heat radiation is assumed to be linear: less heat is received over a longer distance, and more heat is received over a shorter distance. The Gentoo Penguin Algorithm has been tested on the standard IEEE 57-bus test system, and simulation results show that the proposed algorithm reduces the real power loss considerably.
Academic insight on application (IAES IJEECS)
This research has thrown up many questions in need of further investigation. An expressive quantitative-qualitative design was used, based on a common investigation form. A dialogue item was also applied to discover whether the contributors asserted that the media-based approach supplements their learning in academic English writing classes. Data on academics' insights toward using Skype as a supporting tool for delivering lessons, based on chosen variables (occupation, year of education, and experience with Skype), revealed no statistically significant differences in the use of Skype units due to medical academics' major knowledge, but statistically significant differences in using Skype units due to the experience-with-Skype variable, in favor of academics with no prior Skype practice. Skype as an instructional medium is a positive means of delivering academic medical writing content and assisting education. Academics who do not have enough time to participate in classes feel comfortable using the Skype-based approach in scientific writing, and those who took part in the course attributed their approval of this medium to learning innovative academic medical writing.
Cloud computing has a sweeping impact on human productivity. Today it is used for computing, storage, predictions and intelligent decision making, among others. Intelligent decision-making using machine learning has pushed cloud services to be even faster, more robust and more accurate. Security remains one of the major concerns affecting cloud computing growth, and there exist various research challenges in cloud computing adoption such as the lack of well-managed service level agreements (SLAs), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. A tremendous amount of work still needs to be done to explore the security challenges arising from the widespread usage of cloud deployment using containers. We also discuss the impact of cloud computing and cloud standards. Hence, in this research paper, a detailed survey of cloud computing concepts, architectural principles, key services, and implementation, design and deployment challenges is presented, and important future research directions in the era of machine learning and data science are identified.
A notary is an official authorized to make an authentic deed regarding all deeds, agreements and stipulations required by a general rule. Activities carried out at a notary office, such as recording client data and file data, still use traditional systems that tend to be manual. The resulting problem is inefficiency in data processing and in providing information to clients. Clients have difficulty getting information on the progress of documents being handled at the notary's office and must travel to the office repeatedly to check on the progress of their document files. The purpose of this study is to make it easier for clients to obtain information about work in progress, and for employees to process incoming documents, by implementing an administrative system. The system was developed with the waterfall development method and uses Multi-Channel Access Technology integrated into the website, with Telegram and SMS Gateway, to simplify the exchange of information with clients. Clients come to the office only when a notification from the system, via Telegram or SMS, informs them that they must appear in person, saving time and avoiding excessive transportation costs. Alpha testing showed that all system functions work properly, and beta testing, conducted by distributing a feasibility questionnaire to end users, showed that 96% of users agree the system is feasible to implement.
In this work the Tundra Wolf Algorithm (TWA) is proposed to solve the optimal reactive power problem. In the proposed TWA, in order to prevent the searching agents from being trapped in local optima, the convergence towards the global optimum is divided according to two different conditions. In the proposed TWA the omega tundra wolf is taken as a searching agent, instead of being obliged to follow the first three best candidates. Increasing the number of searching agents improves the exploration capability of the tundra wolves over an extensive range. The proposed TWA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the proposed algorithm reduces the real power loss effectively.
In this work the Predestination of Particles Wavering Search (PPS) algorithm is applied to solve the optimal reactive power problem. The PPS algorithm is modeled on the motion of particles in the exploration space. Normally the movement of a particle is based on gradient and swarming motion. Particles are permitted to progress with steady velocity in gradient-based movement, but when the outcome is poorer than the previous one, the particle velocity is immediately reversed with half of its magnitude, which helps to reach a local optimal solution; this is expressed as wavering movement. The proposed PPS algorithm is evaluated on the standard IEEE 14-, 30-, 57-, 118- and 300-bus systems, and simulation results show that PPS reduces the power loss efficiently.
In this paper, the Mine Blast Algorithm (MBA) is hybridized with the Harmony Search (HS) algorithm for solving the optimal reactive power dispatch problem. MBA is based on the explosion of landmines and HS is based on the creative process of musicians; both are hybridized to solve the problem. In MBA, the initial distance of the shrapnel pieces is reduced gradually to allow the mine bombs to search the probable global minimum location, in order to amplify the global exploration capability. Harmony search imitates the music creation process, in which musicians adjust their instruments' pitch in search of a best state of harmony. Hybridizing the Mine Blast Algorithm with the Harmony Search algorithm (MH) improves the search in the solution space: the mine blast algorithm improves exploration and the harmony search algorithm augments exploitation. The proposed algorithm starts with exploration and gradually moves to the exploitation phase. The proposed hybridized MH algorithm has been tested on the standard IEEE 14- and 300-bus test systems, where it reduces the real power loss considerably. MH was then tested on the IEEE 30-bus system (considering the voltage stability index), attaining real power loss minimization, voltage deviation minimization, and voltage stability index enhancement.
Artificial Neural Networks have proved their efficiency in a large number of research domains. In this paper, we apply Artificial Neural Networks to Arabic text for language modeling, text generation, and missing-text prediction. On one hand, we adapt Recurrent Neural Network architectures to model the Arabic language in order to generate correct Arabic sequences. On the other hand, Convolutional Neural Networks are parameterized, based on some specific features of Arabic, to predict missing text in Arabic documents. We demonstrate the power of our adapted models in generating and predicting correct Arabic text compared to the standard model. The models were trained and tested on well-known free Arabic datasets, and the results are promising, with sufficient accuracy.
In present-day communications, speech signals get contaminated by various sorts of noise that degrade the speech quality and adversely impact speech recognition performance. To overcome these issues, a novel approach for speech enhancement using modified Wiener filtering is developed, and power spectrum computation is applied to the degraded signal to obtain the noise characteristics from the noisy spectrum. In the next phase, an MMSE technique is applied in which the Gaussian distribution of each signal, i.e. the original and the noisy signal, is analyzed. The Gaussian distribution provides spectrum estimation and spectral coefficient parameters which can be used for probabilistic model formulation. Moreover, a-priori SNR computation is also incorporated for coefficient updating and noise presence estimation, which operates similarly to conventional VAD. However, the conventional VAD scheme is based on a hard threshold that cannot deliver satisfactory performance, so a soft-decision-based threshold is developed to improve the performance of speech enhancement. An extensive simulation study is carried out in MATLAB on the NOIZEUS speech database, and a comparative study shows that the proposed approach outperforms the existing technique.
Previous research has highlighted that the neuro-signals of Alzheimer's disease patients are less complex and have lower synchronization than those of healthy, normal subjects. The changes in the EEG signals of Alzheimer's subjects start at an early stage but are not clinically observed and detected. To detect these abnormalities, three synchrony measures and wavelet-based features have been computed and studied on an experimental database. After computing these synchrony measures and wavelet features, it is observed that phase synchrony and coherence-based features are able to distinguish between Alzheimer's disease patients and healthy subjects. A Support Vector Machine classifier is used for classification, giving 94% accuracy on the experimental database used. Combining these synchrony features with other relevant features can yield a reliable system for diagnosing Alzheimer's disease.
Attenuation correction for PET/MR hybrid imaging systems and dose planning for MR-based radiation therapy remain challenging because of insufficient high-energy photon attenuation information. We present a new method that uses learned nonlinear local descriptors and feature matching to predict pseudo-CT images from T1w and T2w MRI data. The nonlinear local descriptors are obtained by projecting the linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched within a constrained spatial range of the MR images in the training dataset. The pseudo-CT patches are then estimated by k-nearest-neighbor regression. The proposed method for pseudo-CT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects.
The cognitive radio paradigm aims to alleviate the scarcity of spectral resources for wireless communication through intelligent sensing and quick resource allocation techniques. Secondary users (SUs) actively obtain spectrum access opportunities by supporting primary users (PUs) in cognitive radio networks (CRNs). At present, spectrum access is provided through the cooperative-communication-based link-level frame-based cooperation (LLC) principle, in which SUs independently act as relays for PUs to achieve spectrum access opportunities. Unfortunately, this LLC approach cannot fully exploit spectrum access opportunities to enhance the throughput of CRNs and fails to motivate PUs to join the spectrum sharing process. Therefore, to overcome this limitation, the network-level cooperation (NLC) principle was introduced, in which SUs collaborate with PUs session by session, instead of frame by frame, for spectrum access opportunities. The NLC approach addresses the challenges faced by the LLC approach. In this paper we survey some models that have been proposed to tackle the problem of LLC. We show the relevant aspects of each model, in order to characterize the parameters that should be taken into account to achieve a spectrum access opportunity.
In this paper, the author provides insights and lessons that can be learned from colleagues at American universities about their online education experiences. The literature review and previous studies of online education gains are explored and summarized. Emerging trends in online education are discussed in detail, and strategies to implement these trends are explained. The author provides several tools and strategies that enable universities to ensure the quality of online education. At the end of the paper, the researcher provides examples from Arab universities that have successfully implemented online education and expanded their impact on society. This research provides a strategy and a model that universities in the Middle East can use as a roadmap to implement online education in their regions.
ISSN: 2252-8776
IJ-ICT Vol. 8, No. 1, April 2019 : 1–12
the polygonal approximation is mostly not sufficient for biological shape description. Mitra A et al [11] found that the algorithm based on mathematical morphology can localize the accurate skeleton, but may not guarantee the exact connectivity of the skeleton. Yan T Q et al [12] found that the algorithm based on distance transform computes the skeleton by detecting ridges on the distance transform surface, which ensures the accurate localization of skeleton points. However, the algorithm based on distance transform lacks connectivity and completeness, which means the extracted branches may be disconnected and may fail to represent all the significant visual parts.
How to compare the skeleton similarity between the query sketch and the 2D views of a 3D model is crucial. There are three types of algorithms for comparing skeleton similarity: the tree matching algorithm [13], the path similarity algorithm [14] and the graph representation algorithm [15]. Dragan F F et al [16] found that the tree descriptor represents the topological features of the skeleton, but the maximal isomorphic subtrees are obtained by searching for the longest matching substrings, which leads to higher time complexity. Bai X et al [14] matched skeleton graphs by comparing the geodesic paths among skeleton endpoints; however, their approach is only motivated by visually similar skeleton graphs, without considering the topological graph structure. Youssef R et al [17] found that the graph representation algorithm is widely investigated for matching issues, because the correspondence between skeleton branches and graph edges and nodes is natural and intuitive. However, the solutions in the literature are based on shock graphs [18] or attributed relational graphs [19], which are only used in certain application domains and usually have higher time complexity.
To tackle these problems, this paper proposes a novel sketch-based 3D model retrieval approach which combines the advanced skeleton strength map (ASSM) algorithm with a histogram feature comparison algorithm to create the skeleton and compare the similarity. Our ASSM algorithm originates from the algorithm based on distance transform, which ensures the accurate localization of skeleton points. Furthermore, our ASSM algorithm selects critical points from the skeleton strength map and connects them by Kruskal's algorithm [20], which solves the connectivity and completeness problems of previous algorithms. The histogram feature comparison algorithm originates from the graph representation algorithm, which expresses the skeleton branches naturally and intuitively [21]. Our histogram feature comparison algorithm adopts the radii of the disks at skeleton points and the lengths of skeleton branches to extract the histogram feature. The radii of the disks and the lengths of skeleton branches are invariant under non-rigid transformations, which helps to obtain high-precision retrieval results. Additionally, we compare the similarity between two skeletons using the histogram feature matrix of skeleton endpoints, which are far fewer than the skeleton points, leading to lower computational complexity. To evaluate our approach, we test it on a public standard dataset and compare it with other leading 3D model retrieval approaches. The experiments demonstrate that our approach is significantly better than several leading sketch-based retrieval approaches.
2. RESEARCH METHOD
The framework of our sketch-based 3D model retrieval, based on the advanced skeleton strength map and the histogram feature comparison algorithm, is shown in Figure 1.
Figure 1. The framework of our sketch-based 3D model retrieval (the flowchart runs from the 2D query sketch input and the 2D projection views of the 3D model set, through the advanced skeleton strength map and histogram feature extraction, to the 2D sketch feature set and 2D projection view feature set, then to skeleton distance computation and the retrieval results)
A novel sketch-based 3D model retrieval approach based on skeleton (Jing Zhang)
2.1. Pre-processing
Since each 3D model in the database has an arbitrary position, orientation and scale, it is necessary to normalize each model before projecting it into 2D views. After the 3D models in the database have been normalized, we compare the query sketch with 60 projection views of each 3D model using light field descriptors [22]. In addition, we utilize a 2D sketch-3D model alignment algorithm to select candidate views from the 2D projection views for efficient similarity comparison.
Our light fields are distributed uniformly and positioned on the vertices of a regular dodecahedron. A regular dodecahedron has 20 vertices, each connected by 3 edges, which results in 60 different rotations (20 vertices × 3 edges) for our projections, so we need to rotate the camera system 60 times. When the cameras switch to different vertices [23], we measure the similarity between the 3D model and the sketch. A typical example illustrates our approach: as shown in Figure 2, the left shows a pig 3D model, and the right displays the 60 projection views rendered from the vertices of a dodecahedron for that model.
Figure 2. The light field descriptors of 3D model projection
To enhance retrieval accuracy and performance, we adopt the 2D sketch-3D model alignment algorithm [24] to choose the candidate views in sketch-based 3D model retrieval. We choose candidate views [25] by keeping a certain percentage T of the views with the top similarities between the sketch and all 2D projection views; e.g. T = 20% means the number of candidate views is 60 × 20% = 12.
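The candidate-view selection described above can be sketched as follows. This is a minimal illustration: the function name and the random similarity scores are assumptions of this sketch, not part of the paper's implementation.

```python
import numpy as np

def select_candidate_views(similarities, T=0.20):
    """Keep the top T fraction of the projection views by similarity.

    `similarities` is a length-60 array of sketch-vs-view similarity
    scores (higher = more similar).
    """
    n_keep = int(round(len(similarities) * T))  # e.g. 60 * 20% = 12
    order = np.argsort(similarities)[::-1]      # indices, descending similarity
    return sorted(order[:n_keep].tolist())      # indices of candidate views

# Example: 60 similarity scores, keep T = 20% -> 12 candidate views
scores = np.random.rand(60)
candidates = select_candidate_views(scores, T=0.20)
```

The selected indices identify which of the 60 projection views proceed to the skeleton-based comparison.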
2.2. Advanced skeleton strength map (ASSM)
The algorithm of skeleton extraction by ASSM is described as follows:
Step 1. Extraction of the external boundary. For each query sketch and each 2D view of a 3D model, the external boundary is extracted first; it provides much of the image's visual information for the SSM value.
Step 2. Computation of the SSM value [26]. We perform a distance transform on the boundary and then compute the SSM value by isotropic diffusion on the gradient vector field.
Step 3. Refinement of the SSM value. We adopt the non-maximal suppression algorithm [27] to refine the SSM value.
Step 4. Selection of critical points. We select the critical points from the refined SSM value.
Step 5. Skeleton trace. The final skeleton is obtained by connecting the critical points. We propose a method for connecting the critical points which uses Kruskal's algorithm to decide the order of the connecting path.
2.2.1. Computation of SSM value
We define a function f(r) to get the accurate skeleton [26]:

f(r) = 1 − G_σ(r) ∗ dt(r)   (1)

where G_σ(r) represents a Gaussian kernel function, dt(r) represents the distance transform, i.e. the distance from an interior point r to the nearest boundary point, σ is the standard deviation of the kernel, and ∗ is the convolution operator.
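Equation (1) combines a distance transform with Gaussian smoothing. A minimal sketch, assuming a binary shape mask as input and an illustrative value of σ (the paper does not specify one):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def compute_f(mask, sigma=2.0):
    """f(r) = 1 - G_sigma(r) * dt(r): the Gaussian convolution is
    realized by Gaussian filtering of the interior distance transform.
    `mask` is a binary image, True inside the shape."""
    dt = distance_transform_edt(mask)      # distance to nearest boundary point
    smoothed = gaussian_filter(dt, sigma)  # G_sigma * dt
    return 1.0 - smoothed

# Toy example: a filled square; f is smallest deep inside the shape,
# where the smoothed distance transform is largest
mask = np.zeros((32, 32), bool)
mask[8:24, 8:24] = True
f = compute_f(mask)
```

The gradient of this f is the field that is subsequently diffused.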
We compute the gradient of f(r) as follows:

(u₀, v₀) = ∇f = (∂f/∂x, ∂f/∂y)   (2)
The isotropic diffusion of ∇f(r) is performed:

∂u/∂t = ∇²u − (u − f_x)(f_x² + f_y²)
∂v/∂t = ∇²v − (v − f_y)(f_x² + f_y²)   (3)

where u, v are the two components of the diffused gradient vector field, and f_x, f_y are the two components of ∇f(r). Initializing u, v with u₀, v₀ from (2), the partial differential equations (3) can be solved iteratively by a finite-difference technique.
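The finite-difference iteration for the diffusion equations (3) can be sketched as follows. The step size, the diffusion weight `mu` and the iteration count are illustrative assumptions of this sketch, not values from the paper:

```python
import numpy as np

def gvf_diffusion(f, mu=0.2, dt=0.05, iters=200):
    """Iteratively diffuse the gradient of f into a vector field (u, v)
    by explicit finite differences: each step adds a Laplacian smoothing
    term and a data term pulling (u, v) back towards (f_x, f_y) where
    the gradient magnitude is large."""
    fy, fx = np.gradient(f)        # np.gradient returns (d/dy, d/dx)
    u, v = fx.copy(), fy.copy()    # initialize with (u0, v0) from Eq. (2)
    mag2 = fx**2 + fy**2           # f_x^2 + f_y^2
    for _ in range(iters):
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u = u + dt * (mu * lap_u - (u - fx) * mag2)
        v = v + dt * (mu * lap_v - (v - fy) * mag2)
    return u, v

# Toy input: a smooth Gaussian bump (illustrative only)
y, x = np.mgrid[0:32, 0:32]
f = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 50.0)
u, v = gvf_diffusion(f)
```

The explicit scheme is stable here because the diffusion step dt·mu is small; in practice the iteration count would be tuned to the image size.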
The isotropic diffusion makes the vectors of ∇f(r) propagate towards the object centre, and the intersections of these vectors determine the actual locations of the skeleton points.
Then, we consider the initial gradient vector field gvf(r):

gvf(r) = Σ_{r′} (I(r) − I(r′)) · (r − r′)/‖r − r′‖   (4)

where I(r) is the intensity value at r, and r′ ranges over the eight immediate neighbours of r.
The SSM value at each point indicates the probability of that point being a skeleton point: the higher the value at a point, the more likely that point lies on the skeleton. We compute the SSM value by:

SSM(r) = max(0, Σ_{r′∈N(r)} gvf(r′) · (r − r′)/‖r − r′‖)   (5)

where N(r) is the set of the eight immediate neighbours of r.
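One reading of (5) accumulates, at each point, the component of each neighbour's vector that points towards it, clamped at zero. A minimal sketch under that reading, using a toy inward-pointing field (all names and the test field are illustrative assumptions):

```python
import numpy as np

# Offsets of the eight immediate neighbours N(r), as (dy, dx)
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def ssm(u, v):
    """Skeleton strength map from a vector field (u, v): for each point
    r, sum the dot product of each neighbour's vector gvf(r') with the
    unit vector (r - r')/||r - r'||, then clamp negatives to zero."""
    acc = np.zeros_like(u)
    for dy, dx in NEIGHBOURS:
        norm = np.hypot(dy, dx)
        # vector at neighbour r' = r + (dy, dx)
        un = np.roll(np.roll(u, -dy, axis=0), -dx, axis=1)
        vn = np.roll(np.roll(v, -dy, axis=0), -dx, axis=1)
        # dot with (r - r') = (-dx, -dy), normalized
        acc += (un * (-dx) + vn * (-dy)) / norm
    return np.maximum(0.0, acc)

# Toy field whose vectors point towards the image centre: the inward
# flux, and hence the SSM value, is positive in the interior
y, x = np.mgrid[0:21, 0:21].astype(float)
u, v = 10.0 - x, 10.0 - y
s = ssm(u, v)
```

Points where neighbouring vectors converge (vectors flow into the point) receive high SSM values, matching the intuition that skeleton points lie where the diffused gradients meet.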
2.2.2. Refined SSM value
After the SSM value has been computed, we apply the non-maximal suppression algorithm [27] to obtain the refined SSM value. The non-maximal suppression algorithm is described as follows.
Figure 3(a) shows the eight directions of a point P(x, y) within its 3×3 region. In Figure 3(b), a quadrangle is formed by connecting the eight directions of P(x, y). We then draw a straight line along the direction of gvf(P), which intersects the quadrangle at the points (x′, y′) and (x″, y″).
If SSM(x, y) ≥ SSM(x′, y′) and SSM(x, y) ≥ SSM(x″, y″), the point P(x, y) is retained; otherwise it is deleted. After non-maximal suppression has been applied to all points, we obtain the refined SSM value.
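A simplified version of this suppression step can be sketched by quantizing the field direction to the nearest of the eight neighbours, instead of intersecting the quadrangle of Figure 3 (the quantization and the toy ridge below are assumptions of this sketch):

```python
import numpy as np

def refine_ssm(s, u, v):
    """Simplified non-maximal suppression: keep a point only if its SSM
    value is >= the values of its two neighbours along the direction of
    the vector field at that point (direction rounded to the nearest
    neighbour offset)."""
    h, w = s.shape
    out = np.zeros_like(s)
    angle = np.arctan2(v, u)                  # direction of gvf(P)
    for yy in range(1, h - 1):
        for xx in range(1, w - 1):
            a = angle[yy, xx]
            dx = int(np.rint(np.cos(a)))      # quantized direction
            dy = int(np.rint(np.sin(a)))
            if (s[yy, xx] >= s[yy + dy, xx + dx]
                    and s[yy, xx] >= s[yy - dy, xx - dx]):
                out[yy, xx] = s[yy, xx]       # retain P; else delete
    return out

# Toy SSM with a horizontal ridge at row 5, and a field pointing in +y,
# so suppression compares each point with the rows above and below
ssm_val = np.zeros((11, 11))
ssm_val[4], ssm_val[5], ssm_val[6] = 1.0, 2.0, 1.0
u = np.zeros((11, 11))
v = np.ones((11, 11))
refined = refine_ssm(ssm_val, u, v)
```

Only the ridge row survives: off-ridge points are weaker than their neighbour along the field direction and are set to zero.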
Figure 3. (a) The eight directions of a point; (b) the quadrangle formed by connecting the eight directions of the point
IJ-ICT ISSN: 2252-8776
A novel sketch-based 3D model retrieval approach based on skeleton (Jing Zhang)
2.2.3. Selection of critical points
From the refined SSM value, we select as critical points the points with the lowest values of the gradient magnitude $G(r) = \|\nabla dt(r)\|$.
Our critical points correspond to the significant visual parts of the object; therefore, the obtained skeleton contains branches representing all significant visual parts. Notice that in the definition of a critical point we also include the endpoints, because those points usually do not have minimum gradient magnitude; if they were not selected, the skeleton branches could be shortened.
2.2.4. Skeleton trace
We treat the critical points' Euclidean distance matrix as an undirected weighted graph. For each critical point, only the three smallest Euclidean distances to other points are considered in Kruskal's algorithm [20]. Kruskal's algorithm adds edges in ascending order of weight, forming a tree that includes every vertex and whose total edge weight is minimized. The steps of connecting the critical points with Kruskal's algorithm are as follows.
Let $G = (V, E)$ be the graph formed by the critical points' Euclidean distances, where $V$ is the set of critical points of $G$ and $E$ is the set of edges weighted by the Euclidean distances between critical points. Let $N$ be the number of critical points, and sort all edges in ascending order of weight.
Step 1. Define $N$ independent vertex sets: since the $N$ critical points still need to be connected, each critical point starts in its own set.
Step 2. Pick the next edge in ascending order of weight. If its two vertices lie in different vertex sets, add this edge to the minimum spanning tree's edge set and merge the two vertex sets into one; otherwise, consider the vertices of the next edge.
Step 3. Repeat Step 2 until all vertices are in the same vertex set.
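The three steps above are Kruskal's algorithm over the distance graph; a compact sketch with a union-find structure (for brevity this version considers all pairwise edges rather than only the three nearest neighbours per point):

```python
import math

def kruskal_mst(points):
    """Connect critical points into a minimum spanning tree (Steps 1-3).

    points: list of (x, y) critical-point coordinates.
    Returns the list of MST edges as index pairs (i, j).
    """
    n = len(points)
    # Step 1: each critical point starts in its own vertex set
    parent = list(range(n))

    def find(i):                        # find the set representative
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # all edges weighted by Euclidean distance, sorted ascending
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n))

    mst = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                    # Step 2: endpoints in different sets
            parent[ri] = rj             # merge the two vertex sets
            mst.append((i, j))
        if len(mst) == n - 1:           # Step 3: all vertices connected
            break
    return mst
```

Restricting each point to its three nearest neighbours, as in the paper, only changes which edges enter the sorted list.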
Figure 4(a) shows an example with 7 critical points used to illustrate our approach with Kruskal's algorithm. In Figure 4(b), the Euclidean distances between the 7 critical points form an undirected complete graph. Figure 4(c) shows the minimum spanning tree of the 7 critical points created by Kruskal's algorithm.
Figure 4. (a) An example including 7 critical points, (b) The Euclidean distances between the 7 critical points, (c) The minimum spanning tree of Kruskal's algorithm
2.3. Histogram Feature Comparison
Our histogram feature comparison algorithm consists of a histogram feature extraction algorithm and a skeleton distance computation algorithm. The histogram feature extraction algorithm uses the radii of the disks at skeleton points and the lengths of skeleton branches to extract the histogram feature. The radii of the disks and the lengths of skeleton branches are invariant under non-rigid transformations [28], which helps to obtain high-precision retrieval results. The skeleton distance computation algorithm then compares the similarity between two skeletons using the histogram feature matrices of the skeleton endpoints; since endpoints are far fewer than skeleton points, this leads to a lower computational complexity.
2.3.1. Histogram Feature Extraction
We define a 2D dataset $S = \{(a_i, b_i) \mid i = 1, 2, \ldots, T\}$, and let $H(S, X)$ represent the 2D histogram matrix of $S$, where $X = \{(x_i, y_j) \mid i = 0, 1, \ldots, m;\ j = 0, 1, \ldots, n\}$ is the histogram parameter, with constraints $x_{i-1} < x_i$, $y_{j-1} < y_j$, $a_i \in [x_0, x_m]$, $b_i \in [y_0, y_n]$. The 2D histogram matrix $H(S, X)$ is defined as follows:
$$H(S, X) = \begin{pmatrix} h_{11} & h_{12} & \cdots & h_{1n} \\ h_{21} & h_{22} & \cdots & h_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ h_{m1} & h_{m2} & \cdots & h_{mn} \end{pmatrix} \tag{6}$$
where $h_{ij}$ represents the number of elements of the dataset $S$ falling in the rectangular area $[x_{i-1}, x_i] \times [y_{j-1}, y_j]$; the $x_i$ and $y_j$ usually form an arithmetic or geometric sequence.
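The counting rule for $h_{ij}$ matches what `numpy.histogram2d` computes when given the bin edges $x_0, \ldots, x_m$ and $y_0, \ldots, y_n$; a minimal check (the data and edges are illustrative):

```python
import numpy as np

# a small 2D dataset S = {(a_i, b_i)} and bin edges X
S = np.array([(0.5, 0.5), (1.5, 0.5), (1.5, 2.5), (2.5, 2.5)])
x_edges = [0, 1, 2, 3]          # x_0 .. x_m (arithmetic sequence)
y_edges = [0, 1, 2, 3]          # y_0 .. y_n

# h_ij = number of elements of S in [x_{i-1}, x_i) x [y_{j-1}, y_j)
H, _, _ = np.histogram2d(S[:, 0], S[:, 1], bins=[x_edges, y_edges])

assert H[0, 0] == 1             # (0.5, 0.5) falls in the first cell
assert H[1, 0] == 1             # (1.5, 0.5)
assert H[1, 2] == 1             # (1.5, 2.5)
assert H.sum() == len(S)        # every element is counted exactly once
```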
As shown in Figure 5, the point $Q$ denotes the skeleton centre, i.e. the skeleton point whose maximal disk has the largest radius, and $v_i\ (i = 1, 2, \ldots, T)$ is the set of skeleton points. We then use the feature dataset $P_Q$ in place of the 2D dataset $S$ above:

$$P_Q = \{\, p_Q(v_i) \mid p_Q(v_i) = \big(ske(Q, v_i),\ R(v_i)\big) \,\}, \quad i = 1, 2, \ldots, T \tag{7}$$

where $ske(Q, v_i)$ is the shortest path from the skeleton centre $Q$ to the skeleton point $v_i$, and $R(v_i)$ represents the radius of the disk at skeleton point $v_i$. $v_i'$ is the point on the contour corresponding to $v_i$, and $p_Q(v_i')$ represents the relative distance from the skeleton centre $Q$ to the contour point $v_i'$.
Figure 5. The radius of the maximal disk and contour point
For the 2D histogram feature matrix $H(P_Q, X)$, $X$ is the corresponding histogram parameter:

$$X = \{\, (x_i, y_j) \mid x_i = l^i,\ y_j = r j \,\}, \quad i = 0, 1, \ldots, m;\ j = 0, 1, \ldots, n \tag{8}$$

The parameters $l, r, m, n$ are determined under the constraints of the radii of the disks and the lengths of the skeleton branches.
As shown in Figure 6, we take the endpoint $A$ as the base reference point; the definition of $P_A$ is similar to that of $P_Q$, and $H(P_A, X)$ represents its histogram feature matrix.
Figure 6. The shortest path of skeleton endpoints
Figure 7 shows the 2D histogram of skeleton endpoint $A$, generated from the lengths of all curves distributed in each rectangular area. The abscissa $ske(A, v_i)$ represents the shortest path from the endpoint $A$ to the skeleton point $v_i$, and the ordinate $R(v_i)$ represents the radius of the disk at skeleton point $v_i$.
Figure 7. The 2D histogram of skeleton endpoint A
2.3.2. Skeleton Distance Computation
We define two skeletons $G = (v_1, v_2, \ldots, v_m)$ and $G' = (v_1', v_2', \ldots, v_n')$, where $v_i$ and $v_j'$ represent endpoints of the two skeletons. With the histogram feature matrices $H_i(k)$ and $H_j(k)$ of the skeleton endpoints $v_i$ and $v_j'$, we obtain the distance between the 2D histograms of two skeleton endpoints:

$$d(v_i, v_j') = \sum_{k=1}^{mn} w_k \left| H_i(k) - H_j(k) \right| \tag{9}$$
where

$$w_k = \frac{\cos\!\left(\dfrac{\pi \left(H_i(k) + H_j(k)\right)}{2Z}\right)}{\sum_{k=1}^{mn} \cos\!\left(\dfrac{\pi \left(H_i(k) + H_j(k)\right)}{2Z}\right)} \tag{10}$$

$$Z = 2 \max_k \big( H_i(k) + H_j(k) \big) \tag{11}$$

satisfying $\dfrac{H_i(k) + H_j(k)}{Z} \le 1$.
When comparing the similarity between two skeletons, we choose one skeleton as the benchmark and compute the distances to all endpoints of the other skeleton:

$$D(G, G') = \begin{pmatrix} d(v_1, v_1') & d(v_1, v_2') & \cdots & d(v_1, v_n') \\ d(v_2, v_1') & d(v_2, v_2') & \cdots & d(v_2, v_n') \\ \vdots & \vdots & \ddots & \vdots \\ d(v_m, v_1') & d(v_m, v_2') & \cdots & d(v_m, v_n') \end{pmatrix} \tag{12}$$
From the endpoint distance matrix $D(G, G')$, we select the minimum value of each row, $\min_j d(v_1, v_j'),\ \min_j d(v_2, v_j'),\ \ldots,\ \min_j d(v_m, v_j')$ with $j = 1, 2, \ldots, n$, to compute the distance between the two skeletons. The distance between the two skeletons is then expressed as:

$$\mathrm{dis}(G, G') = \sum_{i=1}^{m} \min_j d(v_i, v_j'), \quad j = 1, 2, \ldots, n \tag{13}$$
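Equations (9)-(13) combine into a short routine; here `Hi`, `Hj` are flattened endpoint histogram matrices, and the cosine weight follows our reading of the reconstructed (10):

```python
import numpy as np

def endpoint_distance(Hi, Hj):
    """Weighted histogram distance d(v_i, v_j') of (9)-(11)."""
    Z = 2.0 * np.max(Hi + Hj)                       # (11)
    w = np.cos(np.pi * (Hi + Hj) / (2.0 * Z))       # numerator of (10)
    w = w / w.sum()                                 # normalized weights
    return np.sum(w * np.abs(Hi - Hj))              # (9)

def skeleton_distance(G_hists, Gp_hists):
    """dis(G, G') of (12)-(13): for each endpoint of the benchmark
    skeleton G, take the minimum distance to any endpoint of G',
    then sum over the endpoints of G."""
    D = np.array([[endpoint_distance(hi, hj)
                   for hj in Gp_hists] for hi in G_hists])   # (12)
    return D.min(axis=1).sum()                               # (13)
```

Identical endpoint histograms yield a zero distance, so a skeleton compared with itself scores zero, as expected of a dissimilarity measure.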
3. RESULTS AND ANALYSIS
We implemented our sketch-based 3D model retrieval method in C++ under Windows. As shown in Figure 8, the left side of the interface is a canvas for sketching the model; the user can erase and modify the sketch if not satisfied with it. The right side is the display page for the retrieved 3D models, each with a relevant JPEG image; the user can click the blue button to download the corresponding 3D model.
The system consists of an off-line feature extraction process and an on-line retrieval process. In the off-line process, the features are extracted on a PC with a Pentium III 800 MHz CPU and a GeForce2 MX video card. In the on-line process, the retrieval system runs on a PC with an Intel Xeon E5520 CPU @ 2.27 GHz and 12.0 GB of RAM. Our sketch-based 3D model retrieval benchmark is built on the well-known National Taiwan University (NTU) database [29] and the latest collection of human sketches.
Figure 8. Our sketch-based 3D model retrieval system
3.1. Skeleton Extraction by ASSM
The proposed ASSM method is able to compute skeleton branches in all significant visual parts. Figure 9 shows the leaf skeleton extraction process by the ASSM approach. Figure 9(a) is the original leaf image. Figure 9(b) shows the SSM value computed after isotropic diffusion on the gradient vector field. In Figure 9(c), the SSM value is refined by the non-maximal suppression algorithm. Figure 9(d) shows the critical point set extracted from the refined SSM value, and Figure 9(e) shows the final skeleton connected by Kruskal's algorithm.
Figure 9. Illustration of the leaf skeleton extraction process by the ASSM approach: (a) the original leaf image, (b) the SSM value, (c) the SSM value refined by the non-maximal suppression algorithm, (d) the critical point set extracted from (c), (e) the final skeleton connected by Kruskal's algorithm
3.2. Skeleton Matching Based on Histogram Feature
To show the effectiveness of the proposed histogram feature comparison algorithm, we tested three skeleton matching experiments between sketches and 2D views of 3D models. Figure 10 shows the correspondence between a cat and another, deformed cat; it demonstrates that our histogram feature comparison algorithm can match the skeletons of deformed objects. Figure 11 shows the correspondence between persons with different numbers of arms; it illustrates that our algorithm works correctly even when object parts are significantly altered. Figure 12 shows the correspondence between an elephant and another elephant with a stick; it demonstrates that our matching process performs strongly when the compared objects have redundant parts.
Figure 10. The correspondence between a cat and another deformed cat
Figure 11. The correspondence between persons with different numbers of arms
Figure 12. The correspondence between an elephant and another elephant with a stick
3.3. Comparison with other approaches
We use the classical Precision and Recall metrics, averaged over the set of processed queries [30], to measure retrieval effectiveness. Recall measures the ability of the system to retrieve all relevant models, while Precision measures the ability of the system to retrieve only relevant models. They are defined as:

$$Recall = \frac{\text{relevant correctly retrieved}}{\text{all relevant}} \tag{14}$$

$$Precision = \frac{\text{relevant correctly retrieved}}{\text{all retrieved}} \tag{15}$$
To have a comprehensive evaluation of our algorithm, we further provide results for other performance metrics, including Nearest Neighbour (NN), First Tier (FT), Second Tier (ST), E-measure (E), Discounted Cumulative Gain (DCG) and Average Precision (AP). The meanings of these performance metrics are as follows [30]. NN measures the percentage of closest matches that are relevant models. FT represents what percentage of a class has been retrieved among the top $(C - 1)$ list, where $C$ is the cardinality of the relevant class of the query sketch. It is defined as:

$$FT = \frac{\text{relevant correctly retrieved}}{\text{top } (C-1) \text{ retrieved}} \tag{16}$$
ST represents what percentage of a class has been retrieved among the top $2(C-1)$ list, where $C$ has the same meaning as in the FT metric:

$$ST = \frac{\text{relevant correctly retrieved}}{\text{top } 2(C-1) \text{ retrieved}} \tag{17}$$
E is used to measure the performance of retrieval results of fixed length, e.g. the first 32 models. It combines both the Precision $P$ and Recall $R$:

$$E = \frac{2}{\dfrac{1}{P} + \dfrac{1}{R}} \tag{18}$$
DCG is defined as the summed, weighted value related to the positions of the relevant models:

$$DCG = w_1 + \sum_{k=2}^{P} \frac{w_k}{\log_2 k} \tag{19}$$

where $w_k$ denotes the weighted value of each retrieval result, $k$ denotes the index of the retrieval result, and $P$ is the number of retrieval results.
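For a single query, the metrics (16)-(19) can be sketched as follows, assuming binary relevance and $w_k = 1$ for a relevant result (the denominators follow our reading of the reconstructed formulas):

```python
import math

def retrieval_metrics(ranked, relevant):
    """Compute FT, ST, E and DCG for one query.

    ranked: retrieved model ids in rank order.
    relevant: set of ids in the query's relevant class (cardinality C > 1).
    """
    C = len(relevant)
    hits = [1 if m in relevant else 0 for m in ranked]

    ft = sum(hits[:C - 1]) / (C - 1)                # (16) top C-1
    st = sum(hits[:2 * (C - 1)]) / (2 * (C - 1))    # (17) top 2(C-1)

    # (18) E-measure on a fixed-length prefix (here the full list)
    P = sum(hits) / len(ranked)
    R = sum(hits) / C
    E = 2 / (1 / P + 1 / R) if P > 0 and R > 0 else 0.0

    # (19) DCG with w_k = 1 for relevant results
    dcg = hits[0] + sum(h / math.log2(k)
                        for k, h in enumerate(hits[1:], start=2))
    return ft, st, E, dcg
```

In practice the query model itself is usually excluded from the ranked list before evaluation.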
AP is computed as the total area under the Precision-Recall curve; a higher Precision-Recall curve yields a better AP value. We compare our approach with four other leading sketch-based 3D model retrieval algorithms that utilize skeleton characteristics as features to describe object shape. The work of Sundar H et al. [31] is among the most representative in the field of skeleton-based shape matching and retrieval; they utilized a thinning algorithm and a clustering algorithm, which help lessen the effect of many small perturbations on the surface and reduce the number of nodes necessary for skeletal graph construction. Lei H et al. [32] used a thinning algorithm on silhouette images to extract the corresponding skeletons and applied a skeleton pruning method. Lin S et al. [33] obtained the skeleton of a 3D model through a skeleton extraction algorithm based on mesh simplification and mesh contraction. Sirin Y et al. [34] started by drawing circles of increasing radius around skeletons, so that each skeleton point corresponds to the centre of a maximally inscribed circle. We illustrate 3D models of a car, lamp, plane and chair in Figure 13, and the average comparison result is shown in Table 1.
Figure 13. Examples of sketch-based retrieval results
Table 1. Metrics for the Performance Comparison between Our Approach and Other Approaches
Approaches NN FT ST E DCG AP
Our Approach 0.387 0.316 0.383 0.374 0.589 0.394
Lei H’s Approach 0.365 0.268 0.331 0.352 0.541 0.335
Sundar H’s Approach 0.347 0.251 0.327 0.321 0.533 0.326
Lin S’s Approach 0.285 0.236 0.294 0.268 0.491 0.278
Sirin Y’s Approach 0.211 0.208 0.243 0.202 0.466 0.204
It is obvious that our approach outperforms the other leading sketch-based 3D model retrieval approaches. The approaches of Sundar H et al. [31] and Lei H et al. [32] utilize a thinning algorithm, which is quite sensitive to noise and incurs a large amount of computation. Lin S et al. [33] used the front view of the skeleton to represent 3D models; in this way a 3D model can be represented in 2D form, which is not sufficient for 3D model description. In contrast, we compare the query sketch with 60 projection views of each 3D model, as described in Section 2.1. Sirin Y et al. [34] computed the total number and ratio of pixels, which helps to distinguish shapes with similar skeletons; however, since each 3D model must be normalized to a high standard, the pixel-based comparison algorithm is inaccurate. Figure 14 shows the Precision-Recall plots of our approach and the other four leading sketch-based 3D model retrieval algorithms. The experiments demonstrate that our approach is significantly better than the other retrieval techniques.
Figure 14. Overall retrieval performance comparison result
4. CONCLUSION
In this paper, we proposed a novel sketch-based 3D model retrieval approach that combines the ASSM and histogram feature comparison algorithms. First, the ASSM algorithm is proposed for extracting the skeletons of the 2D query sketch and the 2D views of 3D models. The ASSM algorithm computes the skeleton strength map by isotropic diffusion on the gradient vector field, selects critical points from the skeleton strength map and connects them with Kruskal's algorithm, which solves the connectivity and completeness problems of previous algorithms. Then, the histogram feature comparison algorithm uses the radii of the disks at skeleton points and the lengths of skeleton branches to extract the histogram feature. The radii of the disks and the lengths of skeleton branches are invariant under non-rigid transformations, which helps to obtain high-precision retrieval results. Additionally, we compare the similarity between two skeletons using the histogram feature matrices of the skeleton endpoints, which are far fewer than the skeleton points, leading to lower computational complexity. Experimental results demonstrate that our approach, which combines these two algorithms, significantly outperforms several leading sketch-based retrieval approaches.
REFERENCES
[1] Wang, F., Lin, L. F. and Tang, M. A new sketch-based 3D model retrieval approach by using global and local
features. Graphical Models. 2014, 76(3):128-139.
[2] Shao T, Xu W, Yin K, et al. Discriminative Sketch‐based 3D Model Retrieval via Robust Shape Matching.
Computer Graphics Forum. 2011, 30(7):2011–2020.
[3] Tabedzki M, Saeed K, Szczepański A. A modified K3M thinning algorithm. International Journal of Applied
Mathematics & Computer Science. 2016, 26(2):439-450.
[4] Liu Y J, Chen Z, Tang K. Construction of Iso-Contours, Bisectors, and Voronoi Diagrams on Triangulated Surfaces.
IEEE Transactions on Pattern Analysis & Machine Intelligence.2011, 33(8):1502-17.
[5] Gorelick L, Galun M, Sharon E, et al. Shape representation and classification using the Poisson equation. IEEE
Transactions on Pattern Analysis & Machine Intelligence.2004, 2(12):1991-2005.
[6] Morales S, Naranjo V, Angulo J, et al. Determination of retinal network skeleton through mathematical morphology.
Signal Processing Conference IEEE.2014:1691-1695.
[7] Guo Y, Sengur A. A novel 3D skeleton algorithm based on neutrosophic cost function. Applied Soft Computing.
2015, 36(C):210-217.
[8] Bertrand G, Couprie M. Isthmus Based Parallel and Symmetric 3D Thinning Algorithms. Graphical Models.2015,
80(C):1-15.
[9] Giesen J, Miklos B, Pauly M. The medial axis of the union of inner Voronoi balls in the plane. Computational
Geometry.2012, 45(9):515-523.
[10] Krinidis S, Chatzis V. A Skeleton Family Generator via Physics-Based Deformable Models. Image Processing
IEEE Transactions on.2009, 18(1):1-11.
[11] Mitra A, Roy S, Setua S K. Shape analysis of decisive objects from an image using mathematical morphology.
Third International Conference on Computer, Communication, Control and Information Technology, IEEE.
2015:1-6.
[12] Yan T Q, Zhou C X. A Continuous Skeletonization Method Based on Distance Transform. Communications in
Computer & Information Science. 2012, 304:251-258.
[13] Liu J, Liu W, Wu C. Objects Similarity Measurement Based on Skeleton Tree Descriptor Matching. IEEE
International Conference on Computer-Aided Design and Computer Graphics. 2007:96-101.
[14] Bai X, Latecki L J. Path Similarity Skeleton Graph Matching. IEEE Transactions on Pattern Analysis & Machine
Intelligence. 2008, 30(7):1282-92.
[15] Qiao G, Zong G, Sun M, et al. Automatic neutrophil nucleus lobe counting based on graph representation of region
skeleton. Cytometry Part A the Journal of the International Society for Analytical Cytology. 2012, 81A (9):734-
742.
[16] Dragan F F. Tree-Like Structures in Graphs: A Metric Point of View. Graph-Theoretic Concepts in Computer
Science. Springer Berlin Heidelberg.2013:1-4.
[17] Youssef R, Kacem A, Sevestre-Ghalila S, et al. Graph Structuring of Skeleton Object for Its High-Level
Exploitation. Image Analysis and Recognition. Springer International Publishing. 2015:419-426.
[18] Pandit P M, Akojwar S G. Performance Evaluation Of Object Recognition Using Skeletal Shock Graph: Challenges
And Future Prospects. International Journal of Advances in Engineering & Technology.2012.
[19] Ruberto C D. Recognition of shapes by attributed skeletal graphs. Pattern Recognition. 2004, 37(1):21-31.
[20] Guohua Geng. Data structure .Beijing: Higher Education Press. 2011:241-242.
[21] Saavedra J M, Bustos B. An Improved Histogram of Edge Local Orientations for Sketch-Based Image Retrieval. Pattern Recognition. Springer Berlin Heidelberg. 2010:432-441.
[22] L. McMillan, G. Bishop. Plenoptic Modeling: An Image-Based Rendering System. Proc. of ACM
SIGGRAPH.1995: 39-46.
[23] D. Chen, X. Tian, Y. Shen, M. Ouhyoung. On visual similarity based 3D model retrieval, Computer Graphics
Forum.2003, 22 (3):223-232.
[24] Bo Li, Henry Johan. Sketch-based 3D model retrieval by incorporating 2D-3D alignment. Multimedia Tools &
Applications. 2013, 65(3):363-385.
[25] Huttenlocher D, Ullman S. Object recognition using alignment. In: Proc. of IEEE international conference on
computer vision (ICCV).1987: 102-111.
[26] Q Li, X Bai, W Liu. Skeletonization of gray-scale image from incomplete boundaries, In International Conference
on Image Processing 2008, 10:877–880.
[27] J. Canny, A computational approach to edge detection, IEEE Trans. PAMI 1986, 8(6): 679-698.
[28] H. C. Longuet-Higgins. The symmetry groups of non-rigid molecules. Molecular Physics. 1963, 6(6):1079-1086.
[29] NTU 3D Model Database, ver.1 http://3d.csie.ntu.edu.tw/.
[30] Shilane P, Min P, Kazhdan M, Funkhouser T. The Princeton shape benchmark. In: Proc. of shape modelling and
applications, 2004:167-178.
[31] Sundar H, Silver D, Gagvani N, et al. Skeleton Based Shape Matching and Retrieval[C].Shape Modeling
International. IEEE, 2003:130.
[32] Lei H, Li Y, Chen H, et al. A novel sketch-based 3D model retrieval method by integrating skeleton graph and
contour feature[J]. Journal of Advanced Mechanical Design Systems & Manufacturing, 2015, 9(4).
[33] Lin S, Guo Y, Liang Y, et al. 3D Model retrieval based on skeleton[C].IEEE International Conference on
Networking, Architecture and Storage. IEEE, 2015:321-325.
[34] Sirin Y, Demirci M F. 2D and 3D shape retrieval using skeleton filling rate[J]. Multimedia Tools & Applications,
2016:1-26.