This document describes a memory-based object recognition method that combines an associative memory with a Hough-like evidence combination technique. This allows it to overcome limitations of traditional memory-based methods like sensitivity to clutter and occlusion. The method uses local, semi-invariant "keys" extracted from images to access an associative memory and retrieve potential object identities and configurations. These hypotheses are then used as keys in a second associative memory to accumulate evidence. Experiments were conducted using this two-stage method with polyhedral and curved objects, extracting either segment chains or curve patches as keys. The method was able to recognize objects over a full range of rotations in 3D space.
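The Hough-like evidence combination described above can be sketched as a voting scheme: each observed key retrieves its stored (object, configuration) hypotheses from the first memory, and votes are accumulated per hypothesis. The keys, objects, and pose values below are illustrative placeholders, not the paper's actual features:

```python
from collections import defaultdict

# Toy first-stage associative memory: each local key retrieves
# (object, pose) hypotheses. All entries here are made up.
memory = {
    "key_a": [("mug", 0), ("mug", 90)],
    "key_b": [("mug", 0), ("box", 45)],
    "key_c": [("mug", 0)],
}

def accumulate_evidence(observed_keys):
    """Hough-style voting: each key votes for its stored hypotheses;
    the hypothesis with the most votes wins."""
    votes = defaultdict(int)
    for key in observed_keys:
        for hypothesis in memory.get(key, []):
            votes[hypothesis] += 1
    return max(votes, key=votes.get), dict(votes)

best, votes = accumulate_evidence(["key_a", "key_b", "key_c"])
```

Because votes come from local keys, a missing key (occlusion) or an extra key (clutter) only removes or adds a few votes rather than invalidating the match.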
STUDY OF DISTANCE MEASUREMENT TECHNIQUES IN CONTEXT TO PREDICTION MODEL OF WE... (ijscai)
The Internet is a boon of the modern era, as every organization uses it for dissemination of information and e-commerce-related applications. People in organizations sometimes experience delay while accessing the Internet in spite of adequate bandwidth. A prediction model of web caching and prefetching is an ideal solution to this delay problem. The prediction model analyses the history of Internet users from raw server log files, determines the future sequence of web objects, and places those objects nearer to the user so that access latency is reduced to some extent and the delay problem is solved. To determine the sequence of future web objects, it is necessary to determine the proximity of one web object to another by identifying a distance metric technique suited to web caching and prefetching. This paper studies different distance metric techniques and concludes that bioinformatics-based distance metric techniques are ideal in the context of Web Caching and Web Prefetching.
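One bioinformatics-style distance of the kind the abstract refers to is edit (Levenshtein) distance, which sequence-alignment work relies on and which applies directly to sequences of web-object accesses. A minimal sketch, with made-up page sequences:

```python
def edit_distance(a, b):
    """Levenshtein distance via a single-row dynamic program.
    Works on strings or on lists of web-object identifiers."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev = dp[0]          # D[i-1][j-1] for the first column
        dp[0] = i
        for j, y in enumerate(b, 1):
            cur = dp[j]       # save D[i-1][j] before overwriting
            dp[j] = min(dp[j] + 1,            # deletion
                        dp[j - 1] + 1,        # insertion
                        prev + (x != y))      # substitution / match
            prev = cur
    return dp[-1]

# Proximity of two user sessions, as sequences of page identifiers:
d = edit_distance(["home", "news", "sports"], ["home", "sports"])
```

A small distance between two users' access sequences suggests the same objects are worth prefetching for both.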
Integrated Hidden Markov Model and Kalman Filter for Online Object Tracking (ijsrd.com)
This work studies visual priors learned from generic real-world images to represent objects in a scene. Existing work presented an online tracking algorithm that transfers a visual prior learned offline to online object tracking: a complete dictionary representing the visual prior is learned from a collection of real-world images. The prior knowledge of objects is generic, and the training image set does not contain any observation of the target object. The learned visual prior is transferred to construct the object representation using sparse coding and multiscale max pooling. A linear classifier is learned online to distinguish the target from the background and to identify appearance variations of both over time. Tracking is carried out within a Bayesian inference framework, and the learned classifier is used to construct the observation model. A particle filter is used to estimate the tracking result sequentially; however, it is unable to work efficiently in noisy scenes, and time-shift variance is not handled appropriately when tracking the target using prior information about the object structure. This paper proposes an HMM-based Kalman filter to improve online target tracking in noisy sequential image frames. The covariance vector is measured to identify noisy scenes, and discrete time steps are evaluated to separate the target object from the background. Experiments are conducted on challenging scene sequences to evaluate the performance of the tracking algorithm in terms of tracking success rate, centre location error, number of scenes, learned object sizes, and tracking latency.
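The Kalman update machinery the proposal builds on can be illustrated with a scalar filter. This is a generic 1-D sketch with made-up noise parameters and measurements, not the paper's HMM-integrated tracker:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter with a constant-position model.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = measurements[0], 1.0      # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += q                       # predict: variance grows by q
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # correct with measurement residual
        p *= (1 - k)                 # posterior variance shrinks
        estimates.append(x)
    return estimates

noisy = [1.2, 0.8, 1.1, 0.9, 1.05]   # noisy observations of position ~1.0
smoothed = kalman_1d(noisy)
```

The same predict/correct cycle runs per frame in a tracker, with the observation model supplying `z` and its noise `r`.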
A HYBRID FUZZY SYSTEM BASED COOPERATIVE SCALABLE AND SECURED LOCALIZATION SCH... (ijwmn)
Localization entails position estimation of sensor nodes by employing different techniques and mathematical computations. Localizable sensors also form an inherent part of the functioning of IoT devices and robotics. In this article, the authors extend a novel scheme for node localization, implemented using a hybrid fuzzy logic system to trace node locations inside the deployment region, presented by Abhishek Kumar et al. The results obtained were then optimized using Gauss-Newton optimization to improve the localization accuracy by 50% to 90% vis-à-vis weighted centroid and other fuzzy-based localization algorithms. This article attempts to scale the proposed scheme to a large number of sensor nodes, emulating a somewhat real-world scenario, by introducing cooperative localization into the previously presented work. The study also analyses the effectiveness of such scaling by comparing localization accuracy. Next, the article incorporates security into the proposed cooperative localization approach to detect malicious nodes/anchors by mutual authentication using the ElGamal digital signature scheme. A detailed study of the impact of incorporating security and scaling on average processing time and localization coverage has also been performed. The processing time increased by 2.5 s for 500 nodes (attributable to the larger number of iterations and computations and the large deployment area relative to the small radio range of nodes), and coverage remained almost equal, albeit slightly lower by 1% to 2%. Apart from these, the article also discusses the impact of adding extra functionalities to the proposed hybrid fuzzy system based localization scheme on processing time and localization accuracy. Lastly, this study briefly describes how the proposed scalable, cooperative and secure localization scheme tackles the types of attacks that pose a threat to localization.
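The weighted-centroid baseline the article compares against can be sketched in a few lines. The anchor coordinates and distance estimates below are hypothetical; a real deployment would derive distances from RSSI or similar measurements:

```python
def weighted_centroid(anchors, distances):
    """Estimate a node position as the distance-weighted centroid of
    anchor positions: nearer anchors receive larger weights."""
    weights = [1.0 / d for d in distances]
    total = sum(weights)
    x = sum(w * ax for w, (ax, _) in zip(weights, anchors)) / total
    y = sum(w * ay for w, (_, ay) in zip(weights, anchors)) / total
    return x, y

anchors = [(0, 0), (10, 0), (0, 10)]          # known anchor positions
est = weighted_centroid(anchors, [2.0, 9.0, 9.0])  # node near the origin
```

Schemes like the one described refine such a coarse estimate, e.g. via fuzzy inference and Gauss-Newton iteration.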
The document summarizes an adaptive image steganography technique that embeds secret messages into digital images. It proposes using adaptive quantization embedding, where quantization steps for image blocks are optimized to guarantee more data can be embedded in busy image areas with high contrast. The technique embeds adaptive quantization parameters and message bits into the cover image using a difference expanding algorithm. Simulation results showed the proposed scheme can provide a good balance between imperceptibility and embedding capacity.
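The difference-expanding step can be sketched with Tian's classic transform on one pixel pair: the pair's difference is doubled and the secret bit appended to it, and the process is exactly reversible. The pixel values below are arbitrary examples:

```python
def de_embed(x, y, bit):
    """Difference expansion: hide one bit in a pixel pair (x, y)."""
    l, h = (x + y) // 2, x - y       # integer average and difference
    h2 = 2 * h + bit                 # expand difference, append bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the hidden bit and the original pixel pair."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1         # arithmetic shift = floor(h2 / 2)
    return (l + (h + 1) // 2, l - h // 2), bit

stego = de_embed(206, 201, 1)        # hide bit 1 in the pair (206, 201)
orig, bit = de_extract(*stego)
```

Busy, high-contrast blocks tolerate larger expanded differences without visible distortion, which is why the adaptive scheme allocates them more capacity.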
LOCATION BASED DETECTION OF REPLICATION ATTACKS AND COLLUDING ATTACKS (Editor IJCATR)
Wireless sensor networks gain importance because of the critical applications in which they are involved, such as industrial automation, healthcare, military and surveillance. Among the security attacks on wireless sensor networks, we consider two active attacks: the node replication attack and the colluding attack. We use localized algorithms (i.e., replication detection is done at the node level and the replica is eliminated without the intervention of the base station) to counter replication and colluding attacks. Replication attacks are detected using a unique key pair and a cryptographic hash function. We propose to use the XED and EED algorithms [1] (which authenticate the node and try to reduce replication); in addition, using the event-detected location, a non-beacon node is used to find the location of a malicious node, and through a simple threshold verification we identify malicious clusters.
Privacy Preserving MFI Based Similarity Measure For Hierarchical Document Clu... (IJORCS)
The document proposes a privacy-preserving approach for hierarchical document clustering using maximal frequent item sets (MFI). First, MFI are identified from document collections using the Apriori algorithm to define clusters precisely. Then, the same MFI-based similarity measure is used to construct a hierarchy of clusters. This approach decreases dimensionality and avoids duplicate documents, thereby protecting individual copyrights. The methodology and algorithm are described in detail.
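The pipeline above can be sketched end to end: a tiny Apriori pass finds frequent itemsets, the maximal ones are kept, and document similarity is scored by overlap of the maximal frequent itemsets each document contains. The documents and the Jaccard-style overlap below are illustrative assumptions, not the paper's exact measure:

```python
from itertools import combinations

def frequent_itemsets(docs, min_support):
    """Tiny Apriori: itemsets contained in >= min_support documents."""
    docs = [set(d) for d in docs]
    level = [frozenset([t]) for t in {t for d in docs for t in d}]
    frequent = []
    while level:
        level = [s for s in level
                 if sum(s <= d for d in docs) >= min_support]
        frequent += level
        # candidates: unions of same-size sets that grow by one item
        level = list({a | b for a, b in combinations(level, 2)
                      if len(a | b) == len(a) + 1})
    return frequent

docs = [{"data", "mining", "privacy"},
        {"data", "mining", "cluster"},
        {"data", "privacy", "cluster"}]
freq = frequent_itemsets(docs, min_support=2)
maximal = [s for s in freq if not any(s < t for t in freq)]  # the MFIs

def mfi_similarity(d1, d2, mfis):
    """Overlap of the maximal frequent itemsets each document contains."""
    f1 = {m for m in mfis if m <= d1}
    f2 = {m for m in mfis if m <= d2}
    return len(f1 & f2) / len(f1 | f2) if f1 | f2 else 0.0
```

Because similarity is computed over a handful of MFIs rather than full term vectors, dimensionality drops sharply, as the abstract notes.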
A NOVEL APPROACH FOR CONCEALED DATA SHARING AND DATA EMBEDDING FOR SECURED CO... (IJCSEA Journal)
This paper introduces a new method of securing images using cryptographic and steganographic techniques. Cryptography is the science of securing data by encryption, whereas Steganography is the method of hiding secret messages in other messages, so that the secret's very existence is concealed. The proposed method uses both techniques: it encrypts the data and then hides the encrypted data in another medium, so that the fact a message is being sent is concealed. The image is concealed by converting it into a ciphertext using the S-DES algorithm with a secret key, which is also an image, and sent to the receiving end securely.
Abstract—Classical machine learning techniques have been employed extensively in intrusion detection, but due to the rising number and sophistication of attacks, more advanced techniques, including ensemble-based methods, neural networks and deep learning, have been applied. However, there is still a need for improved machine learning approaches that detect attacks more effectively and efficiently. Stacked generalization has been shown to be capable of learning from features and meta-features, but has been limited by the deficiencies of base classifiers and the lack of optimization in the choice of meta-feature combination. This paper therefore proposes a stacked generalization ensemble based on a two-tier meta-learner, in which the outputs of a classical stacked ensemble are passed to a multi-feature-based stacked ensemble, which is optimized using a grid-search approach. Nine data features and four meta-features derived from Logistic Regression, Support Vector Machine, Naïve Bayes, and Multilayer Perceptron neural networks are used for the classification task. By applying neural networks as the meta-learner for the classification of NSL-KDD data, improved performances in terms of accuracy, precision, recall and F-measure of 0.97, 0.98, 0.98 and 0.98, respectively, are achieved.
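The core of stacking is that base-classifier outputs become meta-features for a second learner, and the meta-feature combination can be tuned by grid search. The sketch below is a drastic simplification with fabricated probabilities and a weighted-threshold meta-learner in place of the paper's neural network:

```python
# Hypothetical base-classifier probabilities (meta-features) for six
# samples, and their true labels (1 = attack); the numbers are made up.
meta = [(0.9, 0.8), (0.2, 0.4), (0.7, 0.9),
        (0.3, 0.1), (0.8, 0.6), (0.1, 0.3)]
labels = [1, 0, 1, 0, 1, 0]

def accuracy(w1, w2, threshold=0.5):
    """Meta-learner: threshold a weighted blend of base probabilities."""
    preds = [int(w1 * a + w2 * b >= threshold) for a, b in meta]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Grid search over the meta-feature combination, echoing the paper's
# optimization step (a neural-network meta-learner is used there).
grid = [i / 10 for i in range(11)]
best = max(((w, 1 - w) for w in grid), key=lambda p: accuracy(*p))
```

In the two-tier design, the tuned first-tier outputs would themselves feed a second, multi-feature stacked ensemble.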
International Journal of Computer Science and Information Security,IJCSIS ISSN 1947-5500, Pittsburgh, PA, USA
Notes for the Advanced Image Processing subject, which comes under Computer Science for B.E./B.Tech and M.E./M.Tech students. Hope this helps you.
Nesting of five modulus method with improved LSB substitution to hide an image... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Discovering latent semantics in web documents (shanofa sanu)
A Novel Classification via Clustering Method for Anomaly Based Network Intrus... (IDES Editor)
Intrusion detection in the internet is an active
area of research. Intruders can be classified into two
types, namely: external intruders, who are unauthorized
users of the computers they attack, and internal
intruders, who have permission to access the system but
with some restrictions. The aim of this paper is to present
a methodology to recognize attacks during the normal
activities in a system. A novel classification via sequential
information bottleneck (sIB) clustering algorithm has
been proposed to build an efficient anomaly based
network intrusion detection model. We have compared
our proposed method with other clustering algorithms
like X-Means, Farthest First, Filtered clusters, DBSCAN,
K-Means, and EM (Expectation-Maximization)
clustering in order to find the suitability of our proposed
algorithm. A subset of KDDCup 1999 intrusion detection
benchmark dataset has been used for the experiment.
Results show that the proposed method is efficient, achieving high
detection accuracy and a low false positive rate in comparison to
the other existing methods.
This document proposes a method for annotating faces in images without supervision by mining the web. The method has two steps:
1. It ranks faces retrieved from a text-based search engine based on a local density score, which measures how similar a face is to its neighbors. Faces with higher scores are considered more relevant.
2. It then improves this ranking by modeling it as a classification problem, where faces are classified as the queried person or not. Multiple weak classifiers are trained on different subsets and combined via bagging to reduce noise from the unlabeled data. The faces are then re-ranked based on the classifier probabilities. Repeating this process iteratively improves the ranking.
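Step 2's bagging can be sketched with a toy example: weak classifiers are fitted on random subsets, their outputs are averaged into a relevance score, and faces are re-ranked by that score. The 1-D "descriptors" and mean-threshold classifier below are fabricated stand-ins for real face features:

```python
import random

random.seed(0)
# Hypothetical 1-D face descriptors: relevant faces cluster near 1.0,
# noise faces near 0.0 (real features would be high-dimensional).
faces = [1.1, 0.95, 0.1, 1.05, 0.2, 0.9, 0.15]

def weak_classifier(sample):
    """A trivial 'classifier' trained on a random subset: its decision
    threshold is simply the subset mean."""
    t = sum(sample) / len(sample)
    return lambda x: 1.0 if x >= t else 0.0

def bagged_scores(data, rounds=25):
    """Average the votes of weak classifiers trained on random subsets."""
    clfs = [weak_classifier(random.sample(data, 4)) for _ in range(rounds)]
    return [sum(c(x) for c in clfs) / rounds for x in data]

scores = bagged_scores(faces)
ranking = sorted(range(len(faces)), key=lambda i: -scores[i])
```

Averaging over subsets damps the influence of mislabeled or noisy faces in the unlabeled pool, which is the point of bagging here.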
AN EFFECTIVE SEMANTIC ENCRYPTED RELATIONAL DATA USING K-NN MODEL (ijsptm)
Data exchange and data publishing are becoming an important part of business and academic practice. Data owners need to maintain rights over the datasets they share. A right-protection mechanism can establish ownership of shared data without impairing its usability under a wide range of machine learning and mining tasks. The approach provides two algorithms: one preserves Nearest-Neighbor (NN) relations and the other preserves the Minimum Spanning Tree (MST). The K-NN protocol guarantees that relations between objects remain unaltered, so the algorithms achieve both right protection and utility preservation. The right-protection scheme is based on watermarking, and the watermarking methodology preserves distance relationships.
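What "preserving nearest-neighbor relations" means can be checked directly: after the watermark perturbs the data, every object should keep the same nearest neighbor. The 2-D points and the small perturbation below are fabricated for illustration:

```python
def nearest(points, i):
    """Index of the nearest neighbour of points[i] (2-D Euclidean)."""
    return min((j for j in range(len(points)) if j != i),
               key=lambda j: (points[i][0] - points[j][0]) ** 2 +
                             (points[i][1] - points[j][1]) ** 2)

def nn_preserved(original, watermarked):
    """True if every object keeps the same nearest neighbour."""
    return all(nearest(original, i) == nearest(watermarked, i)
               for i in range(len(original)))

data = [(0.0, 0.0), (1.0, 0.2), (5.0, 5.0), (5.5, 4.8)]
# A small, per-point watermark perturbation (illustrative only):
marked = [(x + 0.04 * i, y) for i, (x, y) in enumerate(data)]
```

A right-protection watermark is acceptable precisely when checks like this pass for all objects.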
Text hiding in text using invisible character (IJECEIAES)
Steganography can be defined as the art and science of hiding information in data that can be read by a computer. Neither the eye nor statistical analysis should be able to distinguish the stego-cover from the original. This paper presents a new method to hide text within text characters. The systematic method uses the structure of an invisible character to hide and extract secret texts. Creating the secret message comprises several main stages: using the letters of the original message, selecting a suitable cover text, dividing the cover text into blocks, hiding the secret text using the invisible character, and comparing the cover text with the stego-object. The study uses the positions of an invisible character (white space) in the cover text to hide the sender's secret messages. Experimental results show that the suggested method is highly secret because it uses multiple levels of complexity to thwart attackers.
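A simplified variant of whitespace-position hiding can be sketched as follows: each secret bit is encoded by the number of spaces after a cover word (one space for 0, two for 1). This is an assumption-laden toy, not the paper's exact block-based scheme:

```python
import re

def hide(cover_words, bits):
    """Encode each bit in the gap after a cover word:
    one space = 0, two spaces = 1."""
    assert len(bits) <= len(cover_words) - 1, "cover text too short"
    out = []
    for i, word in enumerate(cover_words[:-1]):
        gap = 1 + (bits[i] if i < len(bits) else 0)
        out.append(word + " " * gap)
    return "".join(out) + cover_words[-1]

def reveal(stego, n_bits):
    """Read the bits back from the run lengths of the spaces."""
    gaps = re.findall(r" +", stego)
    return [len(g) - 1 for g in gaps[:n_bits]]

stego = hide(["the", "quick", "brown", "fox", "jumps"], [1, 0, 1, 1])
```

To a casual reader the stego text looks like slightly irregular spacing, which is the sense in which the carrier is "invisible".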
A novel secure image steganography method based on chaos theory in spatial do... (ijsptm)
This paper presents a novel approach of building a secure data hiding technique in digital images. The
image steganography technique takes advantage of the limited power of the human visual system (HVS). It uses
image as cover media for embedding secret message. The most important requirement for a steganographic
algorithm is to be imperceptible while maximizing the size of the payload. In this paper a method is
proposed to encrypt the secret bits of the message based on chaos theory before embedding into the cover
image. A 3-3-2 LSB insertion method has been used for image steganography. Experimental results show a
substantial improvement in the Peak Signal to Noise Ratio (PSNR) and Image Fidelity (IF) value of the
proposed technique over the base technique of 3-3-2 LSB insertion.
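The 3-3-2 LSB layout with a chaos-derived keystream can be sketched for a single pixel. The logistic map stands in for the paper's chaos-based encryption, and the pixel values, map parameters and key are illustrative assumptions:

```python
def chaos_bits(x0, r, n):
    """Logistic map x -> r*x*(1-x), thresholded at 0.5 to a keystream."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def embed_332(pixel, msg_bits, key_bits):
    """XOR-encrypt 8 message bits with the keystream, then place
    3 bits in R, 3 in G, 2 in B least-significant bits."""
    enc = [m ^ k for m, k in zip(msg_bits, key_bits)]
    r, g, b = pixel
    r = (r & ~0b111) | (enc[0] << 2 | enc[1] << 1 | enc[2])
    g = (g & ~0b111) | (enc[3] << 2 | enc[4] << 1 | enc[5])
    b = (b & ~0b11) | (enc[6] << 1 | enc[7])
    return (r, g, b)

def extract_332(pixel, key_bits):
    r, g, b = pixel
    enc = [(r >> 2) & 1, (r >> 1) & 1, r & 1,
           (g >> 2) & 1, (g >> 1) & 1, g & 1,
           (b >> 1) & 1, b & 1]
    return [e ^ k for e, k in zip(enc, key_bits)]

key = chaos_bits(x0=0.37, r=3.99, n=8)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_332((200, 120, 64), msg, key)
```

Blue gets only 2 bits because the HVS is most sensitive to changes in that channel; without the chaotic key, the extracted LSBs are meaningless.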
This document provides an overview of unsupervised machine learning techniques for clustering. It discusses different types of clustering including flat partitions, hierarchical trees, and hard vs soft memberships. Specific clustering algorithms are covered like K-means, hierarchical agglomerative clustering (HAC), DBSCAN, and graph-based clustering. Distance functions and linkage methods for HAC are also summarized. The document concludes with examples of applications for different clustering techniques.
IRJET - Study and Performance Evaluation of Different Symmetric Key Crypto... (IRJET Journal)
This document summarizes a study that evaluates the performance of four symmetric key cryptography algorithms: DES, 3DES, Blowfish, and AES. The study considers criteria like file size, file type, encryption and decryption time, and block size. It finds that Blowfish has the best performance, encrypting and decrypting data faster than the other algorithms. AES also performs well, while 3DES has the lowest performance due to its longer key length. The document reviews related literature comparing the performance of symmetric key cryptography algorithms and techniques that combine cryptography with steganography for enhanced security.
Cryptography using artificial neural network (Mahira Banu)
This document proposes using artificial neural networks for cryptography. It describes using a backpropagation neural network for decryption, where the network is trained on encrypted-decrypted message pairs. Boolean algebra is used for encryption, permuting messages and "doping" with additional bits. The neural network can then be used as a public key for decryption, with a private key for encryption. Simulation results showed the neural network approach weakened key guessing compared to other methods.
Two New Approaches for Secured Image Steganography Using Cryptographic Techni... (sipij)
The science of securing a data by encryption is Cryptography whereas the method of hiding secret messages in other messages is Steganography, so that the secret’s very existence is concealed. The term ‘Steganography’ describes the method of hiding cognitive content in another medium to avoid detection by the intruders. This paper introduces two new methods wherein cryptography and steganography are combined to encrypt the data as well as to hide the encrypted data in another medium so the fact that a message being sent is concealed. One of the methods shows how to secure the image by converting it into cipher text by S-DES algorithm using a secret key and conceal this text in another image by steganographic method. Another method shows a new way of hiding an image in another image by encrypting the image directly by S-DES algorithm using a key image and the data obtained is concealed in another image. The proposed method prevents the possibilities of steganalysis also.
TEXT STEGANOGRAPHY USING LSB INSERTION METHOD ALONG WITH CHAOS THEORY (IJCSEA Journal)
The art of information hiding has been around nearly as long as the need for covert communication. Steganography, the concealing of information, arose early on as an extremely useful method for covert information transmission. Steganography is the art of hiding secret message within a larger image or message such that the hidden message or an image is undetectable; this is in contrast to cryptography, where the existence of the message itself is not disguised, but the content is obscure. The goal of a steganographic method is to minimize the visually apparent and statistical differences between the cover data and a steganogram while maximizing the size of the payload. Current digital image steganography presents the challenge of hiding message in a digital image in a way that is robust to image manipulation and attack. This paper explains about how a secret message can be hidden into an image using least significant bit insertion method along with chaos.
Multiview Alignment Hashing for Efficient Image Search (1crore projects)
The document describes a new method called Multiview Alignment Hashing (MAH) for efficient image search. MAH aims to fuse multiple image feature representations while preserving the high-dimensional joint probability distribution of the data and obtaining orthogonal bases. It does this by formulating an objective function that is optimized using an alternate optimization procedure. This finds low-dimensional matrix factorizations via a technique called Regularized Kernel Nonnegative Matrix Factorization. After optimization, binary hash codes are obtained for images that can be used for efficient similarity search. The method is evaluated on several image datasets and is shown to outperform other state-of-the-art multiview hashing techniques.
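The final step, similarity search over binary hash codes, can be illustrated with a generic random-projection (LSH-style) stand-in. This is not the MAH optimization itself, and the feature vectors below are fabricated; it only shows why short binary codes make search cheap:

```python
import random

random.seed(1)
DIM, BITS = 8, 16
# Random hyperplanes playing the role of learned hash functions.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def hash_code(feature):
    """Binary code: the sign of each projection of the feature vector."""
    return tuple(int(sum(p * f for p, f in zip(plane, feature)) >= 0)
                 for plane in planes)

def hamming(a, b):
    """Hamming distance between two codes: count of differing bits."""
    return sum(x != y for x, y in zip(a, b))

base = [0.5, -0.2, 0.8, 0.1, -0.4, 0.3, 0.0, 0.9]  # an image's features
near = [f + 0.01 for f in base]    # a near-duplicate image
far = [-f for f in base]           # a very different image
```

Similar images land at small Hamming distance, so lookup reduces to fast bitwise comparisons instead of high-dimensional distance computations.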
This document proposes a linear recurrent convolutional neural network model for segment-based multiple object tracking in video. The model takes images as input and uses a CNN to classify superpixels, then performs segmentation and uses nonlinear NNs and a linear recurrent tracker layer to match segments over time. The objectives are to improve the tracker layer efficiency by modifying the matrix inverse and determine parameters for the model. Evaluation will use a dataset with ground truth segmentation and optical flow to train and compare to state-of-the-art methods.
INTELLIGENT INFORMATION RETRIEVAL WITHIN DIGITAL LIBRARY USING DOMAIN ONTOLOGY (cscpconf)
A digital library is a type of information retrieval (IR) system. The existing information retrieval
methodologies generally have problems on keyword-searching. We proposed a model to solve
the problem by using concept-based approach (ontology) and metadata case base. This model
consists of identifying domain concepts in user’s query and applying expansion to them. The
system aims at contributing to an improved relevance of results retrieved from digital libraries
by proposing a conceptual query expansion for intelligent concept-based retrieval. We need to
import the concept of ontology, making use of its advantage of abundant semantics and
standard concepts. A domain-specific ontology can be used to improve information retrieval
from the traditional keyword-based level to the knowledge (or concept) based level and change the
process of retrieval from traditional keyword matching to semantics matching. One approach is
query expansion techniques using domain ontology and the other would be introducing a case
based similarity measure for metadata information retrieval using Case Based Reasoning
(CBR) approach. Results show improvements over classic method, query expansion using
general purpose ontology and a number of other approaches.
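Conceptual query expansion can be sketched with a toy ontology mapping concepts to related terms. The concepts and relations below are illustrative placeholders, not a real domain ontology:

```python
# Toy domain ontology: concept -> related terms (synonyms / narrower
# terms). All entries here are made-up examples.
ontology = {
    "neural network": ["deep learning", "perceptron", "backpropagation"],
    "retrieval": ["search", "indexing"],
}

def expand_query(query):
    """Concept-based expansion: append the ontology's related terms for
    every concept detected in the query."""
    expanded = query.lower().split()
    for concept, related in ontology.items():
        if concept in query.lower():
            expanded += related
    return expanded

q = expand_query("neural network retrieval")
```

The expanded term list is then matched against the metadata index, so a document mentioning only "perceptron" can still satisfy the original query.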
The document presents an overview of searching in metric spaces. It discusses how similarity searching is needed for unstructured data like text, images, and audio, where exact matching is not possible. It describes how similarity is modeled using a distance function between objects in a metric space. The document surveys existing solutions from different fields that address proximity searching in metric spaces and vector spaces. It aims to provide a unified framework to analyze and categorize existing algorithms.
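A core trick the surveyed algorithms share is pivot-based pruning via the triangle inequality: since |d(q,p) - d(p,x)| <= d(q,x), a precomputed table of distances to a pivot p can rule out objects without computing d(q,x). A 1-D absolute-difference metric is used below purely for illustration:

```python
dist = lambda a, b: abs(a - b)        # any metric works; 1-D for clarity
points = [1, 2, 8, 9, 20]
pivot = 5
table = {x: dist(pivot, x) for x in points}   # built once, at index time

def range_search(query, radius):
    """Return all points within `radius` of `query`, skipping any point
    whose triangle-inequality lower bound already exceeds the radius."""
    d_qp = dist(query, pivot)
    hits = []
    for x in points:
        if abs(d_qp - table[x]) > radius:   # pruned: no distance call
            continue
        if dist(query, x) <= radius:
            hits.append(x)
    return hits
```

The pruning is lossless (the bound never discards a true answer); its effectiveness depends on how well the pivots spread the distance distribution.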
This document summarizes a two-stage method for 3D object recognition using an associative memory. In the first stage, key features are used to access hypotheses for an object's identity and configuration from an associative memory. These hypotheses are then fed into a second-stage associative memory that accumulates evidence to estimate the likelihood of each hypothesis based on feature statistics in a database. The method is robust to occlusion and clutter since it relies on local features rather than global properties, and allows objects to be added automatically through visual exploration from different views.
A semantic framework and software design to enable the transparent integratio... (Patricia Tavares Boralli)
This document proposes a conceptual framework to unify representations of natural systems knowledge. The framework is based on separating the ontological nature of an object of study from the context of its observation. Each object is associated with a concept defined in an ontology and an observation context describing aspects like location and time. Models and data are treated as generic knowledge sources with a semantic type and observation context. This allows flexible integration and calculation of states across heterogeneous sources by composing their observation contexts and resolving semantic compatibility. The framework aims to simplify knowledge representation by abstracting away complexity related to data format and scale.
This document discusses clustering of uncertain data objects. It first provides background on clustering uncertain data and challenges in doing so. It then proposes combining k-means clustering with Voronoi diagrams to improve the performance of k-means when clustering uncertain data. Specifically, it suggests using k-means to generate clusters and Voronoi diagrams to answer nearest neighbor queries, in order to minimize computation time. Finally, it concludes that integrating clustering algorithms with indexing methods can effectively cluster uncertain data objects.
This document discusses clustering of uncertain data objects. It first provides background on clustering uncertain data and challenges involved. It then reviews various existing approaches for clustering uncertain data, including using soft classifiers and probabilistic databases. The document proposes combining k-means clustering with Voronoi diagrams and indexing techniques to improve the performance and efficiency of clustering uncertain datasets. It outlines a plan to integrate k-means with Voronoi diagrams and indexing to reduce execution time and increase clustering performance and results for uncertain data. Finally, it concludes that combining clustering with indexing approaches can better handle uncertain data clustering challenges.
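The k-means half of this proposal can be sketched in pure Python by clustering uncertain 2-D objects, each represented as a list of sample points, under expected squared distance. The Voronoi-diagram indexing that would accelerate the nearest-centre queries is omitted; the sample data is invented for illustration.

```python
import random

def expected_sq_dist(samples, centre):
    """Expected squared distance from an uncertain object (its sample
    points) to a cluster centre."""
    return sum((x - centre[0]) ** 2 + (y - centre[1]) ** 2
               for x, y in samples) / len(samples)

def object_mean(samples):
    n = len(samples)
    return (sum(p[0] for p in samples) / n, sum(p[1] for p in samples) / n)

def uk_means(objects, k, iters=10, seed=1):
    """k-means over uncertain objects, each given as a list of samples."""
    rng = random.Random(seed)
    centres = [object_mean(o) for o in rng.sample(objects, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for o in objects:
            j = min(range(k), key=lambda c: expected_sq_dist(o, centres[c]))
            clusters[j].append(o)
        # recompute each centre from all samples of its assigned objects
        centres = [object_mean([p for o in cl for p in o]) if cl else centres[j]
                   for j, cl in enumerate(clusters)]
    return centres, clusters

objects = [[(0.0, 0.0), (0.4, 0.2)], [(0.1, 0.3), (0.2, 0.1)],
           [(10.0, 10.0), (10.4, 9.8)], [(9.9, 10.2), (10.1, 9.9)]]
centres, clusters = uk_means(objects, k=2)
```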
Abstract—Classical machine learning techniques have been widely employed in intrusion detection. But due to the rising number and sophistication of attacks, more advanced machine learning techniques, including ensemble-based methods, neural networks and deep learning techniques, have been applied. However, there is still a need for an improved machine learning approach to detect attacks more effectively and efficiently. The stacked generalization approach has been shown to be capable of learning from features and meta-features but has been limited by the deficiencies of base classifiers and the lack of optimization in the choice of meta-feature combination. This paper therefore proposes a stacked generalization ensemble approach based on a two-tier meta-learner, in which the outputs of a classical stacked ensemble are passed to a multi-feature-based stacked ensemble, which is optimized. A grid-search approach is used for the optimization. Nine data features and four meta-features derived from Logistic Regression, Support Vector Machine, Naïve Bayes, and Multilayer Perceptron neural network are used for the machine learning classification task. By applying neural networks as the meta-learner for the classification of NSL-KDD data, improved performances in terms of accuracy, precision, recall and F-measure of 0.97, 0.98, 0.98 and 0.98, respectively, are achieved.
International Journal of Computer Science and Information Security,IJCSIS ISSN 1947-5500, Pittsburgh, PA, USA
Notes for Advanced Image Processing subject. This subject comes under Computer Science for B.E./B.Tech and M.E./M.Tech. students. Hope this will help you.
Nesting of five modulus method with improved LSB substitution to hide an image... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Discovering latent semantics in web documents (shanofa sanu)
A Novel Classification via Clustering Method for Anomaly Based Network Intrus... (IDES Editor)
Intrusion detection in the internet is an active
area of research. Intruders can be classified into two
types, namely: external intruders, who are unauthorized
users of the computers they attack, and internal
intruders, who have permission to access the system but
with some restrictions. The aim of this paper is to present
a methodology to recognize attacks during the normal
activities in a system. A novel classification via sequential
information bottleneck (sIB) clustering algorithm has
been proposed to build an efficient anomaly based
network intrusion detection model. We have compared
our proposed method with other clustering algorithms
like X-Means, Farthest First, Filtered clusters, DBSCAN,
K-Means, and EM (Expectation-Maximization)
clustering in order to find the suitability of our proposed
algorithm. A subset of KDDCup 1999 intrusion detection
benchmark dataset has been used for the experiment.
Results show that the proposed method is efficient in
terms of detection accuracy and false positive rate in
comparison to the other existing methods.
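The classification-via-clustering step common to these approaches, assigning each cluster the majority class of its labelled members, can be sketched as follows; the toy assignments and labels are invented for illustration.

```python
from collections import Counter

def clusters_to_classifier(assignments, labels):
    """Map each cluster id to the majority class of the labelled
    training points assigned to it."""
    by_cluster = {}
    for c, y in zip(assignments, labels):
        by_cluster.setdefault(c, Counter())[y] += 1
    return {c: counts.most_common(1)[0][0] for c, counts in by_cluster.items()}

# Toy clustering output: cluster 0 is mostly "normal", cluster 1 mostly "attack".
assignments = [0, 0, 0, 1, 1, 1, 0]
labels = ["normal", "normal", "attack", "attack", "attack", "normal", "normal"]
mapping = clusters_to_classifier(assignments, labels)
predicted = [mapping[c] for c in assignments]
accuracy = sum(p == y for p, y in zip(predicted, labels)) / len(labels)
```

Detection accuracy and false-positive rate are then computed from the `predicted` labels, regardless of which clustering algorithm (sIB, K-Means, EM, etc.) produced the assignments.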
This document proposes a method for annotating faces in images without supervision by mining the web. The method has two steps:
1. It ranks faces retrieved from a text-based search engine based on a local density score, which measures how similar a face is to its neighbors. Faces with higher scores are considered more relevant.
2. It then improves this ranking by modeling it as a classification problem, where faces are classified as the queried person or not. Multiple weak classifiers are trained on different subsets and combined via bagging to reduce noise from the unlabeled data. The faces are then re-ranked based on the classifier probabilities. Repeating this process iteratively improves the ranking.
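Step 1's local density score can be sketched as the mean similarity of each face descriptor to its k most similar neighbours; the 2-D toy descriptors below are invented, and a real system would use high-dimensional face features.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def local_density_scores(faces, k=2):
    """Score each descriptor by its mean similarity to its k nearest
    neighbours; faces of the queried person form a dense group."""
    scores = []
    for i, f in enumerate(faces):
        sims = sorted((cosine(f, g) for j, g in enumerate(faces) if j != i),
                      reverse=True)
        scores.append(sum(sims[:k]) / k)
    return scores

faces = [(1.0, 0.1), (0.9, 0.2), (1.0, 0.0), (0.1, 1.0)]  # last is an outlier
scores = local_density_scores(faces)
ranking = sorted(range(len(faces)), key=lambda i: -scores[i])
```

Irrelevant faces retrieved by the text search land in sparse regions, so they receive low scores and sink to the bottom of the ranking.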
AN EFFECTIVE SEMANTIC ENCRYPTED RELATIONAL DATA USING K-NN MODEL (ijsptm)
Data exchange and data publishing are becoming an important part of business and academic practices.
Data owners need to maintain the rights over the datasets they share. A right-protection mechanism can be
provided for the ownership of shared data, without revealing its usage under a wide range of machine
learning and mining. The approach provides two algorithms: one preserving Nearest-Neighbor (NN)
relations and one preserving the Minimum Spanning Tree (MST). The K-NN protocol guarantees that
relations between objects remain unaltered. The algorithms provide both right protection and
utility preservation. The right-protection scheme is based on watermarking, and the watermarking
methodology preserves the distance relationships.
Text hiding in text using invisible character (IJECEIAES)
Steganography can be defined as the art and science of hiding information in data that can be read by a computer. Neither the eye nor the computer can distinguish the stego-cover from the original when examining statistical samples. This paper presents a new method to hide text in text characters. The systematic method uses the structure of an invisible character to hide and extract secret texts. The creation of the secret message comprises four main stages: using the letters from the original message, selecting a suitable cover text, dividing the cover text into blocks, hiding the secret text using the invisible character, and comparing the cover text and stego-object. This study uses the position of an invisible character (white space) in the cover text to hide the sender's secret messages. The experimental results show that the suggested method provides high secrecy because it uses multiple levels of complexity to deter attackers.
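A minimal sketch of whitespace-style hiding, assuming one invisible marker per cover word; zero-width Unicode characters stand in for the paper's invisible character, and the block-division and comparison stages are omitted.

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode bit 0 / 1

def hide(cover_words, secret):
    """Append one invisible bit marker per cover word (assumes the cover
    has at least 8 words per secret character)."""
    bits = "".join(f"{ord(ch):08b}" for ch in secret)
    assert len(cover_words) >= len(bits), "cover text too short"
    return " ".join(
        w + (ZW1 if i < len(bits) and bits[i] == "1"
             else ZW0 if i < len(bits) else "")
        for i, w in enumerate(cover_words))

def extract(stego):
    """Read the invisible markers back and decode 8-bit characters."""
    bits = "".join("1" if ZW1 in w else "0" if ZW0 in w else ""
                   for w in stego.split(" "))
    usable = len(bits) - len(bits) % 8
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))
```

The stego text renders identically to the cover text, since the markers occupy no visible width.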
A novel secure image steganography method based on chaos theory in spatial do... (ijsptm)
This paper presents a novel approach of building a secure data hiding technique in digital images. The
image steganography technique takes the advantage of limited power of human visual system (HVS). It uses
image as cover media for embedding secret message. The most important requirement for a steganographic
algorithm is to be imperceptible while maximizing the size of the payload. In this paper a method is
proposed to encrypt the secret bits of the message based on chaos theory before embedding into the cover
image. A 3-3-2 LSB insertion method has been used for image steganography. Experimental results show a
substantial improvement in the Peak Signal to Noise Ratio (PSNR) and Image Fidelity (IF) value of the
proposed technique over the base technique of 3-3-2 LSB insertion.
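The 3-3-2 LSB insertion with chaos-based pre-encryption can be sketched as follows; the logistic-map parameters and per-pixel layout are illustrative assumptions, not the paper's exact scheme.

```python
def logistic_keystream(x0, n, r=3.99):
    """Keystream bytes from the chaotic logistic map x -> r*x*(1-x)."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) & 0xFF)
    return ks

def embed(pixels, message, x0=0.5):
    """Hide one chaos-encrypted byte per RGB pixel: 3 bits in R's LSBs,
    3 in G's and 2 in B's (the 3-3-2 split)."""
    out = []
    for (r, g, b), m, k in zip(pixels, message,
                               logistic_keystream(x0, len(message))):
        c = m ^ k  # encrypt the payload byte before embedding
        out.append((r & ~7 | c >> 5, g & ~7 | (c >> 2) & 7, b & ~3 | c & 3))
    return out

def recover(stego, n, x0=0.5):
    """Read the LSBs back and undo the chaotic XOR."""
    return bytes(((r & 7) << 5 | (g & 7) << 2 | (b & 3)) ^ k
                 for (r, g, b), k in zip(stego, logistic_keystream(x0, n)))
```

Each channel changes by at most 7 (R, G) or 3 (B), which is why the PSNR of such schemes stays high.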
This document provides an overview of unsupervised machine learning techniques for clustering. It discusses different types of clustering including flat partitions, hierarchical trees, and hard vs soft memberships. Specific clustering algorithms are covered like K-means, hierarchical agglomerative clustering (HAC), DBSCAN, and graph-based clustering. Distance functions and linkage methods for HAC are also summarized. The document concludes with examples of applications for different clustering techniques.
IRJET- Study and Performance Evaluation of Different Symmetric Key Crypto... (IRJET Journal)
This document summarizes a study that evaluates the performance of four symmetric key cryptography algorithms: DES, 3DES, Blowfish, and AES. The study considers criteria like file size, file type, encryption and decryption time, and block size. It finds that Blowfish has the best performance, encrypting and decrypting data faster than the other algorithms. AES also performs well, while 3DES has the lowest performance due to its longer key length. The document reviews related literature comparing the performance of symmetric key cryptography algorithms and techniques that combine cryptography with steganography for enhanced security.
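The study's timing protocol can be sketched with a best-of-n wall-clock harness; a toy XOR cipher stands in below, since a real comparison would call DES/3DES/Blowfish/AES implementations from a crypto library.

```python
import time

def xor_cipher(data, key=0x5A):
    """Toy stand-in cipher; a real study would call DES/3DES/Blowfish/AES."""
    return bytes(b ^ key for b in data)

def benchmark(cipher, data, runs=5):
    """Best-of-`runs` wall-clock time for one pass over the payload."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        cipher(data)
        best = min(best, time.perf_counter() - t0)
    return best

payload = bytes(range(256)) * 1000  # ~256 KB synthetic test file
t = benchmark(xor_cipher, payload)
```

Varying `payload` size and type, and timing decryption the same way, reproduces the study's criteria of file size, file type, and encryption/decryption time.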
Cryptography using artificial neural network (Mahira Banu)
This document proposes using artificial neural networks for cryptography. It describes using a backpropagation neural network for decryption, where the network is trained on encrypted-decrypted message pairs. Boolean algebra is used for encryption, permuting messages and "doping" with additional bits. The neural network can then be used as a public key for decryption, with a private key for encryption. Simulation results showed that the neural network approach made key guessing harder than other methods.
Two New Approaches for Secured Image Steganography Using Cryptographic Techni... (sipij)
The science of securing a data by encryption is Cryptography whereas the method of hiding secret messages in other messages is Steganography, so that the secret’s very existence is concealed. The term ‘Steganography’ describes the method of hiding cognitive content in another medium to avoid detection by the intruders. This paper introduces two new methods wherein cryptography and steganography are combined to encrypt the data as well as to hide the encrypted data in another medium so the fact that a message being sent is concealed. One of the methods shows how to secure the image by converting it into cipher text by S-DES algorithm using a secret key and conceal this text in another image by steganographic method. Another method shows a new way of hiding an image in another image by encrypting the image directly by S-DES algorithm using a key image and the data obtained is concealed in another image. The proposed method prevents the possibilities of steganalysis also.
TEXT STEGANOGRAPHY USING LSB INSERTION METHOD ALONG WITH CHAOS THEORY (IJCSEA Journal)
The art of information hiding has been around nearly as long as the need for covert communication. Steganography, the concealing of information, arose early on as an extremely useful method for covert information transmission. Steganography is the art of hiding a secret message within a larger image or message such that the hidden message or image is undetectable; this is in contrast to cryptography, where the existence of the message itself is not disguised, but the content is obscured. The goal of a steganographic method is to minimize the visually apparent and statistical differences between the cover data and a steganogram while maximizing the size of the payload. Current digital image steganography presents the challenge of hiding a message in a digital image in a way that is robust to image manipulation and attack. This paper explains how a secret message can be hidden in an image using the least significant bit insertion method along with chaos.
Multiview Alignment Hashing for Efficient Image Search (1crore projects)
The document describes a new method called Multiview Alignment Hashing (MAH) for efficient image search. MAH aims to fuse multiple image feature representations while preserving the high-dimensional joint probability distribution of the data and obtaining orthogonal bases. It does this by formulating an objective function that is optimized using an alternate optimization procedure. This finds low-dimensional matrix factorizations via a technique called Regularized Kernel Nonnegative Matrix Factorization. After optimization, binary hash codes are obtained for images that can be used for efficient similarity search. The method is evaluated on several image datasets and is shown to outperform other state-of-the-art multiview hashing techniques.
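As a rough illustration of the final step, turning real-valued features into binary hash codes for Hamming-distance search, here is a sign-of-random-projection sketch; MAH itself learns its projection via Regularized Kernel Nonnegative Matrix Factorization rather than sampling it, and the feature vectors below are invented.

```python
import random

def hash_codes(features, n_bits=8, seed=42):
    """Binary codes via sign of random projections; similar inputs tend
    to agree on more bits (a stand-in for MAH's learned projection)."""
    rng = random.Random(seed)
    dim = len(features[0])
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    return [tuple(int(sum(w * x for w, x in zip(p, f)) >= 0) for p in planes)
            for f in features]

def hamming(a, b):
    """Hash-code distance used at search time."""
    return sum(x != y for x, y in zip(a, b))

feats = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [-5.0, 0.5, -4.0]]
codes = hash_codes(feats)
```

Once codes are computed, similarity search reduces to cheap bitwise comparisons over short codes instead of distances in the original feature space.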
This document proposes a linear recurrent convolutional neural network model for segment-based multiple object tracking in video. The model takes images as input and uses a CNN to classify superpixels, then performs segmentation and uses nonlinear NNs and a linear recurrent tracker layer to match segments over time. The objectives are to improve the tracker layer efficiency by modifying the matrix inverse and determine parameters for the model. Evaluation will use a dataset with ground truth segmentation and optical flow to train and compare to state-of-the-art methods.
A Soft Set-based Co-occurrence for Clustering Web User Transactions (TELKOMNIKA JOURNAL)
This document proposes a soft set-based approach for clustering web user transactions to achieve lower computational complexity and higher clustering purity compared to previous rough set approaches. Unlike rough set approaches that use similarity, the proposed approach uses a co-occurrence approach based on soft set theory. The soft set representation of web user transactions allows modeling as a binary-valued information system. The approach is evaluated in comparison to two previous rough set-based approaches, demonstrating better performance with over 100% lower computational complexity and higher cluster purity.
A Density Based Clustering Technique For Large Spatial Data Using Polygon App... (IOSR Journals)
This document presents a density-based clustering technique called TDCT (Triangle-density based clustering technique) for efficiently clustering large spatial datasets. The technique uses a polygon approach where the number of data points inside each triangle of a polygon is calculated. If the ratio of point densities between two neighboring triangles exceeds a threshold, the triangles are merged into the same cluster. The technique is capable of identifying clusters of arbitrary shapes and densities. Experimental results demonstrate the technique has superior cluster quality and complexity compared to other methods.
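The density-ratio merging rule can be sketched with union-find over precomputed triangle point counts; the counts, areas, adjacency pairs, and threshold below are invented for illustration.

```python
def merge_triangles(counts, areas, adjacency, theta=0.5):
    """Merge neighbouring triangles into one cluster whenever the ratio
    of their point densities reaches theta (union-find on triangle ids)."""
    parent = list(range(len(counts)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    density = [c / a for c, a in zip(counts, areas)]
    for i, j in adjacency:
        lo, hi = sorted((density[i], density[j]))
        if hi > 0 and lo / hi >= theta:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(counts))]

# Triangles 0 and 1 are dense and adjacent; triangle 2 is sparse,
# so it separates cluster {0, 1} from triangle 3.
labels = merge_triangles(counts=[10, 9, 1, 8], areas=[1.0, 1.0, 1.0, 1.0],
                         adjacency=[(0, 1), (1, 2), (2, 3)])
```

Because clusters grow only through adjacent triangles of comparable density, arbitrarily shaped clusters emerge without any global shape model.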
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
This document discusses multidimensional clustering methods for data mining and their industrial applications. It begins with an introduction to clustering, including definitions and goals. Popular clustering algorithms are described, such as K-means, fuzzy C-means, hierarchical clustering, and mixture of Gaussians. Distance measures and their importance in clustering are covered. The K-means and fuzzy C-means algorithms are explained in detail. Examples are provided to illustrate fuzzy C-means clustering. Finally, applications of clustering algorithms in fields such as marketing, biology, and earth sciences are mentioned.
International Journal of Engineering Research and Applications (IJERA) is a team of researchers, not a publication service or private publication running journals for monetary benefit; we are an association of scientists and academia who focus only on supporting authors who want to publish their work. The articles published in our journal can be accessed online, and all articles will be archived for real-time access.
Our journal system primarily aims to bring out the research talent and the works done by scientists, academia, engineers, practitioners, scholars, and postgraduate students of engineering and science. This journal aims to cover scientific research in a broader sense, not publishing a niche area of research, facilitating researchers from various verticals to publish their papers. It also aims to provide a platform for researchers to publish in a shorter time, enabling them to continue further research. All articles published are freely available to scientific researchers in government agencies, educators and the general public. We are taking serious efforts to promote our journal across the globe in various ways, and we are sure that our journal will act as a scientific platform for all researchers to publish their works online.
This document discusses techniques for detecting duplicate records from multiple web databases. It begins with an abstract describing an unsupervised approach that uses classifiers like the weighted component similarity summing classifier and support vector machine along with a Gaussian mixture model to iteratively identify duplicate records. The document then provides details on related work, including probabilistic matching models, supervised and unsupervised learning techniques, distance-based techniques, rule-based approaches, and methods for improving efficiency like blocking and the sorted neighborhood approach.
Information extraction from sensor networks using the Watershed transform alg... (M H)
Wireless sensor networks are an effective tool to provide fine resolution monitoring of the physical environment. Sensors generate continuous streams of data, which leads to several computational challenges. As sensor nodes become increasingly active devices, with more processing and communication resources, various methods of distributed data processing and sharing become feasible. The challenge is to extract information from the gathered sensory data with a specified level of accuracy in a timely and power-efficient manner. This paper presents a new solution to distributed information extraction that makes use of the morphological Watershed algorithm. The Watershed algorithm dynamically groups sensor nodes into homogeneous network segments with respect to their topological relationships and their sensing-states. This setting allows network programmers to manipulate groups of spatially distributed data streams instead of individual nodes. This is achieved by using network segments as programming abstractions on which various query processes can be executed. Aiming at this purpose, we present a reformulation of the global Watershed algorithm. The modified Watershed algorithm is fully asynchronous, where sensor nodes can autonomously process their local data in parallel and in collaboration with neighbouring nodes. Experimental evaluation shows that the presented solution is able to considerably reduce query resolution cost without sacrificing the quality of the returned results. When compared to similar purpose schemes, such as “Logical Neighborhood”, the proposed approach reduces the total query resolution overhead by up to 57.5%, reduces the number of nodes involved in query resolution by up to 59%, and reduces the setup convergence time by up to 65.1%.
This document summarizes a research paper that proposes a new density-based clustering technique called Triangle-Density Based Clustering Technique (TDCT) to efficiently cluster large spatial datasets. TDCT uses a polygon approach where the number of data points inside each triangle of a polygon is calculated to determine triangle densities. Triangle densities are used to identify clusters based on a density confidence threshold. The technique aims to identify clusters of arbitrary shapes and densities while minimizing computational costs. Experimental results demonstrate the technique's superiority in terms of cluster quality and complexity compared to other density-based clustering algorithms.
IEEE PROJECT TOPICS & ABSTRACTS on image processing (aswin tbbc)
The document describes a proposed approach called Multiview Alignment Hashing (MAH) for learning image hashing functions from multiple feature representations. Existing hashing methods rely on a single feature descriptor and spectral or graph-based techniques. MAH uses Nonnegative Matrix Factorization to combine multiple views, finding a low-dimensional representation that respects the joint probability distribution of data views while discarding redundancy. It formulates the problem as non-convex optimization and solves it through alternate optimization. Evaluation on image datasets shows MAH outperforms state-of-the-art multiview hashing techniques.
The Extraction of Spatial Features from remotely sensed data, and the use of this information as input into further decision-making systems such as geographical information systems (GIS), has received considerable attention over the past few decades. The successful use of GIS as a decision support tool can only be achieved if it becomes possible to attach a quality label to the output of each spatial analysis operation. Thus the accuracy of Spatial Feature Extraction has gained more attention, as geographic features can hardly be formulated in a certain pattern due to intra-class variation and inter-class similarity. Besides these, Spatial Feature Extraction is further subject to positional uncertainty, attribute uncertainty, topological uncertainty, inaccuracy, imprecision/inexactitude, inconsistency, incompleteness, repetition, vagueness, noise, omission, misinterpretation, misclassification, abnormalities and knowledge uncertainty. To control and reduce uncertainty to an acceptable degree, a probabilistic shape model is described for extracting Spatial Features from multi-spectral images. The advantages of this, as opposed to the conventional approaches, are greater accuracy and efficiency, and the results are in a more desirable form for most purposes.
This document summarizes several papers on document clustering techniques. It discusses hierarchical clustering and similarity measures, as well as multi-representation clustering. Several clustering algorithms are examined, including K-means clustering and graph-based clustering. The document also analyzes similarity measures like multi-viewpoint similarity and evaluates the performance of different clustering methods on document collections.
Hierarchical clustering and similarity measures along with multi-representation (eSAT Journals)
Abstract: All clustering methods have to assume some cluster relationship among the data objects they are applied to. Graph-Based Document Clustering works with frequent senses rather than the frequent keywords used in traditional text mining techniques. Similarity between a pair of objects can be defined either explicitly or implicitly. In this paper, we analyze an existing multi-viewpoint based similarity measure and two related clustering methods. The main difference between a traditional dissimilarity/similarity measure and ours is that the former uses only a single viewpoint, which is the origin, whereas the latter utilizes many viewpoints, which are objects assumed not to be in the same cluster as the two objects being measured. Using multiple viewpoints, a more informative assessment of similarity can be achieved. Theoretical analysis and an empirical study are conducted to support this claim. Two criterion functions for document clustering are proposed based on this measure. We compare them with several well-known clustering algorithms that use other popular similarity measures on various document collections, confirming the advantages of our proposal. Keywords: Multiview Cluster, Document ID, Cluster Distance
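The multi-viewpoint measure described above can be sketched directly: similarity between two objects is averaged over viewpoints taken from objects assumed to lie outside their cluster. The toy 2-D vectors below stand in for document vectors.

```python
def mv_similarity(a, b, viewpoints):
    """Average, over viewpoints h assumed to lie outside a and b's
    cluster, the dot product of the difference vectors (a - h), (b - h)."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    def sub(u, v):
        return tuple(x - y for x, y in zip(u, v))
    return sum(dot(sub(a, h), sub(b, h)) for h in viewpoints) / len(viewpoints)
```

With a single viewpoint at the origin this reduces to the ordinary dot product; extra viewpoints add perspectives that a single origin cannot provide.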
Similar to Memory-based recognition for 3D object - kunal (20)
This document presents a novel approach for measuring shape similarity and using it for object recognition. The key steps are:
1) Solving the correspondence problem between two shapes by attaching a descriptor called "shape context" to sample points on each shape. Shape context captures the distribution of remaining points relative to the reference point.
2) Using the point correspondences to estimate an aligning transformation between the shapes. This provides a measure of shape similarity as the matching error between corresponding points plus the magnitude of the transformation.
3) Treating recognition as a nearest neighbor problem to find the most similar stored prototype shape. The approach is demonstrated on various datasets including handwritten digits, silhouettes, and 3D objects.
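Step 1's shape context descriptor can be sketched as a log-polar occupancy histogram around a reference point; the bin edges below are illustrative, and real implementations normalize radii by the mean pairwise distance.

```python
import math

def shape_context(points, ref, r_bins=(0.5, 1.0, 2.0, 4.0), n_theta=4):
    """Log-polar histogram of where the remaining sample points fall
    relative to the reference point (a simplified shape context)."""
    hist = [[0] * n_theta for _ in r_bins]
    for p in points:
        if p == ref:
            continue
        dx, dy = p[0] - ref[0], p[1] - ref[1]
        r = math.hypot(dx, dy)
        theta = math.atan2(dy, dx) % (2 * math.pi)
        t = min(int(theta / (2 * math.pi) * n_theta), n_theta - 1)
        for k, edge in enumerate(r_bins):  # points beyond the last edge are dropped
            if r <= edge:
                hist[k][t] += 1
                break
    return hist
```

Matching two shapes then reduces to comparing these histograms point by point (e.g. with a chi-squared cost) to solve the correspondence problem.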
This document summarizes research on an object recognition system that uses distinctive intermediate-level features (e.g. automatically extracted 2D boundary fragments) as keys within a local context region. These keys are assembled within a loose global context to identify objects. The system demonstrates good recognition of a variety of 3D shapes, with tests on over 2000 images evaluating performance under increasing clutter, occlusion, and database size. The system represents an improvement over other methods by being robust to occlusion and clutter without requiring whole-object segmentation.
Object class recognition by unsupervide scale invariant learning - kunalKunal Kishor Nirala
This document presents an approach for unsupervised scale-invariant object class recognition using a probabilistic model. The model represents objects as flexible constellations of parts with probabilistic representations for shape, appearance, scale, and occlusion. An entropy-based feature detector is used to select image regions and scales. A maximum likelihood algorithm estimates the parameters of the scale-invariant object model from unlabeled training images. The model demonstrates good recognition performance on datasets with geometric and non-geometric object classes.
An automatic algorithm for object recognition and detection based on asift ke...Kunal Kishor Nirala
This document presents an automatic algorithm for object recognition and detection based on ASIFT keypoints. The algorithm combines affine scale invariant feature transform (ASIFT) and a region merging algorithm. ASIFT is used to extract keypoints from a training image of the object. These keypoints are then used instead of user markers in a region merging algorithm to recognize and detect the object with full boundary in other images. Experimental results show the method is efficient and accurate at recognizing and detecting objects.
The document discusses various aspects of object-oriented systems development including the software development life cycle, use case driven analysis and design, prototyping, and component-based development. The key points are:
1) Object-oriented analysis involves identifying user requirements through use cases and actor analysis to determine system classes and their relationships. Use case driven analysis is iterative.
2) Object-oriented design further develops the classes identified in analysis and defines additional classes, attributes, methods, and relationships to support implementation. Design is also iterative.
3) Prototyping key system components early allows understanding how features will be implemented and getting user feedback to refine requirements.
4) Component-based development exploits prefabric
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Blood finder application project report (1).pdfKamal Acharya
Blood Finder is an emergency time app where a user can search for the blood banks as
well as the registered blood donors around Mumbai. This application also provide an
opportunity for the user of this application to become a registered donor for this user have
to enroll for the donor request from the application itself. If the admin wish to make user
a registered donor, with some of the formalities with the organization it can be done.
Specialization of this application is that the user will not have to register on sign-in for
searching the blood banks and blood donors it can be just done by installing the
application to the mobile.
The purpose of making this application is to save the user’s time for searching blood of
needed blood group during the time of the emergency.
This is an android application developed in Java and XML with the connectivity of
SQLite database. This application will provide most of basic functionality required for an
emergency time application. All the details of Blood banks and Blood donors are stored
in the database i.e. SQLite.
This application allowed the user to get all the information regarding blood banks and
blood donors such as Name, Number, Address, Blood Group, rather than searching it on
the different websites and wasting the precious time. This application is effective and
user friendly.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE using Eclipse Juno program.
This project contains all the necessary required information about maintaining
the supermarket billing system.
The core idea of this project to minimize the paper work and centralize the
data. Here all the communication is taken in secure manner. That is, in this
application the information will be stored in client itself. For further security the
data base is stored in the back-end oracle and so no intruders can access it.
Home security is of paramount importance in today's world, where we rely more on technology, home
security is crucial. Using technology to make homes safer and easier to control from anywhere is
important. Home security is important for the occupant’s safety. In this paper, we came up with a low cost,
AI based model home security system. The system has a user-friendly interface, allowing users to start
model training and face detection with simple keyboard commands. Our goal is to introduce an innovative
home security system using facial recognition technology. Unlike traditional systems, this system trains
and saves images of friends and family members. The system scans this folder to recognize familiar faces
and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert,
ensuring a proactive response to potential security threats.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Memory-Based Recognition for 3-D Objects
Randal C. Nelson
Department of Computer Science
University of Rochester
Rochester, NY 14627
nelson@cs.rochester.edu
Abstract
Memory-based object recognition methods work by comparing an object against many representations stored in a memory, and finding the closest match. However, matches are generally made to representations of complete objects; hence such methods tend to be sensitive to clutter and occlusion and require good global segmentation for success. We describe a method that combines an associative memory with a Hough-like evidence combination technique, allowing local segmentation to be used. This resolves the clutter and occlusion sensitivity of traditional memory-based methods, without encountering the space problems that plague voting methods for high-DOF problems. The method is based on the two-stage use of a general-purpose associative memory and semi-invariant local objects called keys. Experiments using keys based on a curve segmentation process are reported, using both polyhedral and curved objects.
Key Words: Object recognition, Memory-based representations, Visual learning.
1 Introduction
Object recognition is probably the most researched area of computer vision. The most successful work to date has been using model-based systems. Notable recent examples are [11, 10, 9, 6]. The 3D geometric models on which these systems are based are both their strength and their weakness [7, 8]. On the one hand, explicit models provide a framework that allows powerful geometric constraints to be utilized to good effect. On the other, model schemas are generally severely limited in the sort of objects that they can represent, and obtaining the models is typically a difficult and time-consuming process. There has been a fair amount of work on automatic acquisition of geometric models, mostly with range sensors, e.g., [17, 18, 2], but also visually, for various representations [19, 3, 1, 5]. However, these techniques are limited to a particular geometric schema, and even within their domain, especially with visual techniques, their performance is often unsatisfactory.

(Support for this work was provided by ARPA UMD subcontract Z8440902, ONR grant N00014-93-I-0221, and NSF IIP Grant CDA-94-01142.)

Memory-based object recognition methods have been proposed in order to make recognition systems more general, and more easily trainable from visual data. Most of them essentially operate by comparing an image representation of object appearance against many prototype representations stored in a memory, and finding the closest match. They have the advantage of being fairly general, and often easily trainable. In recent work, Poggio has recognized wire objects and faces [15, 4]. Rao and Ballard [16] describe an approach based on the memorization of the responses of a set of steerable filters. Mel [12] takes a somewhat similar approach using a database of stored feature vectors representing multiple low-level cues. Murase and Nayar [13] find the major principal components of an image dataset, and use the projections of unknown images onto these as indices into a recognition memory.
In general, memory-based methods have proven to be a useful technique; however, because matches are generally made to representations of complete objects, these methods tend to be more sensitive to clutter and occlusion than is desirable, and require good global segmentation for success. Hough transform methods (and other voting techniques), on the other hand, allow evidence from disconnected parts to be effectively combined, but the size of the voting space increases exponentially with the number of degrees of visual freedom. Difficulties deriving from the size of this space make it difficult to apply such techniques directly when more than about 3 DOF are involved, thus limiting the use of the technique for 3D object recognition, which generally involves at least 6 DOF.
We describe a method that, by combining an associative memory with a Hough-like evidence combination technique, resolves both the clutter and occlusion sensitivity of traditional memory-based methods, and the space problems of voting methods for high-DOF problems. The method is based on the two-stage use of a general-purpose associative memory. This stores both semi-invariant, local objects called keys associated with object hypotheses, and object configuration hypotheses associated with evidence. Entry of objects into the memory is an active, automatic procedure. Experiments using keys based on perceptual groups of line segments, and on extracted boundary curves, are reported. Results are reported for a set of polyhedral objects, and for a set of curved objects, with databases covering the full rotation space. This is in contrast to some recent results, e.g., Murase and Nayar [13], where essentially only one of the two out-of-plane rotational degrees of freedom is spanned.
2 The Method
2.1 Overview
As mentioned above, the method is based on the two-stage use of a general-purpose associative memory. What we describe here is the application of the technique to principal-views recognition of rigid 3-D objects, but the underlying principles are not dependent on rigid geometry.

The approach makes use of semi-invariant local objects we call keys. A key is any robustly extractable part or feature that has sufficient information content to specify a configuration of an associated object, plus enough additional parameters to provide efficient indexing and meaningful verification. Configuration is a general term for descriptors that provide information about where in appearance space an image of an object is situated. For rigid objects, configuration generally implies location and orientation, but more general interpretations can be used for other object types. Semi-invariant means that over all configurations in which the object of interest will be encountered, a matchable form of the feature will be present a significant proportion of the time. Robustly extractable means that in any scene of interest containing the object, the feature will be in the N best features found a significant proportion of the time.
The basic idea is to utilize an associative memory organized so that access via a key feature evokes associated hypotheses for the identity and configuration of all objects that could have produced it. These hypotheses are fed into a second-stage associative memory, keyed by the configuration, which maintains a probabilistic estimate of the likelihood of each hypothesis based on statistics about the occurrence of the keys in the primary database. The idea is similar to a multi-dimensional Hough transform without the space problems. In our case, since 3-D objects are represented by a set of views, the configurations represent two-dimensional transforms. Efficient access to the associative memories is achieved using a hashing scheme.
The approach has several advantages. First, because it is based on a merged percept of local features rather than global properties, the method is robust to occlusion and background clutter, and does not require prior segmentation. This is an advantage over systems based on principal-components template analysis, which are sensitive to occlusion and clutter. Second, entry of objects into the memory is an active, automatic procedure. Essentially, the system explores the object visually from different viewpoints, accumulating 2-D views, until it has seen enough not to mix it up with any other object it knows about. Third, the method lends itself naturally to multi-modal recognition. Because there is no single, global structure for the model, evidence from different kinds of keys can be combined as easily as evidence from multiple keys of the same type. The only requirement is that the configuration descriptions evoked by the different keys have enough common structure to allow evidence combination procedures to be used. This is an advantage over conventional alignment techniques, which typically require a prior 3-D model of the object. Finally, the probabilistic nature of the evidence combination scheme, coupled with the formal definitions for semi-invariance and robustness, allows quantitative predictions of the reliability of the system to be made.
2.2 General associative memory
Our approach is based on an efficient associative memory. The basic operation we need is partial-match association over heterogeneous keys. More specifically, we want a structure into which we can store and access (key, association) pairs, where the key and association objects may be any of a number of disparate types. Associated with each object type employed as a key is a distance metric. The ideal query operation takes a reference key and returns all stored (key, association) pairs where the key is of the correct type and within a specified distance of the reference key in the appropriate metric. In practice, this ideal may have to be modified somewhat for efficiency reasons.

In the current implementation, the memory is just a large array of buckets, each of which can hold a variable number of (key, association) pairs. This allows a number of different access schemes to coexist. In particular, hashing, array indexing, and tree search can all be implemented efficiently. Associated with each key type are functions defining a distance metric and a search procedure for locating keys in the memory. This allows a large amount of flexibility in the system, and permits new key types to be added in a modular fashion.
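The bucket memory described above can be sketched as follows. All names here are hypothetical, not the paper's code; the per-type hash function stands in for the paper's hashing scheme, and for a partial-match query to work it must send similar keys to the same bucket (e.g. by coarse quantization of the index parameters):

```python
from collections import defaultdict

class AssociativeMemory:
    """Array of buckets holding (key, association) pairs, with a
    per-key-type distance metric and bucket hash function."""

    def __init__(self):
        self.buckets = defaultdict(list)  # (key_type, hash code) -> pairs
        self.metrics = {}                 # key_type -> distance function
        self.hash_fns = {}                # key_type -> bucket hash function

    def register_type(self, key_type, metric, hash_fn):
        self.metrics[key_type] = metric
        self.hash_fns[key_type] = hash_fn

    def store(self, key_type, key, association):
        code = self.hash_fns[key_type](key)
        self.buckets[(key_type, code)].append((key, association))

    def query(self, key_type, ref_key, radius):
        # ideal query: all stored pairs of the right type whose key lies
        # within 'radius' of the reference key under that type's metric
        code = self.hash_fns[key_type](ref_key)
        metric = self.metrics[key_type]
        return [(k, a) for (k, a) in self.buckets[(key_type, code)]
                if metric(k, ref_key) <= radius]
```

Registering a scalar key type with rounding as its bucket function, for instance, lets `query("scalar", 1.0, 0.5)` return every stored pair whose key lies within 0.5 of 1.0.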
2.3 Key Features
The recognition technique is based on the assumption that robustly extractable, semi-invariant keys can be efficiently recovered from image data. More specifically, the keys must possess the following characteristics. First, they must be complex enough not only to specify the configuration of the object, but to have parameters left over that can be used for indexing. Second, the keys must have a substantial probability of detection if the object containing them occupies the region of interest (robustness). Third, the index parameters must change relatively slowly as the object configuration changes (semi-invariance). From a computational standpoint, true invariance is desirable, and a lot of research has gone into looking for invariant features. Unfortunately, such features seem to be hard to design, especially for 2-D projections of general 3-D objects. Many classical features do not satisfy these criteria. Line segments are not sufficiently complex, full object contours are not robustly extractable, and simple templates are not semi-invariant.
A basic conflict that must be resolved is that between feature complexity and robust detectability. In order to reduce multiple matches, key features must be fairly complex. However, if we consider complex features as arbitrary combinations of simpler ones, then the number of potential high-level features undergoes a combinatorial increase as the complexity increases. This is clearly undesirable from the standpoint of robust detectability, as we do not wish to consider or store exponentially many possibilities. The solution is not to use arbitrary combinations, but to base the higher-level feature groups on structural heuristics such as adjacency and good continuation. Such perceptual grouping processes have been extensively researched in the last few years.
Our current implementation is designed to recognize 3-D objects on the basis of their shape, using a set of 2-D views as the underlying representation. We ran experiments with two types of key features: segment chains, and curve patches. The segment chains, which were used only on the polyhedral database, consisted of perceptual groups of three connected segments. The angles and length ratios provide matching and indexing information. The curve patches consist of curve orientation templates normalized by robust curve fragments. Specifically, a curve-finding algorithm is run on an image, producing a set of segmented contour fragments broken at points of high curvature. The strongest curves are selected as normalizing base curves, and a fixed-size template is constructed with the endpoints of the base curve occupying canonical points in the template. All image curves that intersect the normalized template are mapped into it with a code specifying their orientation relative to the base curve. Global curve properties such as total curvature and compactness serve as index parameters. Final matching of a candidate template involves taking the model key curve points and verifying that a curve point with similar orientation lies nearby in the candidate template. Essentially this amounts to directional correlation.
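That verification step can be sketched as below. This is a rough illustration rather than the authors' code; the point representation and both thresholds are assumptions. Each model key curve point must find a candidate template point that is both nearby and similarly oriented:

```python
import math

def directional_correlation(model_pts, candidate_pts,
                            max_dist=2.0, max_angle=math.radians(30)):
    """Each point is (x, y, orientation) in the normalized template frame.
    Returns the fraction of model points verified by a nearby candidate
    point of similar orientation."""
    hits = 0
    for mx, my, mo in model_pts:
        for cx, cy, co in candidate_pts:
            dist = math.hypot(mx - cx, my - cy)
            # smallest angular difference between the two orientations
            dang = abs((mo - co + math.pi) % (2 * math.pi) - math.pi)
            if dist <= max_dist and dang <= max_angle:
                hits += 1
                break
    return hits / len(model_pts)
```

A score near 1.0 means nearly every model point found directional support in the candidate template; a low score rejects the match.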
3 Recognition Algorithms
The basic recognition procedure consists of four steps. First, potential key features are extracted from the image using low- and intermediate-level visual routines. In the second step, these keys are used to access the associative memory and retrieve information about what objects could have produced them, and in what relative configuration. The third step uses this information, in conjunction with geometric parameters factored out of the key features such as position, orientation, and scale, to produce hypotheses about the identity and configuration of potential objects. Finally, these hypotheses are themselves used as keys into a second-stage associative memory, which is used to accumulate evidence for the various hypotheses.
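The four steps can be sketched as a single loop. This is a minimal illustration with hypothetical data structures, not the authors' implementation: the first-stage memory is reduced to a dict keyed by the extracted index, and configuration composition to additive pose offsets:

```python
from collections import defaultdict

def recognize(keys, key_memory, evidence_weight):
    """keys: extracted key features (step 1 output), each with an 'index'
    for memory lookup and a 'pose' factored out of the feature.
    key_memory: index -> list of (object, relative configuration)."""
    evidence = defaultdict(float)                  # second-stage memory
    for key in keys:
        for obj, rel in key_memory.get(key["index"], []):       # step 2
            # step 3: combine key pose with the stored relative config
            config = tuple(p + r for p, r in zip(key["pose"], rel))
            # step 4: accumulate evidence for the hypothesis
            evidence[(obj, config)] += evidence_weight(key, obj)
    return max(evidence, key=evidence.get) if evidence else None
```

Hypotheses that several keys agree on accumulate more evidence, and the best-supported (object, configuration) pair wins.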
In the final step, an important issue is the method of combining evidence. The simplest technique is to use an elementary voting scheme: each piece of evidence contributes equally to the total. This is clearly not well founded, as a feature that occurs in many different situations is not as good an indicator of the presence of an object as one that is unique to it. An evidence scheme that takes this into account would probably display improved performance. The question is how to evaluate the quality of various pieces of evidence. An obvious approach in our case is to use statistics computed over the information contained in the associative memory to evaluate the quality of a piece of information. Having said this, it is clear that the optimal quality measure, which would rely on the full joint probability distribution over keys, objects, and configurations, is infeasible to compute, and we must use some approximation.
A simple example would be to use the first-order feature frequency distribution over the entire database, and this is what we do. The actual algorithm is to accumulate evidence proportional to log(1 + 1/(kx)), where x is the probability of making the particular matching observation as approximated from database statistics, and k is a proportionality constant that attempts to estimate the actual geometric probability associated with the prediction of a pose from a key. The underlying model is that the evidence represents the log of the reciprocal of the probability that the particular combination of features is due to chance. The procedure used makes an independence assumption which is unwarranted in the real world, with the result that the evidence values actually obtained are serious overestimates if interpreted as actual probabilities. However, the rank ordering of the values, which is all that is important in the recognition system, is robust to this distortion.
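As a worked example of this weighting, a rare key observation (small x) contributes far more evidence than a common one (the value k = 1 here is purely for illustration):

```python
import math

def evidence_increment(x, k=1.0):
    """Evidence from one matching observation: log(1 + 1/(kx)), where x is
    the database-estimated probability of the observation and k is the
    geometric proportionality constant from the text."""
    return math.log(1 + 1 / (k * x))

rare = evidence_increment(0.001)   # log(1001), about 6.9
common = evidence_increment(0.5)   # log(3), about 1.1
```

So one match on a key seen in 0.1% of the database outweighs six matches on a key seen half the time, which is exactly the behavior the elementary voting scheme lacks.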
In the above discussion we have assumed that the associative memory already existed in the requisite form. However, one of the primary attractions of a memory-based recognition system is that it can be trained efficiently from image data. The basic process of model acquisition is simply a matter of providing images of the object to the system, running the key detection procedures on these images, and storing the resulting (key, association) pairs. The number of images needed may vary from one, for simple 2-D applications, to several tens for rigid object recognition, and possibly more for complicated non-rigid objects. The process is efficient, and essentially runs in time proportional to the number of pairs stored in memory. This is in contrast to many learning algorithms that scale poorly with the number of stored items. (Actually, indexed memory-building processes are apt to scale as N log(N) for very large numbers of items. However, since the processing for all databases run so far is dominated by the key-feature extraction image processing, the complexity has essentially been linear.)
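The acquisition process admits a very short sketch (hypothetical helper names: `detect_keys` stands in for the key-detection routines, and the memory is again reduced to a dict):

```python
def acquire_model(object_name, training_images, key_memory, detect_keys):
    """Store one (key, association) pair per detected key; runtime is
    linear in the number of pairs stored, on top of feature extraction."""
    for view_id, image in enumerate(training_images):
        for key in detect_keys(image):
            # association = object identity plus the view (configuration)
            # in which the key was observed
            association = (object_name, view_id, key["pose"])
            key_memory.setdefault(key["index"], []).append(association)
```

Adding a new object is then just another call over its training views; nothing already stored needs to be revisited.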
4 Experiments
Using the principles described above, we implemented a memory-based recognition system for polyhedral objects, using first segment chains, and then curve patches as the basic keys. We then tested the curve patch system on a set of complex curved objects. Component segments were extracted using a stick-growing method developed recently at Rochester [14], and organized into chains. A modified version of the same algorithm allowed us to extract curves as well. For objects entered into the database, the best 10 key features were selected to represent the object. The thresholds on the distance metrics between features were adjusted to tolerate approximately 15-20 degrees of deviation in the appearance of a frontal plane (less for oblique ones). The practical considerations leading to this selection were to allow the system to discriminate pentagons from hexagons without requiring more than about 100 views for an object.
4.1 Experiments on Polyhedral Objects
We first performed experiments using a set of 7 polyhedral objects from a child's toy. Some of these are shown in Figure 1. Not shown is a hexagon.

Figure 1: Some polyhedral objects used in the test set

The objects are not simple prisms, but have an H-shaped cross section which produces interesting edges and shadow effects.
We obtained a training database of approximately 150 views of these objects from different directions, ranging from 12 for the hexagon to 60 for the trapezoid, and covering all viewing angles except straight-on from the side, since from that point of view a number of the objects are indistinguishable without measurements accurate to a few percent. The variation in the number of views needed is due to varying degrees of symmetry in the objects. All training images were acquired under normal room illumination, with the objects in isolation against a dark background. The images were used to compile both segment-chain and curve-patch databases.
We then subjected the recognition system to a se-
ries of increasinglystringent tests. Recallthat the geo-
metricdesignofthe geometricindexingsystemensures
invariance to 2-D translation, rotation, and scale. In-
variance to out-of-plane rotations is provided by the
combination of slightly exible match criteria for the
key features coupled with multiple views. Robustness
against clutter and occlusion is provided by the rep-
resentation in terms of multiple local features. The
experiments were designed to test various aspects of
this design. A basic assumption made during these
tests is that some other process has isolated a region
of the image where a recognizable object may occur. We
do not assume prior segmentation, but we do assume
that only one, or at most a few, objects of interest (as
opposed to tens or hundreds) will occur in a window
handed to the system. The system has a certain
capability to provide a "don't know" result and tends
to do this if no known objects are present. However, we
have not statistically grounded this ability, and hence
the results reported here should be considered to be
forced-choice experiments.
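The 2-D similarity invariance the indexing design relies on can be illustrated with a generic chain normalization. This is a minimal sketch under assumed conventions (origin at the first point, chord aligned to the x-axis), not the paper's actual key encoding:

```python
import math

def invariant_key(points):
    """Reduce a chain of 2-D points to a form invariant to
    translation, rotation, and scale (generic sketch; the
    paper's real key features are not reproduced here)."""
    # Translate so the first point is at the origin.
    x0, y0 = points[0]
    shifted = [(x - x0, y - y0) for x, y in points]
    # Rotate so the last point lies on the positive x-axis.
    xe, ye = shifted[-1]
    theta = math.atan2(ye, xe)
    c, s = math.cos(-theta), math.sin(-theta)
    rotated = [(x * c - y * s, x * s + y * c) for x, y in shifted]
    # Scale so the chord from first to last point has unit length.
    chord = math.hypot(xe, ye)
    return [(x / chord, y / chord) for x, y in rotated]
```

Any 2-D similarity transform of the input chain yields the same normalized chain up to rounding, so a quantized version of it can serve as an index into an associative memory; slightly loosening the quantization is what tolerates small out-of-plane rotations between stored views.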
For the first test, we acquired additional views of
the isolated objects, from viewpoints intermediate be-
tween the ones in the database. The idea here is to
test the 3-D rotation invariance. No errors were made
in these test cases, even between similar objects such
as the square and the trapezoid, or the pentagon and
the hexagon.

Figure 2: View of a group of objects

Figure 3: A set of windows containing objects and minor
clutter. The central object was correctly identified
in all these cases.

These results allow us to say that the
system is probably at least 95 percent accurate in sit-
uations of this type. Results from other tests lead
us to believe that the actual performance is, in fact,
somewhat better.
In the second test we took a number of images con-
taining multiple objects viewed from modest angles
(45 degrees or less from overhead) under normal lighting
against a dark background. An example is shown
in Figure 2. We then supplied the system with win-
dows containing one object and parts of others. Since
the system performs no explicit segmentation of its
own, the intent of this experiment is to test robust-
ness against minor clutter. Examples of the sort of
windows passed to the system are shown in Figure 3.
In twenty plus tests, we observed no errors due to clut-
ter. We also tried examples with two objects in the
window. In this case, the system typically identified
one of the objects, and when asked what else was there
identified the second as well.
The third experiment was a more severe clutter test.
Here we took pictures of different objects held in
a robot hand at various angles. Examples are shown
in Figure 4. This was a harder problem for our system,
and we obtained recognition rates on the order of 75
to 90 percent, with the better performance being
obtained with the curve-patch keys. These experiments
produced a significant number of failures. On analysis,
we found that the primary reason for failure was
not crosstalk in the memory caused by clutter, but
poor performance of the low-level feature identification
process caused by the added complexity in the image.
Good segmentation is still critical, even when
local features are used.

Figure 4: A set of windows containing objects held by
a robot hand, representing moderate clutter.

Figure 5: An image with serious clutter.
Finally, we tested the polyhedral recognition sys-
tem on windows from the image shown in Figure
5, which contains fairly serious clutter. The sys-
tem based on segment chains performed essentially at
chance level, while the curve patches worked at around
80 percent. In this case the system tended to confuse
the square and the trapezoid. These figures are so
simple that losing a few of the main edges due to
clutter substantially reduced their discriminability.
In the various tests, the curve-patch keys generally
performed as well as or better than the segment
chains. This is as expected, since the curve patches
can represent all the features representable by segment
chains, and some others as well. Also, since only a
single segmented boundary fragment is needed to key a
curve patch, features are less likely to be missed due
to failure in the low-level segmentation. On a typical
window containing one object plus clutter, the indexing
process in the full database took less than a second
on a SPARC5 for chains. The low-level processing
could take a few seconds, depending on the complexity
of the image. The matching took longer for the patches,
10 to 30 seconds, since they are more complex, but was
on the order of the low-level processing.
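The two-stage lookup behind these timings can be caricatured as follows. This is a toy sketch with assumed data structures (a plain hash table keyed on quantized features, one vote per retrieved hypothesis), not the system's actual code: stage one retrieves stored (object, view) hypotheses for each key, and stage two accumulates Hough-style evidence over all keys in the window.

```python
from collections import defaultdict

# Stage-1 associative memory: quantized key -> (object, view) hypotheses.
# Populated at training time from the model views (toy entries only).
memory = defaultdict(list)

def train(key, obj, view):
    memory[key].append((obj, view))

def recognize(window_keys):
    """Stage 2: every hypothesis retrieved by any key in the window
    casts one vote; the best-supported (object, view) pair wins,
    as a forced choice like the experiments above."""
    votes = defaultdict(int)
    for key in window_keys:
        for hypothesis in memory[key]:
            votes[hypothesis] += 1
    return max(votes, key=votes.get) if votes else None

# Toy example: two objects sharing one ambiguous key.
train("kA", "square", 0); train("kB", "square", 0)
train("kB", "trapezoid", 3); train("kC", "trapezoid", 3)
print(recognize(["kA", "kB"]))   # -> ('square', 0)
```

Clutter keys that hit the memory only add scattered single votes, which is why crosstalk is rarely the failure mode; the scheme degrades instead when the low-level stage fails to deliver the keys at all.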
4.2 Experiments on Curved Objects
We have also carried out preliminary experiments
using the curve-patch based system to identify complex
curved objects. To carry out these experiments,
we compiled a database of curved objects, with images
taken approximately every 20 degrees over the entire
viewing sphere. For cases without symmetry, this
amounts to approximately 100 images per object. The
model pictures were taken against a black background,
using diffuse lighting in order to eliminate specular
highlights as much as possible. A full 3-D database
was compiled for the six objects shown in Figure 6,
and tests were run as for the polyhedral case.

Figure 6: Examples of curved objects learned by the
system
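The ~100-views figure is consistent with a quick back-of-the-envelope check (our own arithmetic, not from the paper): if each sampled view "owns" roughly a 20-by-20 degree patch of the viewing sphere, dividing the sphere's total solid angle of 4π steradians by that patch gives about a hundred views.

```python
import math

# Sample the viewing sphere every 20 degrees; each view covers
# roughly a (20 deg)^2 patch of solid angle.
spacing = math.radians(20)    # 20 degrees in radians
sphere = 4 * math.pi          # total solid angle of a sphere, steradians
views = sphere / spacing**2
print(round(views))           # -> 103, consistent with ~100 images per object
```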
In the first test, as for the polyhedral objects, we
took pictures of isolated objects under the same
conditions as the training images were taken, only at
angles in between the views contributing to the database.
The system performed fairly well in this situation,
with an accuracy somewhere upwards of 90 to 95 percent.
The second test utilized groups of objects, still on
a dark background, but now under normal lighting
so that specularities and shadows become much more
evident, and at varying scales, which can affect the
segmentation produced by the curve finder. An example
of such an image is shown in Figure 7. The windows
passed to the recognizer typically contained clutter
consisting of parts of other objects. As in the
polyhedral case, this sort of clutter, which does not
adversely affect the performance of the curve extractor,
had no noticeable effect. For this test the performance
was slightly degraded, to about 90 percent, with
corruption of the segmentation process accounting for
most of the difference.
5 Conclusions and Future Work
In this paper we have argued for a memory-access
interpretation of recognition, and proposed a general
framework for memory-based recognition using a
two-stage association process. We have illustrated the
concept by implementing a memory-based recognition
system for 3-D polyhedral objects using chains of line
segments and curve patches as memory keys. The system
actually performs quite well for a small database
of 3-D shapes, and exhibits a certain amount of
robustness against clutter and occlusion. When the
algorithm fails, it is not due to crosstalk in the
memory, but to failure of the low-level processes to
extract robust features. We are currently engaged in
embedding the system into a robotic manipulation system
that we will use for assembly tasks. We also plan to
incorporate multi-modal features into the database,
including color and texture as well as shape information.
We anticipate that this will give us a capability to
recognize less well-structured objects such as leaves
or clothing in addition to objects having a strictly
defined shape.

Figure 7: Curved objects with minor clutter.