Object recognition in computer vision is the task of finding a given object in an image or video sequence. Humans recognize a multitude of objects in images with little effort, even though the appearance of an object may vary with viewpoint, size and scale, translation, or rotation. Objects can even be recognized when they are partially obstructed from view. This task is still a challenge for computer vision systems in general.

Object recognition is concerned with determining the identity of an object observed in an image from a set of known labels. Oftentimes, it is assumed that the object being observed has already been detected, or that there is a single object in the image.

An object recognition system finds objects in the real world from an image of the world, using object models that are known a priori. Humans perform object recognition effortlessly and instantaneously.
An object recognition system must have the following components to perform the task:
- Model database
- Feature detector
- Hypothesizer
- Hypothesis verifier
Model Database - contains all the models known to the system. The information in the model database depends on the approach used for recognition. The models of objects are abstract feature vectors, as discussed later in this section. A feature is some attribute of the object; size, color, and shape are the commonly used features.

Feature Detector - applies operators to images and identifies locations of features that help in forming object hypotheses. The features used by a system depend on the types of objects to be recognized.

Hypothesizer - using the detected features in the image, it assigns likelihoods to objects present in the scene. It is used to reduce the search space for the recognizer using certain features.

Verifier - uses object models to verify the hypotheses and refines the likelihood of objects. The system then selects the object with the highest likelihood, based on all the evidence, as the correct object.
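The interplay of these components can be sketched as a simple hypothesize-and-verify loop. The model database, feature representation, scoring rule, and threshold below are all illustrative assumptions, not a real vision pipeline:

```python
# Sketch of a hypothesize-and-verify recognition loop.
# The model database, detector, and scoring functions are hypothetical
# placeholders for the components described above.

def detect_features(image):
    # Stand-in feature detector: here an "image" is just a dict
    # that already lists its detected features.
    return image["features"]

def hypothesize(features, model_db):
    # Assign a likelihood to each model by counting shared features.
    scores = {}
    for name, model_features in model_db.items():
        shared = len(set(features) & set(model_features))
        scores[name] = shared / len(model_features)
    return scores

def verify(scores, threshold=0.5):
    # Keep only hypotheses above the threshold, then pick the best one.
    candidates = {n: s for n, s in scores.items() if s >= threshold}
    return max(candidates, key=candidates.get) if candidates else None

model_db = {
    "cup":   ["handle", "cylinder", "rim"],
    "plate": ["rim", "flat-disc"],
}
image = {"features": ["handle", "rim", "cylinder"]}
print(verify(hypothesize(detect_features(image), model_db)))  # → cup
```

A real system would replace the set intersection with feature-model matching under geometric constraints, but the control flow (detect, hypothesize, verify) is the same.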
Object or model representation: How should objects be represented in the model database? For some objects, geometric descriptions may be available and may also be efficient, while for another class one may have to rely on generic or functional features.

Feature extraction: Which features should be detected, and how can they be detected reliably? Most features can be computed in two-dimensional images, but they are related to the three-dimensional characteristics of objects.

Feature-model matching: How can a set of likely objects be selected based on the feature matching? This step uses knowledge of the application domain to assign some kind of probability or confidence measure to different objects in the domain.

Object verification: How can object models be used to select the most likely object from the set of probable objects in a given image? The presence of each likely object can be verified by using its model.
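As a toy illustration of the feature extraction step, the sketch below computes a few simple region features (area, centroid, bounding box) from a small binary image; real systems use far richer features, and the image here is a made-up example:

```python
# Toy feature extraction: compute area, centroid, and bounding box
# of the foreground region (the 1-pixels) in a small binary image.

def region_features(binary_image):
    pixels = [(r, c)
              for r, row in enumerate(binary_image)
              for c, v in enumerate(row) if v == 1]
    area = len(pixels)
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    centroid = (sum(rows) / area, sum(cols) / area)
    bbox = (min(rows), min(cols), max(rows), max(cols))
    return {"area": area, "centroid": centroid, "bbox": bbox}

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(region_features(img))
# area 4, centroid (1.5, 1.5), bbox (1, 1, 2, 2)
```

Features like these are two-dimensional measurements, which is exactly why relating them back to three-dimensional object characteristics is the hard part of the matching step.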
Scene constancy: the scene complexity will depend on whether the images are acquired in conditions (illumination, background, camera parameters, and viewpoint) similar to those of the models.

Image-model spaces: images may be obtained such that three-dimensional objects can be considered two-dimensional.

Number of objects in the model database: if the number of objects is very small, one may not need the hypothesis formation stage.

Number of objects in an image and possibility of occlusion: if there is only one object in an image, it may be completely visible.
Two-Dimensional
In many applications, images are acquired from a distance sufficient to consider the projection to be orthographic. If the objects are always in one stable position in the scene, then they can be considered two-dimensional. In these applications, one can use a two-dimensional model base. There are two possible cases:
- Objects will not be occluded, as in remote sensing and many industrial applications.
- Objects may be occluded by other objects of interest or be partially visible, as in the bin-of-parts problem.
Three-Dimensional
If the images of objects can be obtained from arbitrary viewpoints, then an object may appear very different in its two views. For object recognition using three-dimensional models, the perspective effect and viewpoint of the image have to be considered. The fact that the models are three-dimensional while the images contain only two-dimensional information affects object recognition approaches. Again, the two factors to be considered are whether objects are separated from other objects or not. For three-dimensional cases, one should consider the information used in the object recognition task. Two different cases are:
- Intensity: there is no surface information available explicitly in intensity images. Using intensity values, features corresponding to the three-dimensional structure of objects should be recognized.
- 2.5-dimensional images: in many applications, surface representations with viewer-centered coordinates are available, or can be computed, from images. This information can be used in object recognition.
3D object recognition based on the use of colored stripes (so-called structured light) is useful in applications ranging from 3D face recognition to measuring suspension systems and ensuring a perfect fit for hearing aids.
This representation uses a description of objects in a coordinate system attached to the objects. The description is usually based on three-dimensional features or descriptions of objects, which are independent of the camera parameters and location. Thus, to make them useful for object recognition, the representation should have enough information to produce object images, or object features in images, for a known camera and viewpoint.
a.) An object is shown with its prominent local features highlighted.
b.) A graph representation is used for object recognition using a graph matching approach.
Many types of features are used for object recognition. Most features are based on either regions or boundaries in an image. It is assumed that a region or a closed boundary corresponds to an entity that is either an object or a part of an object.
Figure: An object and its partial representation using multiple local and global features.
Depending on the complexity of the problem, a recognition strategy may need to use either or both of the hypothesis formation and verification steps.
Face recognition is a rapidly growing field today because of its many uses in biometric authentication, security, and many other areas. Many problems exist due to the factors that can affect the photos. When processing images, one must take into account variations in lighting, image quality, the person's pose, and facial expressions, among others. In order to identify individuals correctly, there must be some way to account for all these variations and still arrive at a valid answer.
Figure: Differences in Lighting and Facial Expression
Face recognition is an image processing application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image against a facial database. Some facial recognition algorithms identify faces by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, saving only the data in the image that is useful for face detection.
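The landmark idea can be sketched as comparing vectors of inter-landmark distances. In the sketch below the landmark names, the distance pairs, and the match threshold are all illustrative assumptions; normalizing by the inter-eye distance makes the comparison scale-invariant:

```python
import math

# Sketch of landmark-based face matching: each face is reduced to a
# vector of inter-landmark distances, normalized by the inter-eye
# distance so the comparison is scale-invariant. Landmark names and
# the match threshold are illustrative assumptions.

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def face_signature(landmarks):
    eye_dist = distance(landmarks["left_eye"], landmarks["right_eye"])
    pairs = [("left_eye", "nose"), ("right_eye", "nose"),
             ("nose", "mouth"), ("left_eye", "mouth")]
    return [distance(landmarks[a], landmarks[b]) / eye_dist for a, b in pairs]

def match(sig_a, sig_b, threshold=0.1):
    # Mean absolute difference between the two signatures.
    diff = sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)
    return diff < threshold

face1 = {"left_eye": (30, 40), "right_eye": (70, 40),
         "nose": (50, 60), "mouth": (50, 80)}
# The same face uniformly scaled by 2: normalization makes it match.
face2 = {k: (2 * x, 2 * y) for k, (x, y) in face1.items()}
print(match(face_signature(face1), face_signature(face2)))  # → True
```

Scale invariance handles only one of the variations listed above; pose, lighting, and expression require richer normalization than this sketch shows.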
Face recognition is used in:
- Human-computer interfaces
- Biometric identification

Objective of face recognition:
- To determine the identity of a person from a given image.

Complications occur due to variations in:
- Illumination
- Pose
- Facial expression
- Aging
- Occlusions such as spectacles, hair, etc.

Weaknesses:
- Face recognition is not perfect and struggles to perform under certain conditions.
- Conditions where face recognition does not work well include poor lighting, sunglasses, long hair or other objects partially covering the subject's face, and low-resolution images.
- It is less effective if facial expressions vary.
Facial recognition mainly uses the following techniques:
- Facial geometry: uses geometrical characteristics of the face. May use several cameras to get better accuracy (2D, 3D...)
- Skin pattern recognition (Visual Skin Print)
- Facial thermogram: uses an infrared camera to map the face temperatures
- Smile: recognition of the wrinkle changes when smiling
The uniqueness of skin texture offers an opportunity to identify differences between identical twins. The Surface Texture Analysis algorithm operates on the top percentage of results as determined by the local feature analysis.
Fingerprinting is one of the most well-known and publicized biometrics. Because of their uniqueness and consistency over time, fingerprints have been used for identification for over a century, more recently becoming automated due to advancements in computing capabilities. Fingerprint identification is important because of the inherent ease of acquisition, the numerous sources available for collection, and their established use and collection by law enforcement and immigration authorities.

A fingerprint usually appears as a series of dark lines that represent the high, peaking portions of the friction ridge skin, while the valleys between these ridges appear as white space and are the low, shallow portions of the friction ridge skin.

Fingerprint identification is based primarily on the minutiae: the location and direction of the ridge endings and bifurcations (splits) along a ridge path.
Hardware
A variety of sensor types - optical, capacitive, ultrasound, and thermal - are used for collecting the digital image of a fingerprint surface.
- Optical sensors take an image of the fingerprint and are the most common sensor today.
- Capacitive sensors determine each pixel value based on the capacitance measured, made possible because an area of air has significantly less capacitance than an area of finger.

Software
The two main categories of fingerprint matching techniques are minutiae-based matching and pattern matching.
- Pattern matching simply compares two images to see how similar they are. It is usually used in fingerprint systems to detect duplicates.
- Minutiae-based matching relies on the minutiae points described above, specifically the location and direction of each point.
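A minimal sketch of minutiae-based matching follows. Each minutia is represented as a location, a ridge direction, and a type (ending or bifurcation); the distance and angle tolerances are made-up values, and a real matcher would also align the two prints before pairing points:

```python
import math

# Sketch of minutiae-based fingerprint matching. Each minutia is
# (x, y, direction_degrees, type). Two minutiae are paired when they
# have the same type and lie within a distance and angle tolerance;
# the tolerances here are illustrative assumptions.

def minutiae_match_score(set_a, set_b, dist_tol=5.0, angle_tol=15.0):
    matched = 0
    used = set()
    for (xa, ya, da, ta) in set_a:
        for i, (xb, yb, db, tb) in enumerate(set_b):
            if i in used or ta != tb:
                continue
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            angle_diff = abs(da - db) % 360
            aligned = min(angle_diff, 360 - angle_diff) <= angle_tol
            if close and aligned:
                matched += 1
                used.add(i)
                break
    # Fraction of minutiae that found a partner.
    return matched / max(len(set_a), len(set_b))

probe   = [(10, 12, 90, "ending"), (40, 35, 180, "bifurcation")]
gallery = [(11, 13, 85, "ending"), (41, 33, 170, "bifurcation"),
           (70, 70, 0, "ending")]
print(minutiae_match_score(probe, gallery))  # → 2 of 3 minutiae matched
```

This greedy pairing is the simplest possible scheme; production matchers solve the correspondence problem globally and compensate for rotation, translation, and skin distortion.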
Geometry-based approaches - early attempts at object recognition focused on using geometric models of objects to account for their appearance variation due to viewpoint and illumination change.

Appearance-based algorithms - advanced feature descriptors and pattern recognition algorithms are developed. These compute eigenvectors from a set of vectors where each one represents one face image.

Feature-based algorithms - rely on finding interest points, often occurring at intensity discontinuities, that are invariant to changes due to scale, illumination, and affine transformation.
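The appearance-based idea of computing eigenvectors from face vectors can be sketched in miniature: stack the images as vectors, mean-center them, and extract the dominant eigenvector of their covariance matrix (the first "eigenface"). The tiny 4-pixel "faces" and the pure-Python power iteration below are stand-ins for what a real system would do with a numerics library on full-size images:

```python
# Minimal appearance-based sketch (the eigenfaces idea): mean-center
# the face vectors and find the top eigenvector of their covariance
# matrix by power iteration. The data here is a made-up toy example.

def mean_center(vectors):
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    return [[v[j] - mean[j] for j in range(d)] for v in vectors], mean

def covariance(centered):
    n, d = len(centered), len(centered[0])
    return [[sum(v[i] * v[j] for v in centered) / n for j in range(d)]
            for i in range(d)]

def top_eigenvector(matrix, iters=100):
    # Power iteration: repeatedly multiply and renormalize.
    d = len(matrix)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Four tiny "face images", each flattened to a 4-pixel vector;
# the first two and the last two resemble each other.
faces = [[1.0, 2.0, 1.0, 0.0],
         [1.2, 2.1, 0.9, 0.1],
         [3.0, 0.5, 2.0, 1.0],
         [2.8, 0.6, 2.1, 0.9]]
centered, mean = mean_center(faces)
eigenface = top_eigenvector(covariance(centered))
# Projecting a face onto the eigenface gives a 1-D appearance code;
# recognition then compares codes instead of raw pixels.
code = sum(a * b for a, b in zip(centered[0], eigenface))
print(eigenface, code)
```

Projections onto the first few eigenfaces give a compact code per face, and recognition reduces to nearest-neighbor search in that low-dimensional space.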