IJBSCHS (2009-14-01-2) Biomedical Soft Computing and Human Sciences, Vol. 14, No. 1, pp. 11-19 (2009) [Original article]
Copyright © 1995 Biomedical Fuzzy Systems Association (Accepted on 2008.10.3)

Palmprint Based Biometric System: A Comparative Study on Discrete Cosine Transform Energy, Wavelet Transform Energy and SobelCode Methods

Edward WONG Kie Yih*, G. Sainarayanan**, Ali Chekima*
* School of Engineering and Information Technology, Universiti Malaysia Sabah, Sabah, Malaysia
** Department of Electrical and Electronics, New Horizon College of Engineering, Bangalore, India
E-mail: drsai@ieee.org
The paper was received on Nov. 20, 2007.

Abstract: Palmprint based biometric identification has gradually attracted the attention of researchers because of the richness of its features. A palmprint contains geometry features, line features, point features, texture features and statistical features. In this paper, a simple and effective methodology for a palmprint based identification system is proposed. The right hand image is captured using a digital camera without pegging or illumination arrangements. The captured image is aligned using key points identified in the hand, and the palmprint region is selected for enhancement and resizing. Different feature extraction methods, namely Discrete Cosine Transform energy features, Wavelet Transform energy features and SobelCode, are applied to the resized image to obtain feature vectors. The extracted feature vectors are matched using similarity measurement and a feedforward backpropagation neural network. The proposed schemes are tested with hand images from 101 individuals.

Keywords: Palmprint Identification, Discrete Cosine Transform, Wavelet Transform, SobelCode, Similarity Measurement, Neural Network

1. Introduction

Biometrics is the science of measuring human characteristics for the purpose of authenticating or identifying an individual. Two types of characteristics are measured in biometric technology, namely physiological characteristics and behavioral characteristics [1]. Physiological characteristics measure human body parts, while behavioral characteristics measure the actions produced by humans, such as sound, signature or posture. Behavioral characteristics are more vulnerable to change than physiological characteristics.

Several types of physiological characteristics used in biometrics are the appearance of the face, hand geometry, fingerprint, iris and palmprint. Palmprint biometrics has advantages over other types of biometric systems. A palmprint acquisition device costs less than an iris-scanning device. A palmprint is harder to imitate than a fingerprint. A palmprint biometric system can achieve higher accuracy than a hand geometry biometric system. It is also more acceptable than a face recognition system, which may raise privacy issues.

The first palmprint biometric identification system was introduced a decade ago [2]. It has gradually attracted the attention of various researchers because of the richness of its features. A palmprint contains geometry features, line features, point features, texture features and statistical features that can be used to differentiate individuals.

Palmprint geometry features include palm area, palm length and palm width. Since these geometry features are not distinctive enough to differentiate individuals, they are usually used in hierarchical palmprint biometrics or combined with finger geometry features to form a hand geometry biometric system [3].

The line features are unique for every individual.
The extraction of palm lines using a stack filter [4], Sobel and morphological operations [5], and the derivative of Gaussian [6] has been applied in earlier works. The difficulty faced in line feature extraction is that some people have unclear palm lines or strong wrinkles. Strong wrinkles are wrinkles that have approximately the same width as the principal lines. Thus, the extraction of perfect palm lines is the challenging part.

Palmprint point features require a high-resolution image, which can be obtained through a scanner. In the direct scanning method, the scanner takes several seconds to scan the whole hand. This is inconvenient for a user who is in a hurry or has to use the system several times a day. In the indirect scanning method,
the inked palmprint is applied to a piece of white paper before being scanned to obtain a high-resolution palmprint image. This method can provide a more detailed palmprint, but it is time-consuming and cannot be applied to real-time applications.

Palmprint texture features are representations of the palmprint image in a different transform space, chosen so that the targeted feature is emphasized in that space. Some of the texture features extracted in earlier work use the Fourier transform [7], the Discrete Cosine Transform [8] and the Wavelet Transform [9].

Palmprint statistical features use a low-resolution image. Instead of extracting visible palmprint features, invisible statistical features are extracted. Some palmprint statistical feature methods are Principal Component Analysis [10] and Independent Component Analysis [10]. Palmprint statistical features are sometimes grouped with palmprint texture features because they transform the palmprint image into statistical values.

2. Palmprint Biometric Methodology

A typical palmprint biometric system has four stages: image acquisition, image preprocessing, feature extraction and feature matching. Fig. 1 shows a typical palmprint biometric system.

Fig. 1. A Typical Palmprint Biometric System

A hand image can be acquired using a scanner, a CCD camera or a digital camera. A scanner can acquire high-resolution hand images but requires more time to scan. A CCD camera, as suggested in [7], can capture low-resolution images in real time. Since the focusing area of a CCD camera is limited, the hand is pegged during the acquisition process. A digital camera can acquire high-resolution images at a longer distance than a CCD camera. It can also be connected to a computer as a real-time acquisition device.

The hand image is taken in either a pegged or a peg-less environment. Pegging reduces rotational variation but does not eliminate it. Since image alignment is still required, peg-less acquisition is proposed in this work for greater user friendliness. The hand image is usually taken in front of a dark background to ease the image segmentation process. In a CCD camera setup, special lighting is required to capture a brighter hand image.

Image preprocessing has two important parts, image alignment and region-of-interest selection. Image alignment is usually done by referring to key points, because key points do not vary when the hand rotates. A key point may be the gap between two fingers [7] or the tip of a finger. Image alignment using an ellipse-fitting method [11] has also been proposed in earlier work.

Region-of-interest (ROI) selection is the cropping of the palmprint image from the hand image. The ROI mask may be square, circular or custom in shape. A square ROI mask is used in most of the earlier work. Circular and custom ROI masks [12] are used if the feature extractor requires a specific type of ROI region.

Feature extraction depends on the type of feature targeted. In some cases, different feature types are combined to form a new feature type; for example, the combination of line features and geometry features to form line geometry features is suggested in [13]. The extracted feature is usually represented and stored as a feature vector.

In feature matching, feature vectors can be compared using similarity measurement or a neural network. The complete overview of the proposed palmprint biometric system is shown in Fig. 2.
Fig. 2. Complete Overview of the Proposed Palmprint Biometric System

In this work, a palmprint based biometric system that can work in a peg-less environment is proposed. The right hand images of 101 individuals are acquired using a digital camera. The key points in the hand image are determined before the palmprint image is selected and extracted. The palmprint image is normalized and its features are extracted. Three different feature extraction methods, namely the Discrete Cosine Transform (DCT) energy feature [14], the Wavelet Transform (WT) energy feature [15] and SobelCode [16], which can be used to represent the palmprint effectively, are investigated. The palmprint features are stored as feature vectors and matched using similarity measurement and a neural network to identify the individual.

The rest of the paper is organized as follows. Section 3 explains the conditions during hand image acquisition. Section 4 describes how the key points in the hand image are determined. Palmprint image selection and extraction are explained in Sections 5 and 6 respectively. Section 7 gives an overview of feature extraction and representation. The DCT energy feature and WT energy feature methods are described briefly in Sections 8 and 9 respectively. Section 10 explains the SobelCode method and some improvements to further increase the overall accuracy. Section 11 describes the similarity measurement and the neural network used. The results for the different methods are discussed in Section 12. Section 13 summarizes the conclusions drawn from this work.
3. Hand Image Acquisition

The right-hand images of 101 different users are taken in front of a uniform dark background using a Canon PowerShot A430 digital camera. The fingers are required to be spread apart and the hand is leaned against the background. Two different backgrounds, black and dark blue, are used in this work. Different background colors are selected to test the robustness of the algorithm in separating the hand from the background when the background changes. No pegs are used to align the hand and no lighting arrangements are made in this setup, in contrast to the earlier work in [7]. Fig. 3 shows one of the right hand images.

Fig. 3. Captured Hand Image

4. Key Point Determination

The hand image is represented in Red-Green-Blue (RGB) format. Since the skin of the hand contains reddish color, the red channel is selected for image segmentation. The red channel of the hand image is separated from its background using Otsu's method [17]. Otsu's method calculates a suitable global thresholding value for every hand image according to the variance between two classes, one class being the background and the other the hand. Fig. 4 shows the binary image extracted using Otsu's method.

Fig. 4. Binary Hand Image

By referring to the border of the binary image, the location of the wrist is determined. The center of the wrist is defined as Pw. The boundary pixels of the hand image are traced using a boundary-tracking algorithm [18] from Pw in the clockwise direction. The connectivity between the boundary pixels is set to the 8-connected neighborhood to obtain a smoother hand boundary. Fig. 5 shows the boundary of the hand image.

Fig. 5. Boundary of the Hand Image

Pw is chosen as the reference for finding the key points for image alignment because it gives shorter distances to the gaps between the fingers and longer distances to the fingertips. The distance between each boundary pixel and Pw is calculated. Fig. 6 shows the distance from Pw of the boundary pixels, in clockwise order, versus the boundary pixel index.

Fig. 6. Distance of Boundary Pixels from Pw versus Boundary Pixel Index

From Fig. 6, the maxima of the graph correspond to the fingertips and the minima to the gaps between two fingers. Using a tracking algorithm, the first and third valleys of the graph are located. The first valley is the gap between the little finger and the ring finger, Key Point 1 (KP1). The third valley is the gap between the middle finger and the index finger, Key Point 2 (KP2). The key points are circled in Fig. 6. By referring to the boundary pixel indices, the positions of KP1 and KP2 in the hand image are determined.
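As an illustration of the key point search described above, the sketch below (assuming OpenCV, NumPy and SciPy are available; the function name, smoothing window and valley-spacing parameter are illustrative choices, not taken from the paper) segments the red channel with Otsu's method, traces the hand boundary and looks for valleys in the distance profile measured from the wrist centre Pw.

```python
import cv2
import numpy as np
from scipy.signal import argrelextrema

def find_key_points(rgb_image, wrist_center):
    """Rough key-point search: returns boundary points at valleys of the
    distance-from-wrist profile (candidate finger gaps such as KP1, KP2)."""
    red = rgb_image[:, :, 0]                              # red channel of the RGB image
    _, binary = cv2.threshold(red, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu global threshold
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)            # hand boundary pixels
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)     # (x, y) points

    # Distance of every boundary pixel from the wrist centre Pw:
    # maxima correspond to fingertips, minima to the gaps between fingers.
    dist = np.linalg.norm(boundary - np.asarray(wrist_center, float), axis=1)
    smooth = np.convolve(dist, np.ones(15) / 15, mode="same")        # light smoothing
    valleys = argrelextrema(smooth, np.less, order=30)[0]            # local minima
    return [tuple(boundary[v]) for v in valleys]                     # 1st and 3rd -> KP1, KP2
```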
Let α be the length of the wrist; α is considered long if it is more than 100 pixels from the edge of the image. Let β be the orientation of the hand relative to the orientation of the wrist (up, parallel or down). Fig. 7 shows the definitions of α and β in graphical form. Table 1 shows the hand orientations and the corresponding key point distance differences.

Fig. 7. α and β

Tab. 1. Key Point Distance Differences (pixels)

  α \ β    Up    Parallel    Down
  Long     10    7           4
  Short    5     4           4

From Table 1, it is noticeable that the maximum difference between the extracted key points and the exact key points is only 10 pixels in a hand image of 768 x 1024 pixels.

5. Palmprint Selection

A variable-size mask is created to crop the Region-of-Interest (ROI). By referring to the distance between the two key points, the location of the palmprint area is estimated. The distance between the two key points, A, is calculated using the following equation:

A = \sqrt{(KP1_x - KP2_x)^2 + (KP1_y - KP2_y)^2}    (1)

where (x, y) are the coordinates of the key points KP1 and KP2 in the image.

A spans two finger roots and some gaps between the fingers. The width of the fingers is approximately 0.5 times A. To avoid the ROI mask including the background between the thumb and the index finger, only half of the finger width is considered. Thus, the maximum extension at each end of the line connecting KP1 and KP2 (LineKP) is 0.25 times A. In this work, B is defined as 0.2 to ensure that the entire ROI mask is situated within the palmprint area. The length of each side of the square is calculated using

ROI\_Length = (1 + B + B) \times A    (2)

Since the palm lines are located below the finger roots, the ROI mask is lowered by P pixels parallel to LineKP, where P is 0.2 times A:

P = B \times A    (3)

Fig. 8 shows the selection of the palmprint area using the ROI mask according to the key points.

Fig. 8. Selection of Palmprint Area

6. Palmprint Extraction

Since the original image is large, a smaller hand image is cropped out of the original hand image before image alignment using the key points and palmprint image extraction. Fig. 9 shows the proposed image alignment and ROI selection method.

Fig. 9. Proposed Image Alignment and ROI Selection Method

The cropped hand image is rotated by θ degrees. The hand images are rotated to align the rotationally variant hand images to a predefined direction. θ is calculated from the key points as shown in Fig. 10.

Fig. 10. Calculation of θ

The rotated hand image has zero values in the non-image locations. Let S be the diagonal pixels of the rotated hand image, that is, the pixels whose x and y coordinates are equal:

S = [R(1,1), R(2,2), R(3,3), \ldots, R(l,l)]    (4)

where R is the rotated hand image of size l x l. The index of the first non-zero pixel (Idx) is located. The rotated hand image is cropped by Idx pixels from its border to obtain the palmprint image. Figs. 11 and 12 show the extracted palmprint images for the same individual and for different individuals respectively.

From Fig. 11, it is shown that the ROI selection for hand images acquired without pegging is consistent for the same individual. The palmprint images of different individuals are also distinguishable, as shown in Fig. 12.

Fig. 11. Palmprint Images for the Same Individual

Fig. 12. Palmprint Images for Different Individuals
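A minimal sketch of the ROI geometry of Eqs. (1)-(3), assuming B = 0.2 as in the paper; the function name and the rotation-angle computation (θ taken from the slope of LineKP, as suggested by Fig. 10) are assumptions added for illustration only.

```python
import numpy as np

def roi_geometry(kp1, kp2, B=0.2):
    """Return the ROI side length, downward shift P and rotation angle (degrees)
    derived from key points KP1 and KP2, given as (x, y) coordinates."""
    kp1, kp2 = np.asarray(kp1, float), np.asarray(kp2, float)
    A = np.sqrt(np.sum((kp1 - kp2) ** 2))        # Eq. (1): distance between the key points
    side = (1 + B + B) * A                       # Eq. (2): side length of the square ROI
    P = B * A                                    # Eq. (3): shift below the finger roots
    theta = np.degrees(np.arctan2(kp2[1] - kp1[1],
                                  kp2[0] - kp1[0]))   # orientation of LineKP (assumed)
    return side, P, theta
```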
7. Feature Extraction and Representation

A palmprint image contains various types of features. Since texture features and line features require only a low-resolution image and can distinguish people effectively, these features are investigated in this study. The extracted features are represented as feature vectors for easy comparison at a later stage. The feature representation must be short but contain the vital information that differentiates individuals. Three types of feature extraction and representation, the Discrete Cosine Transform (DCT) energy feature, the Wavelet Transform (WT) energy feature and SobelCode, are investigated in this work. The DCT energy feature analyzes the texture information of the palmprint image in blocks of 16 x 16 pixels. The WT energy feature analyzes the palmprint at multiple resolution levels, where each resolution level targets a specific type of palm line. Each detail coefficient sub-band is divided into a 4 x 4 grid of blocks and the energy of each block is calculated; thus, energy features from the six decomposition levels are used to represent every block. Since the line features of the palmprint image are unique, SobelCode, which extracts line features in specific directions, is also tested in this study. SobelCode represents the palmprint pixel by pixel using a resized version of the palmprint image.

8. Discrete Cosine Transform

The Discrete Cosine Transform (DCT) is a Fourier-related transform that is roughly equivalent to a Discrete Fourier Transform of twice the length operating on real data with even symmetry. After the palmprint image is extracted, it is enhanced to improve its contrast. The palmprint image is in RGB color format. In [20], the histogram of each color channel of the palmprint image is adjusted to the full 256 bins. The palmprint image is then converted to a grayscale intensity image. The enhanced image is resized to 256 x 256 pixels to normalize the size of all the palmprint images.

The DCT is applied to every 16 x 16 pixel block of the normalized palmprint image. The DCT coefficients obtained in every block are separated into four sub-regions. The DCT coefficients in each sub-region are squared and summed to obtain the DCT energy. The DCT energy features are arranged to form the feature vector. Fig. 13 shows the DCT energy features for the palmprint images of Fig. 12.

Fig. 13. DCT Energy Features for Different Individuals

In Fig. 13, the DCT energy features for Fig. 12 (a) are plotted in light gray, while the DCT energy features for Fig. 12 (b) are plotted in dark gray. It is observable that the DCT energy features for different individuals differ in both magnitude and location.
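A minimal sketch of the DCT energy feature just described, assuming SciPy's dctn and a quadrant split of each block's coefficients (the paper only states that each block is separated into four sub-regions): 256 blocks of 16 x 16 pixels, four energies per block, giving a 1024-element feature vector.

```python
import numpy as np
from scipy.fft import dctn

def dct_energy_features(palm, block=16):
    """palm: 256 x 256 grayscale palmprint; returns a 1024-element energy vector."""
    palm = np.asarray(palm, dtype=float)
    half = block // 2
    feats = []
    for r in range(0, palm.shape[0], block):
        for c in range(0, palm.shape[1], block):
            coeffs = dctn(palm[r:r + block, c:c + block], norm="ortho")   # block DCT
            # Four sub-regions, assumed here to be the four quadrants of the block.
            for sub in (coeffs[:half, :half], coeffs[:half, half:],
                        coeffs[half:, :half], coeffs[half:, half:]):
                feats.append(np.sum(sub ** 2))                            # DCT energy
    return np.array(feats)
```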
9. Wavelet Transform

In the Discrete Cosine Transform, the palmprint image is analyzed at a single resolution. Since the palm lines, such as principal lines, wrinkles and ridges, can only be captured at different resolutions, multi-resolution analysis using the Wavelet Transform is proposed. The multi-resolution wavelet transform can extract different types of lines at different resolution levels. Level-one decomposition allows the extraction of ridge information. As the decomposition level increases, larger palm lines such as wrinkles and principal lines are extracted.

The palmprint image is in RGB format. In [21], the palmprint image is converted to a grayscale intensity image before its histogram is adjusted to the full 256 bins. The enhanced image is decomposed using six levels of Haar wavelet decomposition.

Let D represent the horizontal (H), vertical (V) or diagonal (D) detail coefficients. Each detail coefficient sub-band cD is separated into a 4 x 4 grid of blocks. For every coefficient block, the wavelet coefficients are squared and summed to obtain its energy value. The wavelet energies at the different decomposition levels are combined and normalized before being arranged to form the feature vector. Fig. 14 shows the feature representation method discussed above. Fig. 15 shows the wavelet energy features for inter-class palmprint images.

Fig. 14. WT Energy Feature Representation Method

Fig. 15. Wavelet Energy Features for Different Individuals

From Fig. 15, the differences between inter-class feature vectors (feature vectors from different individuals) are clear.
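A hedged sketch of the wavelet energy feature using PyWavelets: a six-level Haar decomposition whose detail sub-bands are each divided into a 4 x 4 grid of blocks, yielding 6 x 3 x 16 = 288 energies. The exact block partitioning and the normalization (L2 here) are assumptions, not specifications from the paper.

```python
import numpy as np
import pywt

def wt_energy_features(palm, levels=6, wavelet="haar"):
    """palm: 256 x 256 grayscale palmprint; returns a 288-element energy vector."""
    coeffs = pywt.wavedec2(np.asarray(palm, float), wavelet, level=levels)
    feats = []
    for detail_level in coeffs[1:]:                 # skip the approximation sub-band
        for band in detail_level:                   # horizontal, vertical, diagonal details
            for row_chunk in np.array_split(band, 4, axis=0):
                for blk in np.array_split(row_chunk, 4, axis=1):
                    feats.append(np.sum(blk ** 2))  # energy of one coefficient block
    feats = np.array(feats)
    return feats / (np.linalg.norm(feats) + 1e-12)  # normalized feature vector (assumed L2)
```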
10. SobelCode

Besides analyzing the palmprint image with texture features, the palmprint image can also be compared using line features, as in SobelCode. In [22], four different Sobel operators, at 0, 45, 90 and 135 degrees, are used to extract the line details of the palmprint image in the selected directions.

Each RGB channel of the palmprint image is adjusted to the full 256 bins before the image is converted to a grayscale intensity image. The enhanced images are then resized to 60 x 60 pixels. Sobel operators of 0, 45, 90 and 135 degrees are applied to the resized palmprint images. Fig. 16 shows the four Sobel operators.

Fig. 16. 3 x 3 Sobel Operators at 0, 45, 90 and 135 Degrees

The Sobel results are thresholded according to the sign of the value. Fig. 17 (a) and (b) show the resized enhanced palmprint images for the same individual, while (c) shows the resized enhanced image of another user. Fig. 18 shows the SobelCode for the resized enhanced images in Fig. 17.

Fig. 17. Resized Enhanced Palmprint Images for (a) and (b) the Same Individual and (c) a Different Individual

Fig. 18. SobelCode for (a) and (b) the Same Individual and (c) a Different Individual
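An illustrative SobelCode sketch, assuming the standard 3 x 3 Sobel kernel and its rotations (the paper's exact 45- and 135-degree masks may differ) and the sign threshold described above. Each direction yields one 60 x 60 binary plane.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_0   = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)   # 0-degree operator
SOBEL_45  = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float)   # 45-degree operator
SOBEL_90  = SOBEL_0.T                                                # 90-degree operator
SOBEL_135 = np.array([[0, -1, -2], [1, 0, -1], [2, 1, 0]], float)    # 135-degree operator

def sobel_code(palm60):
    """palm60: 60 x 60 enhanced grayscale palmprint; returns 4 binary planes (4, 60, 60)."""
    palm60 = np.asarray(palm60, dtype=float)
    planes = []
    for kernel in (SOBEL_0, SOBEL_45, SOBEL_90, SOBEL_135):
        response = convolve(palm60, kernel, mode="nearest")   # directional line response
        planes.append((response >= 0).astype(np.uint8))       # threshold by the sign
    return np.stack(planes)
```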
11. Feature Matching

Two types of feature matching, namely similarity measurement and a neural network, are used in this work. In similarity measurement, the likeness between two feature vectors is calculated. Examples of similarity measurements are the Euclidean distance and the Hamming distance. In this paper, the DCT energy features and the wavelet energy features are matched using the Euclidean distance, while the SobelCode is matched using the Hamming distance, because the SobelCode consists of only ones and zeros (logical data).

The Euclidean distance calculates the sum of squared differences between two feature vectors, while the Hamming distance counts the total number of differing pixels between two feature vectors. The normalized Hamming distance tends towards one if the two feature vectors are approximately the same and towards zero if they are different. This is the inverse of the Euclidean distance, where zero means the two feature vectors are identical (no difference) and a larger value means greater dissimilarity. The general equation for the Euclidean distance is

E\_Dist = \sum_{l=1}^{k} (FV_{i,l} - FV_{j,l})^2    (5)

where FV_{i,l} is feature vector i with length k.

There are ten sets of hand images in this work. Each set contains one hand image from each of the 101 individuals. From the Euclidean distance calculations for five sets of hand images, a threshold point is calculated. This threshold point is used with the Euclidean distances of the remaining five sets of hand images to separate genuine users from impostors.

The feature matching for SobelCode using the Hamming distance is as follows. Firstly, the central 56 x 56 pixels of the SobelCode are cropped out. The cropped SobelCode is compared with the SobelCode stored in the database using a sliding window method. Fig. 19 shows the sliding window method used to find the Hamming distance.

Fig. 19. Sliding Window

The Hamming distance is defined as

HD_y = \sum_{h=1}^{4} \sum_{i=1}^{56} \sum_{j=1}^{56} \left( FV_1(i,j) \oplus FV_2(i,j) \right)    (6)

where h ∈ {0, 45, 90, 135 degrees}, i and j are the row and column of the feature vector, y is the index of the Hamming distance and ⊕ is the exclusive OR operation. A total of 25 Hamming distances is obtained between two SobelCodes using the sliding window method. The minimum of the 25 Hamming distances is

HD = \min_{y}(HD_y)    (7)

where y runs from 1 to 25. The minimum Hamming distance is then normalized to the range 0 to 1 using the following formula:

HD = 1 - \frac{HD}{56 \times 56 \times 4}    (8)

where 56 x 56 is the size of the cropped SobelCode and 4 is the number of SobelCode directions.
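A minimal sketch of the sliding-window Hamming match of Eqs. (6)-(8), assuming SobelCode planes of shape (4, 60, 60) as in the earlier sketch; with a 56 x 56 central crop there are 5 x 5 = 25 offsets, and the best distance is normalized so that 1 means identical codes.

```python
import numpy as np

def sobelcode_similarity(query, stored, crop=56):
    """query, stored: binary SobelCode arrays of shape (4, 60, 60)."""
    margin = query.shape[1] - crop                       # 60 - 56 = 4 -> 25 window offsets
    start = margin // 2
    centre = query[:, start:start + crop, start:start + crop]   # central 56 x 56 region
    distances = []
    for dy in range(margin + 1):
        for dx in range(margin + 1):
            window = stored[:, dy:dy + crop, dx:dx + crop]
            distances.append(np.sum(centre != window))   # Eq. (6): XOR count over 4 planes
    hd = min(distances)                                  # Eq. (7): best-aligned distance
    return 1.0 - hd / (crop * crop * 4)                  # Eq. (8): normalized similarity
```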
A scaled conjugate gradient-based feedforward backpropagation neural network is also used in this work as a classifier. The proposed neural network consists of two layers, one hidden layer and one output layer. The input layer has the size of the feature vector. The hidden layer has either the same or double the size of the input data. The output layer has 101 neurons to represent the identity of every user. The tangent sigmoid activation function is used in the network training.

There are ten sets of hand images in this study. Three sets of hand images are used for neural network training. Another three sets of hand images are used to find the threshold point. The threshold point acts as a breakpoint to differentiate genuine users from impostors. The remaining four sets of data are used to test the trained network. The neural network training stops when the maximum of 20 000 epochs is reached, when the minimum gradient of 10^-7 is reached, or when the desired performance goal is reached.

The accuracy of the neural network is calculated as the total number of correctly identified templates over the total number of user templates. After the threshold point is obtained, it is set as a global threshold. When a hand image is presented to the system, the image is pre-processed and its features are extracted and represented. The feature vector is passed through the trained network. If the feature vector belongs to a genuine individual, a true positive is obtained. If it belongs to someone else, a false positive or false acceptance is detected.

12. Results

The False Acceptance Rate (FAR) is the percentage of wrongly accepted individuals over the total number of impostor matchings. The False Rejection Rate (FRR) is the percentage of wrongly rejected individuals over the total number of genuine matchings. For similarity measurement (Euclidean distance and Hamming distance), the accuracy is calculated as follows:

Accuracy = \left( 1 - \frac{FAR + FRR}{2} \right) \times 100    (9)

Table 2 shows the FAR, FRR and accuracy of the different feature types using the Euclidean distance or the Hamming distance.

Tab. 2. Similarity Measurement Results (accuracy, percent)

  Feature Type   SM      Enhanced Image Type
                         E0      E1      E2      E3      E4
  DCT Energy     ED      90.95   94.47   94.40   94.58   94.38
  WT Energy      ED      88.10   91.71   93.90   91.68   93.84
  SobelCode      HD      97.30   97.30   97.32   97.30   97.31

  Note: SM - Similarity Measurement
        ED - Euclidean Distance
        HD - Hamming Distance
        E0 - Original Grayscale Palmprint Image
        E1 - Adjusted Palmprint Image
        E2 - Histogram Equalized Palmprint Image
        E3 - Individually Adjusted Palmprint Image
        E4 - Histogram Equalized Individually Adjusted Palmprint Image
The results in Table 2 are obtained from the same database with 1000 hand images (10 right hand images for 100 different individuals). From Table 2, it is shown that SobelCode achieves higher accuracy than the DCT energy feature and the WT energy feature. The enhanced palmprint images (E1 to E4) achieve higher accuracy than the original grayscale palmprint images. The histogram equalization methods work better for the wavelet energy and SobelCode methods. The histogram equalized palmprint image achieves higher accuracy than the histogram equalized individually adjusted palmprint image.

The DCT energy feature represents every block with a single energy value. The WT energy feature uses several energy values to represent the same image block at different resolution levels. The repetition of data representing the same block reduced the accuracy of the system in this study. On the other hand, SobelCode, which also represents the image with logical values, provides better accuracy than the DCT energy feature. This is because SobelCode has 60 x 240 = 14400 comparisons with which to determine the identity of the individual, compared with the DCT energy feature, whose feature length is 1024.

Fig. 20 shows the graph of FAR versus FRR for the SobelCode method [16]. The Equal Error Rate (ERR) is the point where the FAR is equal to the FRR.

Fig. 20. Graph of FAR versus FRR for the SobelCode Method

From Fig. 20, the FAR equals the FRR at 0.035, and the accuracy of the system at the ERR is 93.32 percent, calculated using the equation below. The FAR and FRR in Fig. 20 are normalized to 0.5, so the ERR value needs to be multiplied by 2 to obtain the correct ERR:

Accuracy = (1 - (ERR \times 2)) \times 100    (10)

The SobelCode is too large to form an input feature vector for the neural network. To use the SobelCode as the input of the neural network, 14400 (60 x 240 pixels) input nodes would be required. Moreover, the neural network training is computationally expensive. Thus, the SobelCode method is not tested with the neural network in this study. Table 3 shows the results of the neural network. The number of hidden neurons is set equal to the length of the feature vector; in this case, the DCT energy feature has 1024 hidden neurons, while the WT energy feature has 288 hidden neurons.

Tab. 3. Neural Network Results (accuracy, percent)

  Feature       E1      E2      E3      E4
  DCT Energy    96.02   95.87   96.41   96.02
  WT Energy     98.17   97.89   98.62   98.40

From Table 3, it is noticeable that the neural network slightly increases the accuracy of the palmprint biometric system compared to the similarity measurement methods. An accuracy of 98 percent can be obtained using the WT energy features with the neural network. Wavelet energy features may not perform as well as DCT energy features under similarity measurement, but they perform better when a neural network is used as the classifier.

In this work, the time taken to identify an individual is less than 10 seconds. The program code will be optimized in the near future.
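For reference, a small worked example of the accuracy measures of Eqs. (9) and (10); the input values below are illustrative only and are not taken from the paper's tables.

```python
def accuracy_from_far_frr(far, frr):
    """Eq. (9): accuracy in percent from FAR and FRR (both given as fractions)."""
    return (1 - (far + frr) / 2) * 100

def accuracy_from_err(err_half_normalized):
    """Eq. (10): FAR and FRR in Fig. 20 are normalized to 0.5, so the ERR is doubled."""
    return (1 - err_half_normalized * 2) * 100

print(accuracy_from_far_frr(0.04, 0.02))   # 97.0 (illustrative values)
print(accuracy_from_err(0.03))             # 94.0 (illustrative value)
```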
13. Conclusion

The right hand images of 101 different individuals were acquired using a digital camera. The hand images were taken without pegging or lighting arrangements. Each hand image is segmented and its palmprint area is extracted using a square ROI mask. From Figs. 11, 12 and 17, it is shown that the extracted palmprint image is distinct for different individuals. The palmprint images are enhanced and resized to a predefined size. DCT energy features, WT energy features and SobelCode information are extracted from the resized enhanced palmprint image. The feature vectors are compared using similarity measurement and a neural network.

From the study, the DCT energy features obtain the highest accuracy compared to the wavelet energy and SobelCode features. An accuracy of 96.41 percent can be obtained using the DCT energy features with the neural network. For similarity measurement, SobelCode can achieve an accuracy of 94.84 percent, compared to the DCT energy feature (94.62 percent) and the WT energy feature (91.52 percent). All of the feature extraction methods achieve more than 91 percent accuracy. The neural network classifies the palmprint images better than the similarity measurement. However, similarity measurement is faster and easier to implement since it does not require training. In the future, a palmprint biometric identification system using multiple features will be investigated to further improve the accuracy of the system.

References

[1] David Zhang (2004): "Palmprint Authentication", Kluwer Academic Publishers.
[2] W. Shu and D. Zhang (1998): "Automated Personal Identification by Palmprint", Optical Engineering, vol. 37, no. 8, pp. 2659-2362.
[3] Yaroslav Bulatov, Sachin Jambawalikar, Piyush Kumar and Saurabh Sethia (2002): "Hand Recognition Using Geometric Classifiers", Manuscript.
[4] P.S. Wu and M. Li (1997): "Pyramid Edge Detection Based on Stack Filter", Pattern Recognition Letters, vol. 18, no. 4, pp. 239-248.
[5] C.C. Han, H.L. Cheng, C.L. Lin and K.C. Fan (2003): "Personal Authentication Using Palm-print Features", Pattern Recognition, vol. 36, no. 2, pp. 371-381.
[6] Xiangqian Wu, Kuanquan Wang and David Zhang (2006): "Palmprint Texture Analysis Using Derivative of Gaussian Filters", International Conference on Computational Intelligence and Security 2006, vol. 1, Nov. 2006, pp. 751-754.
[7] D. Zhang, W. K. Kong, J. You and M. Wong (2003): "Online Palmprint Identification", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1041-1050.
[8] Xiao-Yuan Jing and David Zhang (2004): "A Face and Palmprint Recognition Approach Based on Discriminant DCT Feature Extraction", IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 34, no. 6, pp. 2405-2415.
[9] Xiang-Qian Wu, Kuan-Quan Wang and David Zhang (2002): "Wavelet Based Palmprint Recognition", Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, pp. 1253-1257.
[10] Tee Connie, Andrew Teoh, Michael Goh and David Ngo (2003): "Palmprint Recognition with PCA and ICA", Conference of Image and Vision Computing New Zealand 2003 (IVCNZ'03), pp. 227-232.
[11] A. Kumar, D.C.M. Wong, H.C. Shen and A.K. Jain (2003): "Personal Verification Using Palmprint and Hand Geometry", Proceedings of the Fourth International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA), pp. 668-678.
[12] C. Poon, D.C.M. Wong and H.C. Shen (2004): "A New Method in Locating and Segmenting Palmprint into Region-of-Interest", Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), vol. 4, pp. 533-536.
[13] K. Y. E. Wong, G. Sainarayanan and Ali Chekima (2006): "Palmprint Authentication Using Relative Geometric Features", 3rd International Conference on Artificial Intelligence and Engineering Technology (ICAIET 2006), pp. 743-748.
[14] K. Y. E. Wong, G. Sainarayanan and Ali Chekima (2007): "Palmprint Identification Using Discrete Cosine Transform", World Engineering Congress 2007, pp. 85-91.
[15] K. Y. E. Wong, G. Sainarayanan and Ali Chekima (2007): "Palmprint Identification Using Wavelet Energy", International Conference on Intelligent & Advanced Systems (ICIAS 2007), 25th-28th Nov. 2007.
[16] K. Y. E. Wong, G. Sainarayanan and Ali Chekima (2007): "Palmprint Identification Using SobelCode", Malaysia-Japan International Symposium on Advanced Technology (MJISAT 2007), 12th-15th Nov. 2007.
[17] N. Otsu (1979): "A Threshold Selection Method From Gray-Level Histograms", IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66.
[18] The MathWorks, Image Processing Toolbox, "bwtraceboundary" (2005): URL: http://www.mathworks.com/access/helpdesk/help/toolbox/images/bwtraceboundary.html, accessed on 6 Nov. 2007.

Edward WONG Kie Yih
He received the B.Sc. degree in microelectronics from Campbell University, North Carolina, USA in 2005. He has been working as a research assistant at Universiti Malaysia Sabah, Malaysia. His present research interests include biometrics and image processing. He is a graduate member of IEEE.
G. Sainarayanan
He completed his B.E. (Electronics & Instrumentation) from Annamalai University, his M.E. (Control Systems) from PSG College of Technology and his Ph.D. in Image Processing from Universiti Malaysia Sabah in 2002. His areas of research are image processing, artificial intelligence and control systems. He has published one book, four book chapters and 85 technical publications in international journals and conferences. He is a recipient of the Sir Thomas Memorial award from the Institution of Engineers (India) and the Excellent Scientist award from the Ministry of Higher Learning, Malaysia. His research outcome won a Silver Medal at the 32nd International Exhibition of Inventions, New Techniques and Products, Geneva, Switzerland, April 2004. Currently he is working as Professor and Head of the Department of Electrical and Electronics Engineering at New Horizon College of Engineering, Bangalore.

Ali Chekima
He received his BEng in Electronics from Ecole Nationale Polytechnique of Algiers in 1976 and his MSc and PhD, both in Electrical Engineering, from Rensselaer Polytechnic Institute, Troy, New York, in 1979 and 1984 respectively. He joined the Electronics Department at the Ecole Nationale Polytechnique in 1984, where he was Chairman of the Scientific Committee of the Department as well as in charge of the Postgraduate Program while teaching at both graduate and undergraduate levels. He was a member of several scientific committees at the national level. He has been working as an Associate Professor at the School of Engineering and Information Technology at Universiti Malaysia Sabah since October 1996. His research interests include source coding, antennas, signal processing, pattern recognition, medical imaging, biometrics, data compression, artificial intelligence and data mining. He has published more than 80 papers in refereed journals, conferences, book chapters and research reports.