Eyes-Free Barcode Detection on Smartphones with Niblack's Binarization and Support Vector Machines

Vladimir Kulyukin, Aliasgar Kutiyanawala, and Tanwir Zaman
Computer Science Department, Utah State University, Logan, UT, USA

Abstract— An eyes-free barcode detection algorithm is presented for blind and visually impaired (VI) smartphone users. The algorithm uses Niblack's binarization filter and support vector machines (SVMs) to detect barcode presence in image regions. The algorithm is implemented on the Google Nexus One smartphone with Android 2.3.3. The algorithm was evaluated in a software experiment on real product images and in another experiment by three blindfolded sighted individuals who used the smartphone to detect UPC barcodes on real grocery products. Our approach complements current R&D efforts on eyes-free barcode scanning by advocating the position that in some circumstances sophisticated vision techniques may not be needed to make barcode scanning available for VI smartphone users.

Keywords: Accessible Shopping, Eyes-Free Barcode Localization & Decoding, Assistive Technology, Niblack's Filter, Support Vector Machines

1. Introduction

According to the World Health Organization, there are 285,000,000 VI people worldwide, of whom 39,000,000 are blind and 246,000,000 have low vision [26]. Independent grocery shopping is one of the greatest challenges faced by VI and blind individuals. A typical modern supermarket has a median area of 4,500 square meters and stocks an average of 38,718 products [2]. Service delays are frequently reported in the literature on blind shopping [14]. When they arrive at the store, VI individuals typically request and wait for store staffers to guide them and read product labels to them. Service delays, however, are not the only accessibility barrier that VI shoppers have to overcome. Some staffers are unfamiliar with the store layout, others become irritated with long product searches, and still others may have inadequate language skills. These barriers cause many VI shoppers to abandon independent shopping and rely on their friends, family members, or other caregivers to meet their shopping needs.

Several systems (e.g., RoboCart [18], [13], [10], ShopTalk [21], [16], [17], and GroZi [20], [27]) have been developed to address independent shopping. Unfortunately, these systems rely on specialized hardware or require that supermarkets be instrumented with special sensors. Many VI individuals use white canes, handle guide dogs, and operate wayfinding devices. Requiring that they operate yet another device increases their ergonomic load. Supermarkets may also resist installing sensors on the premises due to subsequent maintenance costs and customer privacy concerns.

An optimal solution for both sides is accessible shopping systems that work on the devices that VI individuals already own and know how to operate and place no instrumentation requirements on supermarkets. In 2006, we began our work on ShopTalk [17], a wearable system for independent blind supermarket shopping whose key components were an OQO model 01 computer, a Belkin numeric keypad, and a wireless barcode scanner. The system was our first attempt (and the first attempt reported in the accessible shopping literature) to use MSI shelf barcodes as topological points for locating products through verbal directions. Our field experiments with ShopTalk showed that VI individuals can independently scan MSI barcodes on shelves and UPC barcodes on products. In 2008-2009, when it became clear to us that many VI people had endorsed smartphones as useful devices, ShopTalk was ported onto the Nokia E70 smartphone running Symbian OS 9.1 [12]. The port was called ShopMobile 1. The smartphone was connected to Baracoda, a small Bluetooth barcode pencil scanner.

In 2010, as megapixel cameras were becoming commonplace on smartphones, we began our work on ShopMobile 2 [15]. Unlike its predecessors, ShopMobile 2 no longer requires a barcode scanner, because it uses vision techniques for barcode recognition. The first version of the system was implemented on a Google Nexus One smartphone equipped with a five megapixel camera running Android 2.1 on a 1 GHz processor with 512 MB of RAM.

In our previous publications on ShopMobile 2 [30], [15], [39], barcode scanning was divided into three modules: interactive camera alignment, barcode localization, and barcode decoding. Interactive camera alignment is a closed feedback loop that allows VI users to align the phone camera with fixed surfaces in the pitch and yaw planes through vibratory and audio feedback to improve the quality of captured frames. Barcode localization is a process of finding the smallest rectangular regions likely to contain barcodes. Barcode decoding is a process of obtaining digit sequences from rectangular images obtained through barcode localization.

We realized the necessity of barcode detection after performing an experiment in which two VI participants were asked to decode UPC barcodes on ten grocery products. The participants took an average of 83.6 and 93.4 seconds,
respectively, to scan a barcode. When analyzing the video footage of the experiment, we found that the participants spent most of their time looking for barcodes on surfaces that did not contain them. For example, cereal boxes have six surfaces, only one of which has a barcode. As there was no way for the participants to quickly determine which surface contained a barcode, they spent equal time on all surfaces.

One way to reduce barcode scanning times is to add a module that determines quickly if a given frame contains a barcode before a more expensive barcode localization process is applied. In this paper, we present an eyes-free barcode detection module which the previous versions of ShopMobile 2 did not include. This module implements an eyes-free barcode detection algorithm for blind and visually impaired (VI) smartphone users. The algorithm uses Niblack's binarization filter and support vector machines (SVMs) to detect barcode presence in image regions. The module is implemented on the Google Nexus One phone with Android 2.3.3.

We do not claim that the techniques proposed in this paper are ideal for eyes-free barcode detection. These techniques should be viewed in the context of assistive technology whose principal objective is the design, development, and evaluation of accommodation systems for individuals with specific disabilities in specific environments. Advances in computer vision or wearable and mobile computing are desirable by-products, but, in and of themselves, are necessarily secondary objectives.

Our paper is organized as follows. In Section 2, we discuss related work. Section 3 presents a use case that illustrates how our system is used and where barcode detection fits in. In Section 4, we describe our eyes-free barcode detection algorithm, which is based on a modification of Niblack's classic binarization filter and uses SVMs to detect the presence of barcodes in image regions. Section 5 presents two experiments with the system. In the first experiment, our barcode detection software was tested on real product images. In the second experiment, our software was tested by three blindfolded sighted individuals who used the smartphone to detect UPC barcodes on real grocery products. Section 6 summarizes our work and presents conclusions.

2. Related Work

Vision-based barcode decoding is a well-known research problem. Ohbuchi et al. [23] have demonstrated barcode scanning using a camera, a mobile application processor, a digital signal processor (DSP), and a display device. Many systems [25], [1], [19], [4] and applications [37], [28] have been developed for scanning barcodes with mobile phones. These solutions have been developed for sighted users and may not be suitable for VI individuals. For example, RedLaser [37] and ZXing [28] are two popular barcode scanning applications for smartphones. These solutions require that users carefully position the smartphone's camera with respect to the barcode for scanning. One exception is the system described in [5]. This system was developed specifically for VI individuals. However, this system assumes that colored fiducials are placed next to barcodes for fast localization. This system also uses a custom-made variation of the UPC standard for encoding barcodes. Another exception is the algorithm proposed by Gallo and Manduchi [9] for decoding barcodes for VI individuals, which has not yet been tested on smartphones. To the best of our knowledge, neither system handles MSI barcodes.

We endorse these R&D efforts but differ from them in that our approach is based on the hypothesis that sophisticated vision techniques may not be needed to make barcode scanning available for VI mobile phone users. Simple vision techniques can be augmented with interactive user feedback loops to improve the quality of captured images. We would also like to emphasize that our approach offers a self-contained solution that runs only on the smartphone without consuming any external computing resources.

3. Use Case

To understand how eyes-free barcode scanning is realized in ShopMobile 2, let us consider a typical use case. Suppose Alice, a completely blind shopper, wants to scan a UPC barcode on a box, a bottle, or a can. Alice knows, through training and previous experience, that UPC barcodes are usually located on the bottom side of boxes and on the sides of cans and bottles. If the product is a box, she finds the bottom side of the box and aligns one edge of her phone with the corresponding edge of the box. If it is a bottle, Alice aligns the bottom edge of her phone with the bottom edge of the bottle. If the product is a can, Alice aligns either the top or the bottom edge of her phone with the corresponding edge of the can.

After placing her phone on the surface, Alice slowly moves it away from the product. The system detects this motion and starts a timer; when the timer reaches a threshold, the system notifies her through a beep that she should stop moving her smartphone. It is assumed that the phone is moved slowly without abrupt motions. In actual experiments, each participant learns these moves. The preset timer value is set to stop when the phone is 10 to 15 cm away from the product surface. The system then starts barcode detection, which continuously takes images in video mode and analyzes each image for the presence of a barcode. If Alice inspects a box and does not find a barcode on the current side, she switches to a different side and repeats the entire procedure. If she inspects a can or a bottle, she holds her phone in place and slowly rotates the product in her hand. This strategy was discovered in actual experiments with VI participants: it turns out that it is more effective to rotate cans than phone cameras.

When a barcode is detected, the system beeps to let Alice know that a barcode is present and attempts to bypass
barcode localization altogether, since it is rather expensive, and decode the barcode in the captured image directly, which sometimes works when the image contains little background noise (e.g., text and graphics). If the barcode is decoded successfully, it is read out to her through speech synthesis. If not, the system starts barcode localization to identify the precise location of the barcode in the captured image. The barcode decoding module then attempts to decode the barcode in the localized image. If the decoding succeeds, the barcode is read out to Alice. If not, the image is rotated 90 degrees and the above procedure is repeated on the rotated image. If the barcode is still not decoded, the system checks whether the barcode is partially present on one of the sides and asks Alice to move her smartphone, depending on where in the image the barcode part is detected. After the move is completed, barcode scanning starts from scratch. Interactive camera alignment runs in the background all the time to assist Alice in keeping her phone aligned with the product.

4. Barcode Detection

Figure 1 shows an overview of the barcode detection algorithm. An image is obtained from the camera and binarized into a bi-level image. The binarized image is divided into subimages, and x and y gradients are computed for each subimage. These gradient regions are classified by two SVMs as barcode or non-barcode regions. Finally, the algorithm looks at the number of barcode and non-barcode regions to determine if the image contains a barcode.

[Fig. 1: Barcode detection algorithm. Pipeline: Get Image from Camera -> Binarize Image using Modified Niblack Filter -> Compute Image Gradients along the x and y axes -> Classify Image Gradients using SVMs -> Determine if a Barcode Exists in the Image.]

4.1 Binarizing Images

We have slightly modified Niblack's binarization filter [6]. We chose to work with Niblack's filter because we found some research evidence that both Niblack and adaptive Niblack methods minimize noise to the same level as Sauvola's algorithm [7]. Both Niblack and Sauvola algorithms have fixed parameters and deal with varying background images, but Sauvola's algorithm may be more sensitive to background changes and more difficult to adapt to varying backgrounds [8].

Niblack's filter determines the local threshold T(x, y) for each pixel located at (x, y) in the image Y as

T(x, y) = m(x, y) + k × s(x, y),

where m(x, y) and s(x, y) are the mean and standard deviation (STD), respectively, for an n × n window centered at (x, y), and k is a user-defined parameter, usually negative. Since computing T(x, y) for each pixel is expensive, Y is divided into n × n subimages, and a single threshold T(i, j) is computed for each subimage Y_ij of Y:

T(i, j) = m(i, j) + k × s(i, j)   if s(i, j) ≥ S
T(i, j) = Tc                      otherwise

where m(i, j) and s(i, j) are the mean and STD, respectively, for all the pixels in Y_ij, and k, S, and Tc are user-defined parameters. In our implementation, we set n = 15, k = 0, S = 12.7, and Tc = 127. These values were found experimentally. Negative values of k make the resulting binarized image lighter (more white pixels) whereas positive values make it darker (more black pixels). In our implementation, k = 0 was chosen to simplify computation: since k = 0, T(i, j) = m(i, j) when s(i, j) ≥ S.

This modified Niblack method is an attempt to combine the best of global and local methods. Global methods produce less noise in the binarized image, while local methods preserve details in the image that would have otherwise been lost due to small variations in illumination. A fixed threshold Tc is used on subimages that show low STDs in the grayscale values of their pixels, and an adaptive threshold on subimages that exhibit high STDs in those values. Since barcode regions consist of large numbers of alternating black and white lines, subimages with barcodes show high STDs. Such subimages are thresholded with the adaptive threshold. Subimages with constant backgrounds exhibit low STDs and are thresholded with the constant threshold Tc. Figure 2 shows an image (top left) and its three binarized counterparts obtained with three binarization methods.

4.2 Image Gradients

The next step in barcode detection is to compute the image gradients B^x and B^y along the x and y axes. A gradient of a continuous function is its derivative along a particular direction. Since a binarized image B consists of discrete pixels, its gradients are computed through convolution:

B^x = B * G^x
B^y = B * G^y

To obtain gradients along the positive and negative directions of each axis, we set G^x = [-1, 2, -1] and G^y = [-1, 2, -1]^T. As Figure 3a shows, a product package typically consists of four distinct image regions: background, text, graphics, and barcode. Background regions typically exhibit small gradients along both axes whereas text and graphics exhibit large gradients along both axes. Barcodes consist of large numbers of parallel alternating black and
white lines packed in small regions. Thus, barcodes with vertical lines exhibit large gradients along the x axis and small gradients along the y axis. Barcode regions can therefore be characterized as regions with large gradients along one axis and small gradients along the other.

[Fig. 2: Binarization results. (a) original grayscale image; (b) binarization with global threshold; (c) binarization with original Niblack method; (d) binarization with modified Niblack method. Each panel marks the graphics, constant background, text, and barcode regions.]

[Fig. 3: Image gradients along the x and y axes. (a) original binarized image; (b) image gradient along the x axis; (c) image gradient along the y axis.]

To implement this algorithm, the binary image is divided into n × n pixel subimages B_ij. Let P_TL = (x_TL, y_TL) and P_BR = (x_BR, y_BR) be the top left and bottom right points for each subimage B_ij, respectively. The gradients along the x and y axes, B^x_ij and B^y_ij, for B_ij are computed as follows:

B^x_ij = Σ_{y=y_TL..y_BR} Σ_{x=x_TL..x_BR} |F(x, y)|
B^y_ij = Σ_{y=y_TL..y_BR} Σ_{x=x_TL..x_BR} |G(x, y)|

In the above two equations, F(x, y) = 2B(x, y) - B(x - 1, y) - B(x + 1, y) and G(x, y) = 2B(x, y) - B(x, y - 1) - B(x, y + 1). If the subimage B_ij has a constant background, both B^x_ij and B^y_ij are low. If the subimage contains text or graphics, both values are high. However, if the subimage contains a barcode, one of B^x_ij and B^y_ij is high and the other low, depending on the orientation of the barcode lines.

4.3 Support Vector Machine Classification

A Support Vector Machine (SVM) [3], [11] is a linear classifier, which is used to classify linearly separable data. In a typical SVM scenario, given two sets of positive (P)
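The per-subimage gradient sums B^x_ij and B^y_ij defined above can be sketched as follows. This is our own minimal NumPy illustration (names are not from the paper); it applies F and G to interior pixels and sums their absolute values over each n × n subimage:

```python
import numpy as np

def subimage_gradients(B, n=50):
    """Per-subimage gradient sums for a binarized image B:
    F(x, y) = 2B(x,y) - B(x-1,y) - B(x+1,y) along x, and the analogous
    G(x, y) along y, summed in absolute value over each n x n subimage.
    Returns a list of (Bx, By) pairs, one per subimage."""
    B = B.astype(np.int64)
    F = np.zeros_like(B)
    G = np.zeros_like(B)
    # Interior pixels only; border pixels lack one of the two neighbors.
    F[:, 1:-1] = 2 * B[:, 1:-1] - B[:, :-2] - B[:, 2:]
    G[1:-1, :] = 2 * B[1:-1, :] - B[:-2, :] - B[2:, :]
    h, w = B.shape
    sums = []
    for i in range(0, h, n):
        for j in range(0, w, n):
            bx = int(np.abs(F[i:i + n, j:j + n]).sum())
            by = int(np.abs(G[i:i + n, j:j + n]).sum())
            sums.append((bx, by))
    return sums
```

A subimage of vertical barcode stripes yields a large Bx and a near-zero By; a flat background yields both near zero, matching the characterization in the text.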
and negative (N) examples, a line L (a hyperplane in n-dimensional space) can be constructed to separate P from N. If L is defined as wx + b = 0, where w is normal to the hyperplane, then |b|/||w|| is the perpendicular distance from the hyperplane to the origin, and ||w|| is the Euclidean norm of w. Let d+ and d- be the shortest distances from the hyperplane to the closest positive and negative examples. The margin of the hyperplane is defined as d = d+ + d-, and a maximum margin hyperplane is defined as one that maximizes this margin. Points above this hyperplane are classified as elements of P while points below are classified as elements of N. In our case, P is defined as barcode regions (subimages) and N as non-barcode regions, and we classify data points {x_i, y_i}, x_i ∈ B^x, y_i ∈ B^y, as elements of P or N.

For SVM construction, over a hundred images were collected and manually classified into the two classes, P and N. Each image was divided into n × n subimages (n = 50), and B^x_ij and B^y_ij were computed for each subimage. The points {x_i, y_i}, x_i ∈ B^x, y_i ∈ B^y, for all the subimages B_ij within each image were plotted, as shown in Figure 4. The green points were classified as elements of P and red points as elements of N. It should be observed that these points are not linearly separable in the figure. Consequently, it is theoretically impossible to use a single SVM to classify them. However, the plot shows that the two regions (bottom right and top left) contain only green points and, therefore, represent subimages with barcodes. Thus, two SVMs, SVM1 and SVM2 (shown by black dashed lines in the figure), can be constructed to classify them using the maximum margin hyperplane. A data point (subimage) contains a barcode if it lies above SVM1 or below SVM2. Points lying in between are classified as non-barcodes.

[Fig. 4: Barcode detection SVMs.]

An image contains a barcode if it has a sufficiently high number of points (Np ≥ TN) above SVM1 or below SVM2, where TN is a threshold. If SVM1 is defined as y = mx + c, then SVM2 is, by symmetry, defined as y = (x - c)/m. We found that the values of m = 0.5, c = -15, and TN = 5 yielded the maximum barcode detection performance discussed in Section 5.

4.4 Detecting Partial Barcodes

Sometimes barcodes can be detected by the barcode detection module but cannot be decoded, because they are only partially present in the image. Such barcodes are called cropped. Barcodes can usually be decoded if cropped from the top or the bottom, since they are redundant along those directions. However, they cannot be decoded when they are cropped from the left or right. In this case, the user must be asked to move the phone either left or right so that the barcode becomes fully present in a subsequent frame. Toward that end, let (x_TL, y_TL) and (x_BR, y_BR) be the top left and bottom right corners of the subimage B_ij, and let NL, NR, NT, and NB be defined as follows:

NL = NL + 1 if x_TL ≤ XL
NR = NR + 1 if x_BR ≥ XR
NT = NT + 1 if y_TL ≤ YT
NB = NB + 1 if y_BR ≥ YB

XL and XR are two imaginary vertical lines drawn on the left half and right half of the image, respectively. Similarly, YT and YB are two imaginary horizontal lines on the top half and bottom half of the image, respectively. Figure 5 shows such lines. A barcode is assumed to be cropped if any of the counters NL, NR, NT, and NB exceeds the value of a threshold NC. In our implementation, we set NC = 1, XL = 0.1 × w, XR = 0.9 × w, YT = 0.1 × h, and YB = 0.9 × h, where w and h are the width and the height of the input image, respectively.

[Fig. 5: Imaginary boundary lines in partial barcode detection.]

5. Barcode Detection Experiments

Two experiments were performed to evaluate barcode detection performance. The first experiment was a software experiment. The second experiment was performed by three blindfolded sighted people. In the first experiment, a total of 124 images of real products were taken in a real supermarket, out of which 51 images contained a barcode and 73 did
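The two-line decision rule of Section 4.3 can be sketched as follows. This is our own minimal illustration with the paper's reported parameters (m = 0.5, c = -15, TN = 5); since Figure 4's line labels are not reproducible here, the sketch simply treats a point as a barcode region when it falls outside the band between the two dashed lines, i.e., when one gradient sum is high while the other is low:

```python
def is_barcode_point(x, y, m=0.5, c=-15.0):
    """A subimage with gradient sums (x, y) is a barcode region when its
    point lies outside the band between the lines y = m*x + c and
    y = (x - c)/m (one gradient sum high, the other low)."""
    return y < m * x + c or y > (x - c) / m

def image_has_barcode(points, TN=5):
    """Report a barcode when at least TN subimage points fall in the
    barcode regions of the gradient plot."""
    return sum(1 for x, y in points if is_barcode_point(x, y)) >= TN
```

Under this rule, points near the diagonal (text and graphics, both sums high) and points near the origin (constant background, both sums low) are rejected, while points in the bottom-right and top-left corners of the plot are accepted.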
not. These images were manually classified into two sets, images containing a barcode and images without a barcode, and the barcode detection algorithm was run on both sets. In the 51 images with barcodes, barcodes were successfully detected in 50. In the 73 images without barcodes, a barcode was detected in only 1 image and not detected in 72 images.

The second experiment was designed to measure the amount of time it took a person to find a barcode on a real product. Three blindfolded sighted participants were asked to find UPC barcodes on ten products (eight boxes, one can, and one bottle) using the Google Nexus One smartphone. All three participants were graduate students from the Computer Science Department of Utah State University.

The objective of the experiment was explained to each participant. The participant was then blindfolded and trained on using the smartphone with the barcode detection software package installed on it. The training and the actual experiment were conducted on two different sets of products. The participants were taught to align the smartphone with products and keep it aligned with products through interactive camera alignment. They were taught how to move the smartphone away from each of the three product types so that the smartphone was approximately 10 to 15 cm from the product and how to move the phone parallel to product surfaces when looking for UPC barcodes on them. This distance is important because, if the camera is too close to the product, it cannot focus. Similarly, if the camera is too far from the product, the barcode occupies too small an area in the image and may not be detected. The participants were told to stop when they heard the smartphone beep (barcode detected) or to switch to a different side if the current side did not contain a barcode (no beep).

Each participant was given a set of products (different from the test set) to practice on. During this time, we observed how they were using the system, pointed out their mistakes, and showed them several ergonomic heuristics. There was no time limit on how long each participant could practice. When a participant self-reported that he or she was comfortable with the system, the participant was asked to perform the experiment on the product test set. The experiment was video recorded for subsequent quantitative and qualitative analysis.

Table 1 shows the amount of time each participant (Alice, Bob, and Carl, fictional names used for identity protection) took to find barcodes on the ten products. It can be observed that 28 out of 30 barcodes were detected correctly. Alice was unable to detect a barcode on the bottle, and there was one false positive in the case of Bob. The barcode detection mean times for Alice, Bob, and Carl were 31, 32.11, and 26.1 seconds, respectively. The median values for the participants were 20, 33, and 13 seconds, respectively. All three participants spent a lot of time on product 9 (Saltine Crackers), where the barcode was located on the side instead of the bottom.

Table 1: Barcode detection times in seconds (- = not detected, FP = false positive).

            Alice   Bob   Carl
Product 1     14     32     13
Product 2     16     13     34
Product 3     29     45     10
Product 4      -     39     33
Product 5     37     FP      9
Product 6     61     49     18
Product 7     20     13     13
Product 8     15     12      6
Product 9     78     53    115
Product 10     9     33     10

6. Conclusion

An eyes-free barcode detection algorithm was presented for blind and VI smartphone users. The algorithm modifies Niblack's binarization filter and uses SVMs for subimage classification. The algorithm was implemented on the Google Nexus One smartphone with Android 2.3.3 and was evaluated in a software experiment on 124 product images and by three blindfolded sighted individuals who used the smartphone to detect UPC barcodes on 10 grocery products. In the first experiment, our barcode detection software was tested on real product images. In the 51 images with barcodes, barcodes were successfully detected in 50. In the 73 images without barcodes, a barcode was detected in only 1 image and not detected in 72 images.

In the second experiment, our software was tested by three blindfolded sighted individuals who used the smartphone to detect UPC barcodes on real grocery products. The experiment was video recorded for subsequent quantitative and qualitative analysis. Of 30 experimental runs, there were 2 failures. One participant was unable to detect a barcode on the bottle. The other failure was a false positive. All three participants spent a lot of time on a Saltine Crackers box where the barcode was located on the side instead of the bottom.

We acknowledge that the number of participants in our experiment was small. Consequently, our quantitative findings should be interpreted with caution. Nevertheless, our findings suggest that relatively simple vision techniques can be augmented with interactive user feedback loops to improve the quality of captured images. Our application also suggests that self-contained solutions that run only on smartphones may be feasible without consuming any external computing resources.

References

[1] Adelmann, R., Langheinrich, M., Floerkemeier, C. A Toolkit for Bar Code Recognition and Resolving on Camera Phones - Jump Starting the Internet of Things. Workshop on Mobile and Embedded Information Systems (MEIS'06) at Informatik 2006, Dresden, Germany, Oct. 2006.
[2] Food Marketing Institute: Supermarket Facts. http://www.fmi.org/facts_figs/?fuseaction=superfact, retrieved July 14, 2011.
[3] Burges, C.J.C. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, Vol. 2, No. 2, pp. 121-167, Kluwer Academic Publishers, 1998.
[4] Chai, D., Hock, F. Locating and Decoding EAN-13 Barcodes from Images Captured by Digital Cameras. Proceedings of the Fifth International Conference on Information, Communications and Signal Processing, pp. 1595-1599, Dec. 2005.
[5] Coughlan, J., Manduchi, R., Shen, H. Cell Phone-based Wayfinding for the Visually Impaired. Proceedings of the First Int. Workshop on Mobile Vision, Graz, Austria, May 2006.
[6] Niblack, W. An Introduction to Digital Image Processing. Prentice Hall, pp. 115-116.
[7] Sauvola, J., Pietikainen, M. Adaptive Document Image Binarization. Pattern Recognition 33, pp. 225-236, 2000.
[8] He, J., Do, Q. D. M., Downton, A. C., Kim, J. H. A Comparison of Binarization Methods for Historical Archive Documents. Proceedings of the 2005 Eighth International Conference on Document Analysis and Recognition, 29 Aug. - 1 Sept., Vol. 1, pp. 538-542, ISSN: 1520-5263, 2005.
[9] Gallo, O., Manduchi, R. Reading Challenging Barcodes with Cameras. Proceedings of the Workshop on Applications of Computer Vision (WACV), pp. 1-6, Dec. 2009.
[10] Gharpure, C., Kulyukin, V. Robot-assisted Shopping for the Blind: Issues in Spatial Cognition and Product Selection. Intelligent Service Robotics, Vol. 1, pp. 237-251, 2008, http://dx.doi.org/10.1007/s11370-008-0020-9.
[11] Hearst, M., Dumais, S., Osman, E., Platt, J., Scholkopf, B. Support Vector Machines. IEEE Intelligent Systems and their Applications, 13(4), pp. 18-28, 1998.
[12] Janaswami, K. ShopMobile: A Mobile Shopping Aid for Visually Impaired Individuals. M.S. Report, Department of Computer Science, Utah State University, Logan, UT, 2010.
[13] Kulyukin, V., Gharpure, C., Pentico, C. Robots as Interfaces to Haptic and Locomotor Spaces. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pp. 325-331, ACM, New York, NY, USA, 2007, http://doi.acm.org/10.1145/1228716.1228760.
[14] Kulyukin, V., Kutiyanawala, A. Accessible Shopping Systems for Blind and Visually Impaired Individuals: Design Requirements and the State of the Art. The Open Rehabilitation Journal 2, pp. 158-168, 2010, http://dx.doi.org/10.2174/1874943701003010158.
[15] Kulyukin, V., Kutiyanawala, A. Eyes-free Barcode Localization and Decoding for Visually Impaired Mobile Phone Users. Proceedings of the 2010 International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV 2010), pp. 130-135, CSREA Press, 2010.
[16] Kulyukin, V., Nicholson, J., Coster, D. ShopTalk: Toward Independent Shopping by People with Visual Impairments. Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2008), pp. 241-242, ACM, New York, NY, USA, 2008, http://doi.acm.org/10.1145/1414471.1414518.
[17] Nicholson, J., Kulyukin, V., Coster, D. ShopTalk: Independent Blind Shopping Through Verbal Route Directions and Barcode Scans. The Open Rehabilitation Journal, ISSN: 1874-9437, Vol. 2, pp. 11-22, DOI 10.2174/1874943700902010011, 2009.
[18] Kulyukin, V., Gharpure, C. Ergonomics-for-One in a Robotic Shopping Cart for the Blind. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pp. 142-149, ACM, New York, NY, USA, 2006, http://doi.acm.org/10.1145/1121241.1121267.
[19] McCune, J., Perrig, A., Reiter, M. Seeing-is-Believing: Using Camera Phones for Human-Verifiable Authentication. IEEE Symposium on Security and Privacy, pp. 110-124, May 2005.
[20] Merler, M., Galleguillos, C., Belongie, S. Recognizing Groceries in Situ Using in Vitro Training Data. SLAM, Minneapolis, MN, 2007.
[21] Nicholson, J., Kulyukin, V. ShopTalk: Independent Blind Shopping = Verbal Route Directions + Barcode Scans. Proceedings of the 30th Annual Conference of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA 2007), June 2007, Phoenix, Arizona. Available online and on CD-ROM.
[22] Normand, N., Viard-Gaudin, C. A Two-Dimensional Bar Code Reader. Pattern Recognition, Vol. 3, pp. 201-203, 1994.
[23] Ohbuchi, E., Hanaizumi, H., Hock, L. Barcode Readers Using the Camera Device in Mobile Phones. International Conference on Cyberworlds, pp. 260-265, Nov. 2004.
[24] RedLaser. http://redlaser.com/, retrieved May 19, 2011.
[25] Rohs, M. Real-World Interaction with Camera-Phones. 2nd International Symposium on Ubiquitous Computing Systems, pp. 74-89, Springer, 2004.
[26] World Health Organization. http://www.who.int, 2011.
[27] Winlock, T., Christiansen, E., Belongie, S. Toward Real-Time Grocery Detection for the Visually Impaired. Computer Vision Applications for the Visually Impaired (CVAVI), San Francisco, CA, June 2010.
[28] ZXing. http://code.google.com/p/zxing/, retrieved May 19, 2011.
[29] Viard-Gaudin, C., Normand, N., Barba, D. Algorithm Using a Two-Dimensional Approach. Proceedings of the Second Int. Conf. on Document Analysis and Recognition, No. 20-22, pp. 45-48, October 1993.
[30] Kutiyanawala, A., Kulyukin, V. An Eyes-Free Vision-Based UPC and MSI Barcode Localization and Decoding Algorithm for Mobile Phones. Proceedings of Envision 2010, San Antonio, Texas.
[31] Ando, S., Hontani, H. Automatic Visual Searching and Reading of Barcodes in 3-D Scene. Proceedings of the IEEE Int. Conf. on Vehicle Electronics, Vol. 25-28, pp. 49-54, September 2001.
[32] Ando, S. Image Field Categorization and Edge/Corner Detection from Gradient Covariance. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 2, pp. 179-190, February 2000.
[33] Ando, S. Consistent Gradient Operators. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 3, pp. 252-265, March 2000.
[34] Muniz, R., Junco, L., Otero, A. A Robust Software Barcode Reader Using the Hough Transform. Proceedings of the International Conference on Information Intelligence and Systems, No. 31, pp. 313-319, November 1999.
[35] Arnould, S., Awcock, G. J., Thomas, R. Remote Bar-code Localization Using Mathematical Morphology. Image Processing and its Applications, Vol. 2, No. 465, pp. 642-646, 1999.
[36] The Zebra Crossing Barcode Decoding Library. http://code.google.com/p/zxing/.
[37] Occipital, LLC. RedLaser. http://redlaser.com/.
[38] Tekin, E., Coughlan, J.M. An Algorithm Enabling Blind Users to Find and Read Barcodes. WACV09, 2009.
[39] Kutiyanawala, A., Qi, X., Tian, J. A Simple and Efficient Approach to Barcode Localization. 7th International Conference on Information, Communication and Signal Processing, Macau, 2009.