  • Good afternoon, everyone. My name is Howard Zhou and I am a graduate student from the School of Interactive Computing, Georgia Tech. The work I am going to present here is joint work between Georgia Tech, Intel Research Pittsburgh, and the Department of Dermatology, University of Pittsburgh.
  • According to the latest statistics available from the National Cancer Institute, skin cancer is the most common of all cancers in the United States. More than 1 million cases of skin cancer are diagnosed in the US each year. Shown here are some examples of skin lesion images. The four images on the left are various forms of skin lesions, cancerous or non-cancerous. The two on the right are a specific form of skin cancer: melanoma.
  • Although melanomas represent only 4 percent of all skin cancers in the US, melanoma is the leading cause of skin cancer mortality, accounting for more than 75 percent of all skin cancer deaths.
  • The timeline shown here is the 10-year survival rate of melanoma. If caught in its early stage, as seen here, melanoma can often be cured with a simple excision, so the patient has a high chance of recovery. Hence, early detection of malignant melanoma significantly reduces mortality.
  • Dermoscopy is a noninvasive imaging technique, and it is just the right technique for this task. It has been shown to be effective for early detection of melanoma. The procedure involves using an incident-light magnification system, i.e., a dermatoscope, to examine skin lesions. Oil is often applied at the skin-microscope interface. This allows the incident light to penetrate the top layer of the skin tissue and reveal pigmented structures beyond what would be visible to the naked eye.
  • For dermatologists who have become experienced with dermoscopy, it has been shown to improve diagnostic accuracy by as much as 30% over clinical examination. However, it may require as much as five years of experience to acquire the necessary training. This is the motivation for computer-aided diagnosis in this area. In recent years, there has been increasing interest in computer-aided diagnosis of pigmented skin lesions from these dermoscopy images. In the future, with the development of new algorithms and techniques, these computer procedures may aid dermatologists and bring about medical breakthroughs in early detection of melanoma.
  • Computer-aided analysis often starts with segmenting the pigmented lesion from the surrounding skin. This step not only provides a basis for calculating important clinical features such as lesion size and border irregularity, but the resulting border is also crucial for the extraction of discriminating dermoscopic features such as atypical pigment networks and radial streaming. Over the years, researchers have developed many automated segmentation algorithms, most with roots in image processing and computer vision: PDE approaches [2] (B. Erkol, R.H. Moss, R.J. Stanley, W.V. Stoecker, and E. Hvatum, "Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes," Skin Research and Technology, vol. 11, pp. 17–26, 2005), histogram thresholding [3] (M. Hintz-Madsen, L.K. Hansen, J. Larsen, and K. Drzewiecki, "A probabilistic neural network framework for detection of malignant melanoma," in Artificial Neural Networks in Cancer Diagnosis, Prognosis and Patient Management, pp. 141–183, 2001), clustering [4] (P. Schmid, "Segmentation of digitized dermatoscopic images by two-dimensional color clustering," IEEE Trans. Medical Imaging, vol. 18, pp. 164–171, 1999; R. Melli, C. Grana, and R. Cucchiara, "Comparison of color clustering algorithms for segmentation of dermatological images," in SPIE Medical Imaging, 2006), and statistical region merging [5] (M.E. Celebi, H.A. Kingravi, H. Iyatomi, J. Lee, Y.A. Aslandogan, W.V. Stoecker, R. Moss, J.M. Malters, and A.A. Marghoob, "Fast and accurate border detection in dermoscopy images using statistical region merging," in SPIE Medical Imaging, 2007). Researchers have also explored different color spaces, e.g., RGB, HSI, CIELUV [4], and CIELAB, to improve segmentation performance. However, due to the large variability in lesion properties and skin conditions, along with the presence of artifacts such as hair and air bubbles, automated segmentation of pigmented lesions in dermoscopy images remains a challenge.
  • Because we are working on dermoscopy images instead of general ones, many algorithms exploit domain-specific constraints of pigmented skin lesions. Some algorithms enforce explicit spatial constraints to simplify the figure/ground label assignment, such as the work of Melli et al. [6] and Celebi et al. [5]. Other methods, such as the meanshift algorithm, implicitly enforce local neighborhood constraints on image Cartesian coordinates during pixel grouping.
  • In our work, we explore spatial constraints that arise from the growth pattern of pigmented skin lesions.
  • More specifically, the kind of radiating appearance exhibited on the skin surface.
  • We show that by embedding such constraints as polar coordinates into the pixel grouping stage, we can considerably improve segmentation performance. As seen in this side-by-side comparison of pixel grouping, embedding the radiating constraint made the grouping less blocky than that of the meanshift algorithm.
  • And the final lesion boundary improves accordingly: it is less prone to error in the figure/ground assignment and has less localization error at the boundary.
  • When compared to the lesion boundaries manually outlined by dermatologists, our method produces computer-generated boundaries that better match the experts' outlines.
  • So what gives pigmented skin lesions such a common constraint, this radiating appearance that we can take advantage of? We will take a look at the reason, which is "more than skin deep" (pun intended).
  • Melanoma arises from pigmented skin cells (melanocytes) and is commonly identified with two growth phases, radial and vertical. Since skin absorbs and scatters light, the appearance of these pigmented cells varies with depth. As a result, they appear dark brown within the epidermis, tan at or near the dermoepidermal junction, and blue-gray in the dermis. This results in a common ringing effect (or radiating appearance pattern) seen on the skin surface.
  • Much like tree rings, although far less regular, this radiating pattern seen from the skin surface dictates that, in general, the difference in lesion appearance is more significant along the radial direction from the lesion center than in any other direction.
  • As shown here, notice that the skin patch in the cyan square bears more resemblance to the patch in the red square than to the yellow one, even though the yellow square is much closer to the cyan square in Euclidean distance. In the pixel grouping stage of algorithms like meanshift, pixels are grouped not only according to their appearance, such as R, G, B in color space, but also with respect to their location measured in Euclidean distance. However, because of the radiating appearance exhibited in dermoscopy images, a direct embedding of Cartesian coordinates may not be optimal. For example, in Cartesian coordinates, the red square in Fig. 1(b) is farther away from the cyan square than the yellow one. Consequently, it is less likely to be grouped together with the cyan square, even though their underlying pigmented cells are probably in the same growth period. Given that dermatologists tend to put the lesion near the center of the image frame while acquiring dermoscopy images, we can better capture the radial growth pattern by replacing the x, y coordinates with a polar radius r measured from the image center.
  • Hence, every pixel becomes a feature vector in R4, with three appearance components from the color space and one spatial component, the polar radius measured from the center of the image.
  • We then group these feature vectors according to their distance in the feature space (in this case R4). After replacing each pixel with its respective cluster center, we obtain a filtered version of the dermoscopy image (on the right), a more compact representation.
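For illustration, here is a minimal sketch of this feature embedding and filtering step in Python. The library choices (scikit-image, scikit-learn), the function name, and the default weight value w are assumptions made for the sketch, not the authors' implementation.

```python
# Minimal sketch of the polar-radius feature embedding and the pixel-grouping
# (filtering) step described above. Library choices and the weight w are
# illustrative assumptions, not the authors' code.
import numpy as np
from skimage.color import rgb2lab
from skimage.util import img_as_float
from sklearn.cluster import KMeans

def filter_dermoscopy_image(rgb, k=30, w=0.02):
    """Cluster pixels as points in R^4 = {L*, a*, b*, w * polar radius}
    and replace each pixel with its cluster's mean color."""
    lab = rgb2lab(img_as_float(rgb))          # CIELAB is more perceptually uniform
    h, width = lab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:width]
    cy, cx = (h - 1) / 2.0, (width - 1) / 2.0
    radius = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)   # polar radius from image center
    feats = np.column_stack([lab.reshape(-1, 3), (w * radius).reshape(-1, 1)])
    km = KMeans(n_clusters=k, init="k-means++", n_init=5).fit(feats)
    filtered = km.cluster_centers_[km.labels_][:, :3].reshape(h, width, 3)
    return filtered, km.labels_.reshape(h, width), km.cluster_centers_
```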
  • In order to demonstrate that the radiating pattern is common among dermoscopy images but not in general image datasets, we performed a small experiment. On the left, we have a total of 216 dermoscopy images, and on the right, 300 natural images from the Berkeley segmentation dataset. What we want to show is that the radiating pattern is common among the dermoscopy images, but natural images do not exhibit this pattern.
  • What we do here is filter each image using the k-means algorithm, taking the Cartesian (x, y) coordinates and the polar radius as additional components of the feature vector, respectively. The mean per-pixel residue, i.e., the average per-pixel color difference of each original-filtered pair, is then calculated.
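The residue itself reduces to a short computation; a minimal sketch, assuming the original and filtered images are given as same-shaped H x W x 3 color arrays (the function name is illustrative):

```python
import numpy as np

def mean_per_pixel_residue(original, filtered):
    """Average per-pixel color difference between an original image and its
    cluster-filtered version (both H x W x 3 arrays in the same color space)."""
    return float(np.mean(np.linalg.norm(original - filtered, axis=-1)))
```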
  • We use the k-means++ algorithm with k = 30 for this clustering task. The graphs here compare the residues: the x axis shows the residue when the polar radius is used and the y axis shows the Cartesian case. If a test image lies above the diagonal line, its polar residue is smaller than its Cartesian residue, and vice versa. For the dermoscopy images, the fact that almost all points lie above the equal-residue line indicates that embedding polar coordinates as spatial constraints reduces clustering error. The effect is much less significant for the natural images, which are spread much more evenly about the diagonal line.
  • Quantitatively, for the dermoscopy image dataset, the mean residue is reduced by 18% by switching from Cartesian coordinates to the polar radius. In contrast, the difference is much smaller for the natural images in BSD.
  • If we take a look at the 30 clusters generated during this comparison, we can see that regions appear less blocky (and more intuitive) when the radiating constraint is embedded as the polar radius.
  • This in turn results in a better final segmentation.
  • The result of our verification is consistent with the comparison against the meanshift algorithm, where spatial constraints are also enforced in Cartesian space.
  • The regions appear more blocky when the clustering is performed in Cartesian space. This unnatural appearance is not present when the polar radius is used.
  • With this observation, we explicitly enforce the radiating constraint by embedding the polar radius into the feature vectors. This gives us a novel unsupervised algorithm for segmentation of pigmented skin lesions.
  • The first clustering stage serves two purposes. First, it removes small variations in appearance due to noise. Second, it groups pixels into homogeneous regions; the color and location values of each pixel are replaced with the average values of the region to which they belong. The regions resulting from this operation give us a more compact representation of the original image. We use the k-means++ algorithm [7], a variation of the k-means algorithm that improves both speed and accuracy through a randomized seeding technique. The input to k-means++ is all the image pixels represented as points in R4, where the coordinates of each point encode the color and location of the corresponding pixel. We convert the pixels from RGB to L*a*b* values because the CIELAB space is more perceptually uniform. The fourth coordinate is the polar radius measured from the image center, encoding the location of each pixel. We normalize this coordinate with a constant w to make the polar radius commensurate with the L*a*b* color values. We chose the number of clusters k as 30 so that the clusters can represent the image compactly without incurring large residue errors [6]. This first round of clustering serves the same noise-reducing purpose as the median filtering used in some previous techniques, but reduces the boundary localization errors introduced by the smoothing process.
  • After the first stage of clustering, the mean values of the clusters are fed into the next round of k-means++ clustering to produce super-regions, as defined previously. We choose the number of clusters k to satisfy two requirements. First, it must account for intra-skin and intra-lesion variations. For example, the appearance of the lesion in Fig. 4(a) varies significantly across the lesion, and an attempt to produce a single lesion cluster is unlikely to succeed. Second, we want to avoid a value so large that the subsequent combinatoric region merge (see next section) becomes intractable. Based on our experiments and previous studies [6], we set k to 6, which produces satisfactory results. As shown in Figs. 4(d), 4(h), 4(f), and 4(j), the super-regions do correspond to meaningful regions such as skin, skin-lesion transition, and inner lesion.
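A minimal sketch of this second clustering stage, assuming the first-stage cluster centers and per-pixel labels from the earlier sketch (names and library choice are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans

def super_regions(cluster_centers, pixel_labels, k=6):
    """Re-cluster the 30 first-stage cluster centers (points in R^4) into
    k super-regions, then map every pixel to its super-region index.
    cluster_centers: (30, 4) array; pixel_labels: (H, W) first-stage labels."""
    km = KMeans(n_clusters=k, init="k-means++", n_init=10).fit(cluster_centers)
    return km.labels_[pixel_labels]   # (H, W) super-region map
```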
  • After the super-regions are identified, we apply a region merge procedure to produce a plausible lesion segmentation. However, for many cases merging based on color cues alone is insufficient. For instance, on severely sun-damaged skin (Fig. 4(i)), texture variations are often more informative than color. Moreover, many lesions exhibit texture variations at boundaries in addition to color variations. For these cases, incorporating texture information can improve segmentation performance.
  • We uniformly sample the original values of the pixels within each super-region and compute an Earth Mover's Distance (EMD) between every pair of super-regions. The skin-lesion boundary is the curve that separates the set of super-regions inside the lesion from the rest. The optimal boundary is found by minimizing the integrated color-texture measure, where Si and Sj (i, j ∈ {1, 2, . . . , 6}) are a pair of super-regions, Ci,j is the EMD between them, T is the normalized texture gradient on the lesion boundary, and lambda is a constant that can be adjusted to put emphasis on either color or texture.
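The exact form of the integrated color-texture measure is not reproduced in these notes. The sketch below only illustrates the combinatoric search it implies: enumerate figure/ground assignments of the six super-regions and score each candidate boundary with a lambda-weighted combination of the pairwise color EMDs across the split and the texture gradient along the boundary. The function name, the scoring convention (larger combined boundary evidence wins), and the boundary-mask construction are assumptions.

```python
import itertools
import numpy as np

def find_lesion_regions(C, region_map, texture_map, lam=0.5, n_regions=6):
    """C: (n, n) EMD between super-regions; region_map: (H, W) super-region
    index per pixel; texture_map: (H, W) texture-gradient pseudo-likelihood."""
    best_inside, best_score = None, -np.inf
    for r in range(1, n_regions):                       # non-empty proper subsets
        for inside in itertools.combinations(range(n_regions), r):
            outside = [j for j in range(n_regions) if j not in inside]
            # color evidence: average EMD across the candidate skin/lesion split
            color = np.mean([C[i, j] for i in inside for j in outside])
            # texture evidence: mean texture gradient along the candidate boundary
            mask = np.isin(region_map, inside)
            edges = (mask != np.roll(mask, 1, axis=0)) | (mask != np.roll(mask, 1, axis=1))
            texture = texture_map[edges].mean() if edges.any() else 0.0
            score = lam * color + (1.0 - lam) * texture
            if score > best_score:
                best_inside, best_score = set(inside), score
    return best_inside                                   # super-regions labeled "lesion"
```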
  • We had our collaborating dermatologist, Dr. Ferris, manually outline the lesions in 67 dermoscopy images, and treat these outlines as ground truth. We compare them to the automatically generated borders using the grading system in [5]. The border error is given by error = Area(computer XOR ground-truth) / Area(ground-truth) × 100%, where computer is the binary image obtained by filling in the automatically detected border and ground-truth is obtained by filling in the boundaries outlined by our dermatologist.
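This error measure translates directly into code; a minimal sketch, assuming the computer and ground-truth borders have already been filled into binary masks:

```python
import numpy as np

def border_error(computer_mask, ground_truth_mask):
    """XOR-based border error in percent, as defined above."""
    xor = np.logical_xor(computer_mask, ground_truth_mask)
    return 100.0 * xor.sum() / ground_truth_mask.sum()
```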
  • To account for inter-operator variation, we also asked Dr. Alex Zhang to manually outline boundaries on the same dataset
  • Fig. 3 shows a performance comparison using this error measure. We can see that while both colorspace conversion (from RGB to L*a*b* ) and color-texture integration improved segmentation accuracy, the biggest boost came from embedding the spatial constraints during clustering.
  • Typical segmentation results are shown in Figs. 4(d) - 4(k). The first image of each pair shows the super-regions while the second one shows the lesion. Both the computer-generated borders (in blue) and the dermatologist's (in white) are overlaid on top of the original images for comparison.
  • Future work: the exemplar-based inpainting procedure is much more time-consuming (around 150 seconds on an average 600x450 image with hair) compared to other inpainting methods (under 1 sec). This makes our system less suitable for interactive applications. In the future, we will look into how to speed up the patch-searching step. Currently, we only detect "darker" hair. However, there are many other possible hair colors (white, blond), for which curvilinearity is the only safe assumption for hair detection; we plan to look into detecting these cases. Besides hair and ruler markings, air bubbles inevitably appear in many dermoscopy images. Our framework may be extended for air bubble removal.
  • In order to measure the amount of texture variation across regions, we apply the texture gradient filter (TG) [8] to the original dermoscopy images. The resulting images are pseudo-likelihood maps, which encode how likely an edge caused by texture variation is present at a given location.

Presentation Transcript

  • Spatially Constrained Segmentation of Dermoscopy Images Howard Zhou 1 , Mei Chen 2 , Le Zou 2 , Richard Gass 2 , Laura Ferris 3 , Laura Drogowski 3 , James M. Rehg 1
      • 1 School of Interactive Computing, Georgia Tech
      • 2 Intel Research Pittsburgh
      • 3 Department of Dermatology, University of Pittsburgh
  • Skin cancer and melanoma
    • Skin cancer : most common of all cancers
    [ Image courtesy of “An Atlas of Surface Microscopy of Pigmented Skin Lesions: Dermoscopy” ]
  • Skin cancer and melanoma
    • Skin cancer : most common of all cancers
    • Melanoma : leading cause of mortality (75%)
    [ Image courtesy of “An Atlas of Surface Microscopy of Pigmented Skin Lesions: Dermoscopy” ]
  • Skin cancer and melanoma
    • Skin cancer : most common of all cancers
    • Melanoma : leading cause of mortality (75%)
    • Early detection significantly reduces mortality
    [ Image courtesy of “An Atlas of Surface Microscopy of Pigmented Skin Lesions: Dermoscopy” ]
  • Clinical View [ Image courtesy of “An Atlas of Surface Microscopy of Pigmented Skin Lesions: Dermoscopy” ] Dermoscopy view
  • Dermoscopy
    • Improves diagnostic accuracy by 30% in the hands of trained physicians
    • May require as much as 5 years of experience to acquire the necessary training
    • Motivation for Computer-aided diagnosis (CAD) in this area
    Clinical view Dermoscopy view
  • First step of analysis: Segmentation
    • Separating lesions from surrounding skin
    • Resulting border
      • Gives lesion size and border irregularity
      • Crucial to the extraction of dermoscopic features for diagnosis
    • Previous Work :
      • PDE approach – Erkol et al . 2005, …
      • Histogram thresholding – Hintz-Madsen et al . 2001, …
      • Clustering – Schmid 1999, Melli et al . 2006…
      • Statistical region merging – Celebi et al . 2007, …
  • Domain specific constraints
    • Spatial constraints
      • Four corners are skin (Melli et al. 2006, Celebi et al. 2007)
      • Implicitly enforcing local neighborhood constraints on image Cartesian coordinates (Meanshift)
  • Domain specific constraints
    • Spatial constraints
      • Four corners are skin (Melli et al. 2006, Celebi et al. 2007)
      • Implicitly enforcing local neighborhood constraints on image Cartesian coordinates (Meanshift)
    Meanshift (c = 32, s = 8)
  • We explore …
    • Spatial constraints arise from the growth pattern of pigmented skin lesions
    Meanshift (c = 32, s = 8)
  • We explore …
    • Spatial constraints arise from the growth pattern of pigmented skin lesions – radiating pattern
    Meanshift (c = 32, s = 8)
  • Embedding constraints
    • Radiating pattern from lesion growth
    • Embedding constraints as polar coords improves segmentation performance
    Polar (k = 6) Meanshift (c = 32, s = 8)
  • Embedding constraints
    • Radiating pattern from lesion growth
    • Embedding constraints as polar coords improves segmentation performance
    Polar (k = 6) Meanshift Polar
  • Comparison to the Doctors
    • Radiating pattern from lesion growth
    • Embedding constraints as polar coords improves segmentation performance
    Meanshift Polar White: Dr. Ferris, Red: Dr. Zhang, Blue: computer
  • Dermoscopy images Common radiating appearance
  • Growth pattern of pigmented skin lesions
    • lesions grow in both radial and vertical direction
    • Skin absorbs and scatters light.
    • Appearance of pigmented cells varies with depth
      • Dark brown → tan → blue-gray
    • Common radiating appearance pattern on skin surface
    [ Image courtesy of “Dermoscopy: An Atlas of Surface Microscopy of Pigmented Skin Lesions” ]
  • Radiating growth pattern on skin surface
    • Difference in appearance: more significant along the radial direction than any other direction.
  • Radiating growth pattern on skin surface
    • Difference in appearance: more significant along the radial direction than any other direction.
    • Each pixel → feature vector in R4
      • 3D: R,G,B or L, a, b in the color space
      • 1D: polar radius measured from the center of the image (normalized by w)
    Embedding spatial constraints Feature vectors original r {R, G, B}
    • Each pixel → feature vector in R4
    • Clustering pixels in the feature space
    • Replace pixels with mean for compact representation
    Embedding spatial constraints Grouping features filtered original r {R, G, B}
  • Radiating pattern Dermoscopy vs. natural images
    • Derm dataset (216)
    BSD dataset (300) … …
    • Mean per-pixel residue: average per-pixel color difference of each pair
    Embedding spatial constraints Grouping features original {Ro, Go, Bo} polar {Rp, Gp, Bp} Cartesian {Rc, Gc, Bc}
  • Dermoscopy vs. natural images: Polar vs. Cartesian
    • Mean per-pixel residue (k-means++, k = 30)
    [Scatter plots: residue (polar) vs. residue (Cartesian) for the Derm dataset (216) and the BSD dataset (300)]
  • Dermoscopy vs. natural images: Polar vs. Cartesian
    • Mean per-pixel residue (k-means++, k = 30)
  • Polar vs. Cartesian
    • The regions appear more blocky in the Cartesian case
    Polar (k = 30) Cartesian (k = 30)
  • Six super-regions
    • 30 clusters → 6 super-clusters (k-means++)
    Polar (k = 6) Cartesian (k = 6)
  • Final segmentation Polar Cartesian
  • Polar vs. Meanshift
    • The regions appear more blocky in the Meanshift case
    Polar (k = 6) Meanshift (c = 32, s = 8)
  • Final segmentation Polar Meanshift
    • Given a dermoscopy image
    Algorithm overview
    • Given a dermoscopy image
    Algorithm overview original
    • 1. First round clustering: K-means++ (k = 30)
    Algorithm overview original 30 clusters
    • 2. Second round: clusters (30) → super-regions (6)
    Algorithm overview original 30 clusters 6 Super-regions
    • 3. Apply texture gradient filter (Martin, et al. 2004)
    Algorithm overview original 30 clusters 6 Super-regions Texture edge map
    • 4. Find optimal boundary (color+texture)
    Algorithm overview original 30 clusters 6 Super-regions Texture edge map Final segmentation
    • First round clustering: K-means++ (k = 30)
      • Reduce noise
      • Groups pixels into homogenous regions – a more compact representation of the image
      • Arthur and Vassilvitskii, 2007
    • R4: {L*a*b* (3D), w * polar radius (1D)}
    1. First round clustering original
    • First round clustering: K-means++ (k = 30)
      • Reduce noise
      • Groups pixels into homogenous regions – a more compact representation of the image
      • Arthur and Vassilvitskii, 2007
    • R4: {L*a*b* (3D), w * polar radius (1D)}
    1. First round clustering original 30 clusters
    • k = 6: clusters (30) → super-regions (6)
      • Account for intra-skin and intra-lesion variations
      • Avoid a large k
    • Super-regions correspond to meaningful regions such as skin, skin-lesion transition, and inner lesion, etc.
    2. Second round clustering original 30 clusters
    • k = 6: clusters (30) → super-regions (6)
      • Account for intra-skin and intra-lesion variations
      • Avoid a large k
    • Super-regions correspond to meaningful regions such as skin, skin-lesion transition, and inner lesion, etc.
    2. Second round clustering original 30 clusters 6 super-regions
  • 3. Color-texture integration
    • Incorporating texture information can improve segmentation performance.
      • Severely sun damaged skin; texture variations at boundaries in addition to color variations
    original
  • 3. Color-texture integration
    • Incorporating texture information can improve segmentation performance.
      • Severely sun damaged skin; texture variations at boundaries in addition to color variations
    • Apply texture gradient filter (Martin et al. 2004)
    original
  • 3. Color-texture integration
    • Incorporating texture information can improve segmentation performance.
      • Severely sun damaged skin; texture variations at boundaries in addition to color variations
    • Apply texture gradient filter (Martin et al. 2004)
    • Texture edge map: pseudo-likelihood
    original Texture edge map
    • Optimal skin-lesion boundary
      • Color: Earth Mover’s Distance (EMD) between every pair of super-regions
    4. Optimal boundary 6 super-regions
    • Optimal skin-lesion boundary
      • Color: Earth Mover’s Distance (EMD) between every pair of super-regions
      • Texture: Texture edge map
    4. Optimal boundary Texture edge map 6 super-regions
    • Optimal skin-lesion boundary
      • Color: Earth Mover’s Distance (EMD) between every pair of super-regions
      • Texture: Texture edge map
      • Minimizing the integrated color-texture measure
    4. Optimal boundary Texture edge map 6 super-regions
    • Our collaborating dermatologist Dr. Ferris manually outlined the lesions in 67 dermoscopy images
    • The border error is given by error = Area(computer XOR ground-truth) / Area(ground-truth) × 100%
    • Computer: binary image obtained by filling in the automatically detected border
    • Ground-truth: obtained by filling in the boundaries outlined by Dr. Ferris
    Validation and results
  • Typical segmentation result (Error = 12.96%). White: Dr. Ferris, Red: Dr. Zhang, Blue: computer
  • Comparison To account for inter-operator variation, we also asked Dr. Alex Zhang to manually outline boundaries on the same dataset
  • Additional results (Error = 5.80%). White: Dr. Ferris, Red: Dr. Zhang, Blue: computer
  • Additional results (Error = 13.61%). White: Dr. Ferris, Red: Dr. Zhang, Blue: computer
  • Additional results (Error = 16.60%). White: Dr. Ferris, Red: Dr. Zhang, Blue: computer
  • Additional results (Error = 34.09%). White: Dr. Ferris, Red: Dr. Zhang, Blue: computer
  • Limitation
    • The assumption that lesions appear relatively near the center may not hold
    • The fairly low number of super-regions (6) may limit the algorithm's performance on lesions with more colors
  • Conclusion
    • Growth pattern of pigmented skin lesions can be used to improve lesion segmentation accuracy in dermoscopy images.
    • An unsupervised segmentation algorithm incorporating these spatial constraints
    • We demonstrate its efficacy by comparing the segmentation results to ground-truth segmentations determined by an expert.
  • Future work
    • Extend to meanshift?
  • Comparison to other methods
  • Color and texture cue integration
    • Apply texture gradient filter (Martin et al. 2004)
    • Pseudo-likelihood map: how likely an edge caused by texture variation is present at a certain location