Image color correction and contrast enhancement

histogram equalization, gamma correction, bilateral filter, nonlocal means, bm3d, unsharp masking, stretching, Retinex model, automatic color enhancement, multiscale decomposition based detail manipulation, illumination, reflectance, albedo, dehazing, defoggy, DCT scaling, alpha rooting, tone mapping, inverse tone mapping, HDR, LDR, deblur, matlab, opencv.


Transcript

  • 1. Image Color Correction and Contrast Enhancement Yu Huang Sunnyvale, California yu.huang07@gmail.com
  • 2. Outline  Image enhancement  Histogram equalization (HE)  CLAHE (Contrast Limited Adaptive HE)  Partitioned HE  HE with mean brightness preservation  Gamma correction (GC)  GC with nonlinear masking  Denoising:  Bilateral, Anisotropic diffusion;  Nonlocal Means (NLM), BM3D;  Joint Deblurring and Denoising;  Histogram-based stretching  Auto-color equalizat. (ACE)  Scaling the DCT coefficients  DCT histogram shifting and alpha rooting  Unsharp masking  Multi-scale decomposition-based detail enhancement  High dynamic range (HDR) images by tone mapping  Inverse tone mapping: from LDR to HDR;  App. A: Shad.+Reflect. Decomp.  App. B: Radiance+Airlight Decomp.  App. C: Learning-based Enhancing
  • 3. Image Enhancement  Spatial domain  Global  Gamma correction  Histogram equalization  Stretching in the specified interval  Local  Gamma correction with nonlinear masking  Contrast limited adaptive histogram equalization  Edge-based sharpening: unsharp masking  Denoising: bilateral filtering  Frequency domain  DCT domain: scaling, alpha-rooting;  Homomorphic filter: image decomposition.  Retinex model: color constancy  Separation of the luminance component from the reflectance component;  Emphasis of the reflectance component;  Recombine luminance and reflectance.  Tone mapping: between HDR images and LDR images
  • 4. Histogram Equalization  Useful when the usable data of the image is concentrated in close contrast values;  Spread out the most frequent intensity values over the full range.
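A minimal NumPy sketch of the CDF-based mapping described above; it assumes an 8-bit grayscale array that is not constant (cv2.equalizeHist performs the same operation for 8-bit single-channel images):

    import numpy as np

    def equalize_hist(gray):
        # Global histogram equalization for an 8-bit grayscale image:
        # map intensities through the normalized cumulative histogram (CDF)
        # so that the most frequent values are spread over the full range.
        hist = np.bincount(gray.ravel(), minlength=256)
        cdf = hist.cumsum().astype(np.float64)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
        lut = np.round(255 * cdf).astype(np.uint8)         # per-intensity mapping
        return lut[gray]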
  • 5. CLAHE (Contrast Limited Adaptive HE)  Contrast limiting applied for each neighborhood, from which a transformation function is derived:  Proportional to the cumulative distribution function (CDF) of pixel values;  Contrast limited: clipping the histogram at a predefined value before computing the CDF, and redistributing the clipped excess over the bins;  Efficient computation by interpolation: the image is partitioned into equally sized rectangular tiles.
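A short OpenCV (Python) example of CLAHE applied to the luminance channel of a color image; the file names, clip limit and tile grid size are illustrative values, not prescriptions:

    import cv2

    bgr = cv2.imread("input.jpg")                          # placeholder path
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)

    # The histogram is clipped per tile before the CDF is computed, then the
    # per-tile mappings are blended by interpolation.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)

    out = cv2.cvtColor(cv2.merge([l_eq, a, b]), cv2.COLOR_LAB2BGR)
    cv2.imwrite("clahe_result.jpg", out)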
  • 6. CLAHE Results
  • 7. Partitioned HE  Static partitioned HE: still use the original dynamic range  Brightness preserving bi-HE (BBHE) [Kim’97]: divide histogram based on mean brightness and then HE for each one;  Dualistic sub-image HE (DSIHE) [Wan’99]: median as the separation point;  Minimum mean brightness error bi-HE (MMBEBHE) [Chen’03]: separation point based on minimum mean brightness error;  Recursive mean-separate HE (RMSHE) [Chen’03]: recursively split into multiple sub-histograms (initially from two), based on mean;  Recursive sub-image HE (RSIHE) [Sim’07]: recursively split histogram into more sub-histograms, based on median;  Bi-HE with plateau limit [Ooi’09]: clipping based on the average number of intensity occurrences.  Dynamic partitioned HE: employ the enhanced dynamic range  Dynamic HE [Wadud’07]: partition histogram based on local minima, and new dynamic range based on pixel number;  Brightness preserving dynamic HE (BPDHE) [Ibrahim’07]: partition with local maxima, brightness normalization after HE.
  • 8. Comparison of HE, BBHE & RMSHE Original HE BBHE RMSHE
  • 9. Histogram Modification with Mean-Brightness Preservation  Weight and threshold before HE (WTHE) [Wang&Ward’07];  Gray-level grouping: group histogram bins and redistribute groups iteratively [Chen’06];  Histogram modification as an optimization problem to adapt the enhancement level;  Linear black and white (B&W) stretching: decrease histogram bin length at the dark and bright ends;  Histogram smoothing for spikes: backward difference of the histogram as a measure of smoothness;  Weighted histogram approximation: the average local variance of all pixels with the same gray level is used to weight the approximation error, further avoiding spikes. (Pipeline blocks: measuring input contrast, modification of histogram, limitation of very low slope, B&W stretching.)
  • 10. Comparison Results Original image HE Weighted thresholded HE HE mean brightness preservation
  • 11. Gamma Correction  Power-Law Transformations: s = c·r^γ;  Gamma compression or expansion;  CRT intensity-to-voltage response follows a power function. (Figure panels: original, 0.5 < gamma < 1, gamma > 1, gamma < 0.5.)
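A small lookup-table implementation of the power-law transform in Python/OpenCV; the gamma value and image path are examples only:

    import numpy as np
    import cv2

    def gamma_correct(img, gamma):
        # s = 255 * (r/255)^gamma applied through a 256-entry lookup table.
        # gamma < 1 brightens (expands dark tones), gamma > 1 darkens.
        lut = np.round(255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
        return cv2.LUT(img, lut)

    corrected = gamma_correct(cv2.imread("input.jpg"), 0.7)  # placeholder path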
  • 12. Gamma Correction with Masking
  • 13. Bilateral Filter: Edge Preserving
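OpenCV exposes the bilateral filter directly; the parameter values below are illustrative (the spatial and range sigmas trade smoothing strength against edge preservation):

    import cv2

    img = cv2.imread("noisy.jpg")   # placeholder path
    # Each pixel becomes a weighted average of its neighbors, with weights that
    # combine spatial closeness (sigmaSpace) and intensity similarity
    # (sigmaColor), so strong edges are preserved while flat regions are smoothed.
    smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    cv2.imwrite("bilateral.jpg", smoothed)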
  • 14. Anisotropic Diffusion Filter  c(p, t) is large when p is not a part of an edge  c(p, t) is small when p is a part of an edge
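A minimal Perona-Malik style sketch of this diffusion in NumPy; the iteration count, kappa and step size are assumptions to be tuned per image:

    import numpy as np

    def anisotropic_diffusion(img, n_iter=15, kappa=30.0, step=0.2):
        # c = exp(-(|grad I| / kappa)^2) is close to 1 in flat regions
        # (strong smoothing) and close to 0 across edges (smoothing suppressed).
        u = img.astype(np.float64)
        for _ in range(n_iter):
            # differences toward the four neighbors (np.roll wraps at the
            # image border, which is acceptable for a sketch)
            d_n = np.roll(u, -1, axis=0) - u
            d_s = np.roll(u, 1, axis=0) - u
            d_e = np.roll(u, -1, axis=1) - u
            d_w = np.roll(u, 1, axis=1) - u
            flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (d_n, d_s, d_e, d_w))
            u += step * flux
        return u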
  • 15. Nonlocal Means  Patch-based (not pixel-based like the bilateral filter);  Averaging with nearby pixels of similar texture;  Pixel-wise weighting;  Patch-wise weighting;  More discriminative, yet still blurry in finer details.  Note: Intel OpenCV implements NLM for denoising.
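Since the slide notes that OpenCV ships an NLM implementation, here is a short usage example; h, the patch size and the search window size are typical but not mandated values:

    import cv2

    img = cv2.imread("noisy.jpg")   # placeholder path
    # Each pixel is averaged with pixels whose surrounding patches look similar,
    # not merely with spatially nearby pixels as in the bilateral filter.
    denoised = cv2.fastNlMeansDenoisingColored(img, None, h=10, hColor=10,
                                               templateWindowSize=7,
                                               searchWindowSize=21)
    cv2.imwrite("nlm.jpg", denoised)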
  • 16. NLM Results
  • 17. BM3D (Block Matching 3-D) Filter • For each patch, find similar patches in the neighborhood; • Group the similar patches into a 3-D stack; • SSD, SAD or kernel-based matching. • Perform a 3-D transform (approximated by 2-D + 1-D) and coefficient thresholding (sparsity in the transform domain); • DCT, WT, Walsh-Hadamard transform, … • Apply the inverse 3-D transform (1-D + 2-D); • Count the retained coefficients of each stack for its aggregation weight; • Also combine multiple patches in a collaborative way (aggregation); • Two stages: hard thresholding -> Wiener (soft). • Extension to the color domain: • YCbCr space; • Grouping only uses the Y component, but is applied to the Cb, Cr components.
  • 18. BM3D Flowchart
  • 19. BM3D Results
  • 20. Sparse Coding in Image Filtering • A cost function for the model Y = Z + n (noisy observation Y, latent clean image Z): • Solve {α̂_ij, Ẑ} = argmin_{α_ij, Z} λ‖Z − Y‖₂² + Σ_ij μ_ij ‖α_ij‖₀ + Σ_ij ‖D α_ij − R_ij Z‖₂² • Global proximity (first term), sparsity of the representations as the prior (second term), proximity of each selected patch (third term); • Break the problem into smaller problems: aim at minimization at the patch level.
  • 21. K-SVD Dictionary Learning  Extract overlapping patches from a single image;  clean or corrupted, even a reference (multiple frames)?  for example, 100k patches of 8x8 blocks;  Apply K-SVD to train a dictionary;  Size 64x256 (n = 64, dictionary size k = 256).  Lagrange multiplier lambda = 30/sigma of the noise;  The coefficients come from Orthogonal Matching Pursuit;  the maximal iteration count is 180 and the noise gain C = 1.15;  the number of nonzero elements L = 6 (sigma = 5).  Denoising by normalized weighted averaging:
  • 22. Image Denoising by Conv. Nets  Image denoising as a learning problem: train a Conv. Net;  Parameter estimation to minimize the reconstruction error.  Online learning (rather than batch learning): stochastic gradient;  Gradient updates from 6x6 patches sampled from 6 different training images;  Run like greedy layer-wise training for each layer.
  • 23. Image Denoising by MLP  Denoising as learning: map noisy patches to noise-free ones;  Patch size 17x17;  Training with different noise types and levels:  Sigma = 25; noise as Gaussian, stripe, salt-and-pepper, coding artifacts;  Feed-forward NN: MLP;  input layer 289-d, four hidden layers (2047-d), output layer 289-d.  input layer 169-d, four hidden layers (511-d), output layer 169-d.  40 million training images from LabelMe and the Berkeley segmentation dataset!  1000 testing images: McGill, Pascal VOC 2007;  GPU: slower than BM3D, much faster than K-SVD.  Deep learning can help: unsupervised learning from unlabelled data.
  • 24. Image Deblur with Denoising  Motion blur (camera or object): degradation by convolution of a latent image with a blur kernel during exposure;  Averaging of unaligned images along the motion trajectory;  Deblurring is an inverse problem: estimate the point spread function (PSF);  Multiple images or a single image:  [Yuan et al., 2007]: noisy/blurred image pairs, kernel estimated from the noisy image first;  Hardware-based: hybrid imaging (camera motion), coded aperture (blur kernel);  Presence of noise is a big problem (how to detect blur and noise?);  Ringing artifacts or amplification of noise in deblurring.  Non-blind deconvolution for single-image deblur (kernel known):  Lucy-Richardson, Wiener filter, LS, TV (total variation) etc.;  [Yuan et al. 2008]: multiscale LR with bilateral filter;  [Chan & Wong, 2009]: Laplacian prior as a Total Variation regularizer in LR;  [Xu & Jia, 2010]: spatial prior on blurred edge scale for TV regularization;  Blind deconvolution (BD) for single-image deblur (no kernel clue)  Ill-posed, only solved with assumptions or priors;  Smoothness, gradient or color priors (the two-color model) for MAP or regularization;
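To make the non-blind case concrete, here is a frequency-domain Wiener deconvolution sketch in NumPy (one of the classical options listed above, not any particular paper's method); the regularization constant k stands in for the unknown noise-to-signal ratio:

    import numpy as np

    def wiener_deconvolve(blurred, psf, k=0.01):
        # Non-blind deconvolution: the blur kernel (PSF) is assumed known.
        # Embed the PSF in a full-size array with its center at pixel (0, 0)
        # so that its FFT represents a centered kernel.
        pad = np.zeros(blurred.shape, dtype=np.float64)
        pad[:psf.shape[0], :psf.shape[1]] = psf
        pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
        H = np.fft.fft2(pad)
        G = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter in the frequency domain
        return np.real(np.fft.ifft2(W * G))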
  • 25. Image Deblur with Denoising  Spatially invariant: uniform BD  [Fergus et al. 2006]: variational Bayesian method with a gradient prior from natural image statistics;  [Jia 2007]: alpha matte for kernel estimation;  [Shan et al. 2008]: regularization with high-order partial derivatives;  [Cho & Lee, 2009]: edge prediction with a shock filter for kernel estimation;  [Levin et al., 2011]: efficient marginal likelihood maximization;  [Krishnan et al., 2011]: kernel estimation with normalized sparsity (L1 divided by L2);  Spatially variant: non-uniform BD  [Shan et al., 2007]: in-plane rotation estimation for deblur with iterative optimization;  [Whyte et al., 2010]: variational Bayesian with a geometric model for rotation;  [Gupta et al., 2010]: motion density basis for the kernel in RANSAC-based optimization.  Optical aberration: lens imperfection  [Schuler et al., 2012]: optic blur with a set of bases.  De-blur and de-noise together:  How to separate noise from blur?  [Joshi et al., 2009]: two-color model with Gaussian mixture for BD;  [Tai & Lin, 2012]: denoising first, with noise estimation in the blur kernel;  [Zhong et al., 2013]: directional low-pass filters.
  • 26. Deblur Results Input [Fergus et al. 2006] [Joshi et al., 2008] [Cho & Lee, 2009] [Xu & Jia, 2010] Input [Shan et al. 2008] [Krishnan et al., 2011] [Cho & Lee, 2009]
  • 27. Deblur and Denoising Results
  • 28. Non-Blind Deconvolution for Deblurring Blurred + Noise Basic Total Variation (TV) Laplacian TV Bilateral TV Bilateral Laplacian TV
  • 29. Non-Blind Deconvolution for Deblurring Blurred + Noise Basic Total Variation (TV) Laplacian TV Bilateral TV Bilateral Laplacian TV
  • 30. Blind Deconvolution for Deblurring kernel estimation with normalized sparsity kernel estimation with normalized sparsity
  • 31. Blind Deconvolution for Deblurring kernel estimation with normalized sparsity
  • 32. Image Deconvolution with Deep CNN  Establish the connection between traditional optimization-based schemes and a CNN architecture;  A separable structure is used as a reliable support for robust deconvolution against artifacts;  The deconvolution task can be approximated by a convolutional network by nature, based on the kernel separability theorem;  Kernel separability is achieved via SVD;  An inverse kernel of length 100 is enough for plausible deconvolution results;  Image deconvolution convolutional neural network (DCNN);  Two hidden layers: h1 holds 38 large-scale 1-D kernels of size 121×1, h2 applies 38 kernels of size 1×121 to each map in h1, and the output is a 1×1×38 kernel;  Random-weight initialization or initialization from the separable kernel inversion;  Concatenation of the deconvolution CNN module with a denoising CNN;  called “Outlier-rejection Deconvolution CNN (ODCNN)”;  2 million sharp patches together with their blurred versions in training.
  • 33. Image Deconvolution with Deep CNN
  • 34. Histogram-based Stretching  Saturate a percentage S1% of the dark pixels and a percentage S2% of the bright pixels;  Histogram-based estimation of the stretching interval.
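A percentile-based stretching sketch in NumPy along the lines described above; s1 and s2 are the saturated percentages and their defaults are illustrative:

    import numpy as np

    def stretch(gray, s1=1.0, s2=1.0):
        # Saturate s1% of the darkest and s2% of the brightest pixels, then
        # map the remaining intensity interval linearly onto [0, 255].
        lo, hi = np.percentile(gray, [s1, 100.0 - s2])
        out = (gray.astype(np.float64) - lo) / max(hi - lo, 1e-6) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)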
  • 35. Auto Color Equalization  Proposed by Rizzi, Gatta and Marini in 2002-2004;  Similar to the Retinex color perceptual model;  Color constancy to adjust RGB for the HVS [Land & McCann 1971];  Adapt local contrast by expanding or compressing the dynamic range;  Adapt the image to obtain global white balance;  A simplified model of the HVS, i.e. smoothed local HE;  Acceleration by polynomial approximation of the slope function or by intensity-level interpolation. (Figure: slope function and neighborhood weighting.)
  • 36. ACE Results
  • 37. Scaling the DCT Coefficients  Use of same scale factor for both DC and AC coefficients;  Scales chromatic components with same factor.  Adjustment of background illumination;  Use of DC coeff.;  Preservation of local contrast;  Enhancement factor fixed;  Preservation of colors: same scaling proportion in YUV space;  8x8 blocks for DCT (overlapping to avoid blocking artifact);
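A toy illustration of the idea (not the exact scheme of Mukherjee & Mitra): each 8x8 luminance block is scaled by a single factor derived from its DC term, so dark blocks are brightened while their local contrast ratios are preserved; target_mean and strength are made-up parameters:

    import numpy as np
    import cv2

    def dct_block_enhance(y, target_mean=128.0, strength=0.5):
        out = y.astype(np.float32).copy()   # borders not on the 8-grid pass through
        h, w = y.shape
        for i in range(0, h - h % 8, 8):
            for j in range(0, w - w % 8, 8):
                block = np.float32(y[i:i + 8, j:j + 8])
                coeffs = cv2.dct(block)
                block_mean = coeffs[0, 0] / 8.0   # DC term of an 8x8 DCT = 8 * mean
                # one scale factor for DC and AC alike; > 1 only for dark blocks
                k = 1.0 + strength * max(0.0, (target_mean - block_mean) / target_mean)
                out[i:i + 8, j:j + 8] = cv2.idct(coeffs * k)
        return np.clip(out, 0, 255).astype(np.uint8)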
  • 38. DCT Histogram Shifting with Alpha Rooting  Adapting spatial domain technique into transform domain;  Logarithmic DCT coefficient histogram;  Applying shift in positive direction;  Equalization of DCT coefficient after affine transform;  Combining histogram shifting with alpha rooting
  • 39. DCT Histogram Shifting Results
  • 40. Unsharp Masking  Edge-preserving filter to compute the detail signal: z(n,m) = x(n,m) − x_smooth(n,m);  Adaptive gain control: y(n,m) = x(n,m) + λ(n,m)·z(n,m);  Multiplicative-gain form: y(n,m) = a(n,m)·x(n,m), with the gain defined as in [Polesel’00];  A gain function in YENI [Arici’06] with a = 1, b = 7, c = 21, K = 1.
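A minimal version of the formulas above with a Gaussian smoother and a constant gain (adaptive schemes such as [Polesel'00] instead vary the gain with local activity); sigma and gain are illustrative values:

    import numpy as np
    import cv2

    def unsharp_mask(gray, sigma=3.0, gain=1.0):
        x = gray.astype(np.float32)
        x_smooth = cv2.GaussianBlur(x, (0, 0), sigma)   # z = x - x_smooth is the detail signal
        y = x + gain * (x - x_smooth)                   # y = x + lambda * z with constant lambda
        return np.clip(y, 0, 255).astype(np.uint8)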
  • 41. Adaptive UM Results
  • 42. Generalized Unsharp Masking  [Deng, IEEE T-IP, 2011];  Enhance both contrast and sharpness via the model component and the residual;  Reduce the halo effect with an edge-preserving filter;  Avoid out-of-range values by log-ratio and tangent operations. (Figure: detail enhancement for the log-ratio operation.)
  • 43. Generalized Unsharp Masking Result original only with CE only with DE with CE and DE
  • 44. Image-Decomposition based Detail Enhancement  Smooth base layer (large scale variations);  Residual detail layer (small scale details);  Contrast expansion/compression;  Compression in base layer;  Enhance in detail layers.  Edge preserving filter to extract the base layer;  Total Variation-based filter;  Bilateral filter;  EAW (Edge Avoiding Wavelet) filter;  Weighted LS filter;  Domain transform;  Multiple scales for flexible detail manipulation.
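A single-scale sketch of this decomposition with a bilateral filter as the edge-preserving smoother (the cited methods use multiple scales and other filters such as WLS or the domain transform); detail_gain and the filter sigmas are assumptions:

    import numpy as np
    import cv2

    def detail_boost(gray, detail_gain=2.5):
        x = gray.astype(np.float32)
        base = cv2.bilateralFilter(x, d=9, sigmaColor=30, sigmaSpace=7)  # base layer
        detail = x - base                                                # detail layer
        out = base + detail_gain * detail                                # amplify details only
        return np.clip(out, 0, 255).astype(np.uint8)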
  • 45. Some Results of DT-based Detail Manipulation (a) Input (b) Detail layer D0 (c) Detail layer D1 (d) Detail layer D2
  • 46. Some Examples of WLS-based Detail Enhancement Original Image Fine Medium Coarse Combined
  • 47. HDR Imagery  A starlit night has an average luminance level of around 10^-3 candelas/m^2, and daylight scenes are close to 10^5 cd/m^2;  Humans can see detail in regions that vary by 1:10^4 at any given adaptation level, beyond which the eye gets swamped by stray light (i.e., disability glare) and details are lost;  A CRT monitor's maximum display luminance is only around 100 cd/m^2;  A high-quality xenon film projector is still two orders of magnitude away from the optimal light level for acuity and color perception;  HDR (high dynamic range) images save each pixel with 4 bytes (32 bits);  Two main methods for generating HDR imagery:  Use physically based renderers, generating basically all visible colors;  Take photographs of a particular scene at different exposure times, apply radiometric calibration to the camera, and combine the calibrated images.
  • 48. Tone Mapping (Tone Reproduction)  Match one observer model applied to the scene to another observer model applied to the desired display image;  HDR needs tone mapping to display colors on an LDR monitor;  Tone mapping operators fall into 4 categories:  global (spatially uniform): compress images using an identical (non-linear) curve for each pixel;  local (spatially varying): achieve dynamic range reduction by modulating a non-linear curve for each pixel independently (local neighborhood);  frequency domain: reduce the dynamic range of image components selectively, based on their spatial frequency;  gradient domain: modify the derivatives of an image to achieve dynamic range reduction (gradient domain HDR compression);  Video tone mapping: temporal coherence;  Note: inverse tone mapping goes from LDR to HDR.
  • 49. Tone Mapping Based on Multi-Scale Decomposition for HDR Images  Input.img = Detail.img * Base.img: I(x,y) = R(x,y)*L(x,y);  Illumination layer (large-scale variations);  Reflectance layer (small-scale details);  Apply an edge-preserving filter for the decomposition too!  Bilateral, WLS, Domain Transform, BM3D, …  Compression in the illumination layer (tone mapping);  Reflectance layer left unchanged. (Flowchart: HDR image → ln() → edge-preserving filter → compress the base by k, add back the detail → exp() → LDR image.)
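A compact sketch of this pipeline in the spirit of Durand & Dorsey's bilateral-filter tone mapping; hdr_bgr is assumed to be a float32 radiance map (e.g. read with cv2.imread(..., cv2.IMREAD_UNCHANGED) from an .hdr file), and the sigmas and target contrast are illustrative:

    import numpy as np
    import cv2

    def tone_map(hdr_bgr, target_contrast=np.log10(50.0)):
        b, g, r = cv2.split(hdr_bgr)
        lum = 0.2126 * r + 0.7152 * g + 0.0722 * b + 1e-6
        log_lum = np.log10(lum)

        # edge-preserving decomposition of the log luminance
        base = cv2.bilateralFilter(log_lum.astype(np.float32), d=-1,
                                   sigmaColor=0.4, sigmaSpace=8)
        detail = log_lum - base

        # compress only the base (illumination-like) layer, keep the detail
        k = target_contrast / max(float(base.max() - base.min()), 1e-6)
        out_log = base * k + detail - float(base.max()) * k   # map white to 0 in log10
        new_lum = 10.0 ** out_log

        ratio = (new_lum / lum).astype(np.float32)
        ldr = cv2.merge([b * ratio, g * ratio, r * ratio])
        return np.clip(ldr * 255.0, 0, 255).astype(np.uint8)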
  • 50. Some Results of HDR Tone Mapping Bilateral WLS BM3D
  • 51. Some Results of HDR Tone Mapping Bilateral WLS BM3D
  • 52. Inverse Tone Mapping: LDR to HDR  How to display an LDR image on an HDR display screen?  Dodging and burning inspired both TM and iTM;  Dodging: blocking light from some image areas while illuminating others;  HDR companding [Li, Siggraph’05]: compress the contrast through tone mapping, then expand it;  Linear contrast scaling [Akyüz, Siggraph’07]: a global expansion method;  Inverse tone mapping [Banterle, SCCG’08]:  Map LDR to a middle dynamic range;  Find saturated areas by median cut, create an expand map from a density estimate;  Reconstruct lost luminance by interpolation (weights);  Attenuate artifacts by cross bilateral filtering (luminance-expand map).  Reverse tone mapping [Rempel, Siggraph’07]: similar to iTM;  Inverse gamma and noise filter for contrast stretching;  Smooth brightness and edge stopping for enhancement of saturated regions.
  • 53. Illustration of Reverse Tone Mapping: Original (LDR), HDR (dark), HDR (bright), HDR tone mapping
  • 54. Illustration of Reverse Tone Mapping: Original (LDR), HDR (dark), HDR (bright), HDR tone mapping
  • 55. Illustration of Reverse Tone Mapping: Original (LDR), HDR (dark), HDR (bright), HDR tone mapping
  • 56. Illustration of Reverse Tone Mapping: Original (LDR), HDR (dark), HDR (bright), HDR tone mapping
  • 57. Hue Preserving & Saturation Enhancement  Chromaticity diagram: gamut;  Hue preserving: HSI (hue-saturation-intensity) from RGB;  Methods of saturation enhancement:  1. Increase saturation by a fraction;  2. Increase saturation to the maximum based on hue and luminance;  3. Reduce luminance by a fraction, then increase saturation by a fraction;  4. Reduce luminance by a fraction, then increase saturation to the maximum;  Too strong color enhancement results in poor quality;  Increased saturation introduces noise in uniform areas;  “Out of gamut” problem in saturation scaling;  Saturation clipping: clip before transforming back to RGB;  S-type transformation: nonlinear hue preserving;  Saturation normalization: histogram equalization in normalized HSI.
  • 58. Gamut in CIE-XY Chromaticity Diagram
  • 59. Hue Saturation Intensity (HSI) Space  I = (R_N + G_N + B_N) / 3;  S = 1 − min(R_N, G_N, B_N) / I;  H = cos⁻¹{ ½[(R_N − G_N) + (R_N − B_N)] / [(R_N − G_N)² + (R_N − B_N)(G_N − B_N)]^(1/2) }, replaced by 2π − H when B_N > G_N (R_N, G_N, B_N are the normalized RGB components).
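The same conversion written out in NumPy (H in radians, S and I in [0, 1]; the input is assumed to be float RGB already scaled to [0, 1]):

    import numpy as np

    def rgb_to_hsi(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0
        s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-8)
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
        theta = np.arccos(np.clip(num / den, -1.0, 1.0))
        h = np.where(b <= g, theta, 2.0 * np.pi - theta)   # resolve the cos^-1 ambiguity
        return h, s, i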
  • 60. Hue Preserving Saturation Enhancement (a) original (b) linearly enhanced (c) s-type function (d) clipping (e) normalization
  • 61. Hue Preserving Saturation Enhancement (a) original (b) linearly enhanced (c) s-type function (d) Clipping (e) Normalization
  • 62. References  K. Zuiderveld: Contrast Limited Adaptive Histogram Equalization. Graphics Gems IV, 1994.  Tomasi, C., Manduchi, R. Bilateral filtering for gray and color images. ICCV’98.  N. Moroney. Local Color Correction Using Non-Linear Masking, IS&T Color Imaging, 2000.  C. Gatta, A. Rizzi, D. Marini, ACE: An automatic color equalization algorithm, European Conference on Color in Graphics Image and Vision (CGIV02), 2002.  Durand, F., Dorsey, J. Fast bilateral filtering for the display of high-dynamic-range images. ACM T-Graphics. 21(3), 2002.  A. Buades, B. Coll, J. Morel. A non-local algorithm for image denoising. CVPR, 2005.  R Fergus, B Singh, A Hertzmann, S. T. Roweis, W T. Freeman, "Removing camera shake from a single image", Siggraph, 2006.  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE T-IP, 16(8):2080–2095, 2007.  A Rempel, Trentacoste, Seetzen, Young, Heidrich, Whitehead, Ward. Ldr2Hdr: On-the-fly Reverse Tone Mapping of Legacy Video and Photographs. Siggraph, 2007.  L Yuan, J Sun, L Quan, HY Shum, "Image deblurring with blurred/noisy image pairs", SIGGRAPH, 2007.  J Jia, "Single Image Motion Deblurring Using Transparency", CVPR, 2007.  Z. Farbman et al., Edge-preserving decompositions for multi-scale tone and detail manipulation, ACM Siggraph, 2008.  J Mukherjee, S Mitra, Enhancement of color images by scaling the DCT coefficient, IEEE T-IP, 17(10), 2008.  Q Shan, J Jia, A Agarwala, "High-Quality Motion Deblurring From a Single Image", SIGGRAPH 2008.
  • 63. References  T Arici, S Dikbas,Y Altunbasak, A Histogram Modification Framework and Its Application for Image Contrast Enhancement. IEEE T-IP, 18(9), 2009.  D. Krishnan and R. Fergus. "Fast Image Deconvolution using Hyper-Laplacian Priors". NIPS, 2009.  S Cho, S Lee, "Fast Motion Deblurring", Siggraph Asia, 2009.  N. Joshi, C.L. Zitnick, R Szeliski, and D Kriegman, "Image Deblurring and Denoising using Color Priors", CVPR 2009.  O. Whyte, J. Sivic, A. Zisserman J. Ponce, "Non-uniform Deblurring for Shaken Images", CVPR 2010.  Q Shan, J Jia, S B Kang, Z Qin, "Using Optical Defocus to Denoise", CVPR, 2010  A Gupta, N Joshi, C. L. Zitnick, M Cohen, B Curless, "Single Image Deblurring Using Motion Density Functions", ECCV, 2010.  S. Cho, J. Wang, S. Lee. Handling Outliers in Non-blind Image Deconvolution. ICCV, 2011  A. Levin, Y. Weiss, F. Durand, W. T. Freeman. "Efficient Marginal Likelihood Optimization in Blind Deconvolution". CVPR, June 2011.  E. Gastal, M. Oliveira, Domain transform for edge-aware image and video processing, ACM SIGGRAPH 2011.  K. Panetta, J. Xia, S Agaian. Color image enhancement based on the discrete cosine transform coefficient histogram, J. of Electronics Imaging, 21(2), 2012.  Y Tai, S Lin, "Motion-aware noise filtering for deblurring of noisy and blurry image", CVPR, 2012.  C Schuler, "Blind Correction of Optical Aberrations", ECCV'2012.  L Zhong, S Cho, D Metaxas, S Paris, J Wang, "Handling Noise in Single Image Deblurring using Directional Filters", CVPR, 2013.
  • 64. Appendix: Image Decomposition for Illumination and Reflectance (IR)
  • 65. Intrinsic Image  Proposed in [Barrow & Tenenbaum, 1978];  Decomposition of illumination and reflectance;  Illumination (shading as shadow and indirect lighting): amount of light incident at a point, i.e. irradiance;  Reflectance (albedo): how the object reflects light.  Application: colorization, retexturing, shadow removal, tone mapping, matting, white balancing, de-hazing, detail enhancement etc. • It is an ill-posed problem;  Rely on user indications or precise geometry (rough depth) to disambiguate it [Bousseau et al., 2009][Shen et al., 2011];  Single image source: various assumptions about both illumination and reflectance, such as structural sparsity and neighboring smoothness.
  • 66. Intrinsic Image  Retinex theory [Land & McCann, 1971];  Reflectance is piecewise constant while illumination is smooth;  A variational method [Funt et al., 1992];  An edge-preserving filter does the factorization task;  Uniform albedo polyhedra, diffuse ambient lighting for shading [Sinha & Adelson, 1993];  Discriminate Junctions: T junctions as reflectance variation, arrow and Y junctions as illumination;  Learning-based [Tappen et al., 2005];  Trained on image derivatives to classify gradients.  Non-local texture prior [Shen et al., 2008];  A set of neighbor pixels share the same texture config.;  Follow a Retinex algorithm [Kimmel, 2003], added with the texture constraint.  Similar chromaticity [Shen & Yeo, 2011];  Same reflectance for neighboring pixels as sparsity.
  • 67. Intrinsic Image Decomposition Comparison Reflectance (left to right): [Shen et al.’08] – [Shen & Yeo’11] – ground truth Illuminance (left to right): [Shen et al.’08] – [Shen & Yeo’11] – ground truth Original
  • 68. Appendix: Image De-Hazing for Radiance, Airlight and Transmission
  • 69. Image Degradation Model  Dichromatic atmospheric scattering model: I(x) = t(x)J(x) + (1 − t(x))A;  J(x): surface radiance vector (haze free);  A: constant airlight color vector (veiling light);  t(x): medium transmission along the ray.  t(x) = exp(−r·d(x)) with d(x) as depth and r as the scattering coefficient.  The incoming light is blended with the airlight (by atmospheric particles);  The main visual effect of haze is the blurring of distant objects (a clue for depth inference, i.e. aerial perspective);  The loss of contrast and color due to haze or fog acts as image averaging with a constant color A.  De-hazing or de-fogging is ill-posed too!  multiple images (polarization) under different lighting [Joshi & Cohen, 2010] or additional information (depth or geometry) [Kopf et al., 2008];  Assumptions or strong priors, especially for a single image.  De-noise and de-haze simultaneously or sequentially.
  • 70. De-hazing (or De-fogging)  [Fattal, 2008]: every patch has uniform reflectance, and the appearance of the pixels within the patch is expressed in terms of shading and transmission.  shading and transmission signals are unrelated;  estimate the appearance of each patch by ICA;  Fails when the magnitude of the surface reflectance is much smaller than that of the airlight;  [Tan, 2008]: divide the image into a series of small patches; the corresponding patch in radiance should have a higher contrast;  neighboring pixels should have similar transmission values, formulated in a Markov Random Field and solved by graph-cut or belief propagation;  Produces over-enhanced images in practice.  [He et al., 2009]: dark channel prior (easy to extend to a bright channel prior);  The transmission of each patch is estimated from the minimum color component;  Soft matting ensures neighboring pixels have similar transmission values.  [Kratz & Nishino, 2009]: use natural statistics of albedo and depth;  Factorial MRF with statistically independent latent layers for albedo and depth;  Solved as a MAP problem by EM.  [Tarel & Hautiere, 2009]: bilateral filter to factor radiance and airlight;  Does not work well at depth discontinuities; could it be better with a guided filter?
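A minimal dark-channel-prior sketch following the degradation model I(x) = t(x)J(x) + (1 − t(x))A, without the transmission refinement (soft matting or guided filter) of the full method; the patch size, omega and t0 are the usual illustrative values:

    import numpy as np
    import cv2

    def dehaze_dark_channel(bgr, patch=15, omega=0.95, t0=0.1):
        img = bgr.astype(np.float64) / 255.0

        # dark channel: per-patch minimum over the three color channels
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
        dark = cv2.erode(img.min(axis=2), kernel)

        # airlight A: mean color of the brightest 0.1% dark-channel pixels
        n = max(int(dark.size * 0.001), 1)
        idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
        A = img[idx].mean(axis=0)

        # transmission estimate, then invert the degradation model
        t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
        t = np.maximum(t, t0)[..., np.newaxis]
        J = (img - A) / t + A
        return np.clip(J * 255.0, 0, 255).astype(np.uint8)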
  • 71. Guided Image Filtering  Problems with the bilateral filter:  Complexity;  Gradient distortion: preserves edges, not gradients;  Guided filter: in every local window ω_k, compute linear coefficients (a_k, b_k) from the window mean and variance of the guide, and output q_i as the average of a_k·I_i + b_k over all windows that cover pixel i;  Gradient preserving: q has an edge only if I has an edge;  Integral images give O(1) time per pixel;  Non-approximate: naturally O(N) time, independent of the window radius.
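A NumPy/OpenCV sketch of the guided filter as summarized above: per-window linear coefficients from the window means and variances, averaged with box filters so the cost is independent of the radius; radius and eps are example values, and I and p are single-channel float images:

    import numpy as np
    import cv2

    def guided_filter(I, p, radius=8, eps=1e-3):
        ksize = (2 * radius + 1, 2 * radius + 1)
        box = lambda x: cv2.boxFilter(x, ddepth=-1, ksize=ksize)   # window mean

        mean_I, mean_p = box(I), box(p)
        var_I = box(I * I) - mean_I * mean_I
        cov_Ip = box(I * p) - mean_I * mean_p

        a = cov_Ip / (var_I + eps)     # per-window coefficients of q = a*I + b
        b = mean_p - a * mean_I
        return box(a) * I + box(b)     # average a, b over all windows covering each pixel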
  • 72. Guided Image Filtering  Linear regression within each local window;  The bilateral/joint bilateral filter does not have this linear model! (Figure labels: linear regression, noise/texture.)
  • 73. De-hazing Results’ Comparison
  • 74. Fast Single Image Dehazing with Dark Channel Prior and Guided Filter Results
  • 75. Fast Single Image Dehazing with Dark Channel Prior and Guided Filter Results
  • 76. Fast Single Image Dehazing with Dark Channel Prior and Guided Filter Results
  • 77. Appendix: Machine Learning-based Image Enhancement
  • 78. Image Restoration using Online Photo Collections  Image restoration that leverages a large database of images gathered from the web;  Efficient visual search to find the closest images, which define the input’s visual context;  The visual context serves as an image-specific prior for image restoration;  Applications: white balance correction, exposure correction, and contrast enhancement.
  • 79. Personalization of image enhancement  First observe user preferences on a training set, and then learn a model of these preferences to personalize enhancement of unseen images;  The challenge of designing the system lies at the intersection of computer vision, learning, and usability;  Active selection of an instance subset that shares the highest mutual information with the rest of the high-dimensional space: defined as a sensor placement problem, i.e. each instance can be thought of as a possible sensor location where a probe is placed in order to “sense” the space of images;  Distance metric learning: learning a distance metric between images that reflects how far apart two images should be in terms of their enhancement parameters;  Enabling seamless browsing for training: a user interface that explores the space of possible enhancements for each of the training images and lets users indicate their preferences.
  • 80. Personalization of image enhancement (Figures: basic idea; control parameters.)
  • 81. Personal photo enhancement using example images  Correct special types of images (faces of a photographer’s family and friends) and use common faces across images to automatically perform both global and face-specific corrections;  Apply face detection to align faces between “good” and “bad” photos.
  • 82. Personal photo enhancement using example images Personal image enhancement pipeline
  • 83. Example-based image color and tone style enhancement  Learn implicit color and tone adjustment rules from examples;  Discover the underlying mathematical relationships optimally connecting the color and tone of corresponding pixels in all image pairs;  Locally approximated with a low-order polynomial model.
  • 84. Example-based image color and tone style enhancement
  • 85. Collaborative personalization of image enhancement  Do personalized preferences in image enhancement tend to cluster?  Can users be grouped accordingly?  Such clusters do exist!  Derive methods to learn statistical preference models from a group of users;  Collaborative filtering to automatically enhance novel images for new users.
  • 86. Collaborative personalization of image enhancement Overview of collaborative personalization of image enhancement
  • 87. Learning photographic global tonal adjustment  Creation of a high-quality reference dataset;  Predict a user’s adjustment from a large training set;  Describe a set of features and labels;  Single luminance remapping curve applied independently to each pixel;  Regression techniques such as linear least squares, LASSO, and GPR;  Sensor placement: select a small set of representative photos;  Difference learning: models and predicts differences between users. On this photo, the editors have produced a diverse set of outputs, from a sunset mood (b) to a daylight look (f).
  • 88. Context-based automatic local image enhancement  A local method: relies on local scene descriptors and context in addition to high-level image statistics;  Searching for the best transformation for each pixel in the given image and then discovering the enhanced image using a formulation based on Gaussian Random Fields.
  • 89. Context-based automatic local image enhancement Illustration of basic concept. An example highlighting the local enhancement Overview of coarse-to-fine local enhancement method.
  • 90. Image Restoration by CNN  Collect a dataset of clean/corrupted image pairs which are then used to train a specialized form of convolutional neural network.  Given a noisy image x, predict a clean image y close to the clean image y*  the input kernels p1 = 16, the output kernel pL = 8.  2 hidden layers (i.e. L = 3), each with 512 units, the middle layer kernel p2 = 1.  W1 512 kernels of size 16x16x3, W2 512 kernels of size 1x1x512, and W3 size 8x8x512.  This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of noise in natural images.  Train the weights Wl and biases bl by minimizing the mean squared error  Minimize with SGD  Regarded as: first patchifying the input, applying a fully-connected neural network to each patch, and averaging the resulting output patches.
  • 91. Image Restoration by CNN  Comparison.
  • 92. Learning-to-rank approach for image color enhancement  Take into account the intermediate steps taken in the enhancement process, which provide detailed information on the person’s color preferences;  Formulate the color enhancement task as a learning-to-rank (LTR) problem in which ordered pairs of images are used for training, and then various color enhancements of a novel input image can be evaluated from their corresponding rank values;  Ranking is the central component of many information retrieval (IR) systems;  The LTR method employed is Multiple Additive Regression Trees (MART).  From the parallels between the decision-tree structures used for ranking and the decisions made by a human during the editing process, breaking a full enhancement sequence into individual steps can facilitate training.
  • 93. Learning-to-rank approach for image color enhancement
  • 94. Automatic Photo Adjustment Using Deep Learning  Explore the use of deep learning in the context of photo editing;  Introduce an image descriptor (pixel, context and global) that accounts for the local semantics of an image. Middle (from top to bottom): input image, semantic label map and the ground truth for the Local Xpro effect; Left and right: color mapping scatter plots for four semantic regions.
  • 95. Automatic Photo Adjustment Using Deep Learning The architecture of the DNN Multi-scale spatial pooling schema Pipeline for constructing the semantic label map
  • 96. Automatic Photo Adjustment Using Deep Learning Three Stylistic Local Effects: 1. Local Xpro, 2. Foreground Pop-Out, 3. Watercolor.
  • 97. References  K. Dale, M. K. Johnson, K. Sunkavalli, W. Matusik, H. Pfister. Image restoration using online photo collections. ICCV, 2009;  S B Kang, A Kapoor, D Lischinski, Personalization of image enhancement. CVPR’10;  N Joshi, W Matusik, E H Adelson, D J Kriegman, Personal photo enhancement using example images. ACM T-Graph. 29, 2 (April), 2010;  B Wang, Y Yu, Y-Q Xu, Example-based image color and tone style enhancement. ACM SIGGRAPH 2011;  J Caicedo, A Kapoor, S B Kang, Collaborative personalization of image enhancement. CVPR, 2011;  V Bychkovsky, S Paris, E Chan, F Durand, Learning photographic global tonal adjustment with a database of input/output image pairs. CVPR ’11;  S J Hwang, A Kapoor, S B Kang. Context-based automatic local image enhancement. ECCV’12;  D Eigen, D Krishnan, R Fergus, Restoring An Image Taken Through a Window Covered with Dirt or Rain. ICCV’13;  J. Yan, S. Lin, S.B. Kang, X. Tang, A learning-to-rank approach for image color enhancement, CVPR 2014;  Z Yan, H Zhang, B Wang, S Paris, Y Yu, Automatic Photo Adjustment Using Deep Learning, ACM Siggraph Asia, 2014.
  • 98. Appendix: Chroma Contrast Preserving in Color to Grayscale Conversion  Color to grayscale conversion: dimension reduction from 3-D to 1-D;  The display has only the Y-range for contrast condensing (isoluminant color problem);  Local methods: local changes, contradictions, more computational cost;  Color contrast map to enhance the gray image;  The same color may output different gray values, and haloing artifacts may be produced.  Global methods: same luminance for the same RGB triplets;  Speed, naturalness, luminance range and color features optimized in the conversion;  Mostly, color order is strictly satisfied, though it might be ambiguous (culture, person).  Bala & Eschbach [2004, Color Imaging Conf.]: high-frequency chrominance;  Rasche et al. [2005, IEEE CG&A and EG]: MDS in color quantization;  Gooch et al. [2005, ACM Siggraph]: saliency preserving on local contrasts;  Grundland & Dodgson [2005, PR]: image-dependent piecewise linear mapping;  Neumann et al. [2007, CAGVI]: gradient-inconsistency correction;  Smith et al. [2008, Eurograph]: enhance grayscale to reproduce original contrast;  Lu, Xu & Jia [2012, ICCP]: relax color order constraints by human perception.
  • 99. Comparison of Color2Gray Conversion Original color image CIELab’s Conversion. Lu, Xu, Jia's Conversion. Gooch05's conversion Grundland05's Conversion. Smith08's Conversion Rasche05's conversion. Bala04's Conversion. Neumann07’s Conversion
  • 100. Appendix: Matlab Image functions  Histogram equalization:  J = histeq(I,hgram);  CLAHE: contrast limited adaptive HE  adapthisteq(L,'NumTiles',[4 4],'ClipLimit',0.005, 'Alpha', 0.6)*255;  Clipping with gamma correction:  J = imadjust(I,[low_in; high_in],[low_out; high_out],gamma);  Unsharp masking (binary mask BW is defined first)  h = fspecial('unsharp'); I2 = roifilt2(h,I1,BW);  Bilateral filter  [k,j] = meshgrid(-r:r,-r:r);  h = exp( -(j.^2 + k.^2)/(2*sigma_s^2) ) .* ... exp( -(w - w(r+1,r+1)).^2/(2*sigma_d^2) );  y(m,n) = h(:)'*w(:) / sum(h(:));  Gamma correction  Gamma=1.5; Correction = 255 * (x/255).^ Gamma;
  • 101. Appendix: OpenCV Image functions  Brightness and contrast adjustments  saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta );  Gamma correction: a LUT can accelerate it.  cvPow(img, trans, gamma);  cvConvertScaleAbs(trans, trans, 1, 0);  gpu::gammaCorrection();  Histogram Equalization: single channel  cvtColor( src, src, CV_BGR2GRAY );  equalizeHist( src, dst );  Unsharp masking  GaussianBlur(im, tmp, Size(5,5), 5);  addWeighted(im, 1.5, tmp, -0.5, 0, im);  Box filter for sharpening  boxFilter(image, bImg, -1, ksize, anchor, normalize, BORDER_REFLECT);  edgeImage = image - bImg;  image = image + gain*edgeImage;  Contrast Limited Adaptive Histogram Equalization  OpenCV 2.4.5 version;  gpu::CLAHE::apply();