3. INTRODUCTION
• Different natural phenomena can reduce the quality of images and diminish visibility.
• Visibility distance is decreased because of the absorption and scattering of
light by the atmospheric particles.
• Images of outdoor scenes, captured during fog conditions, are drastically
degraded.
4. • Due to the presence of fog, the visibility distance decreases exponentially.
• Negative effects of fog on the quality of the image are the loss of contrast and
the alteration of the natural colours in the captured image.
• The scattering effect of the transmitted light causes additional lightness in parts
of the image. This effect is called air-light or atmospheric veil.
• Several algorithms have been proposed for restoring the contrast of foggy images.
5. • These methods can be categorized into two groups: model-based and non-model-based enhancement techniques.
• Non-model based methods perform image enhancement relying only on the
information obtained from the image.
• Unfortunately, these methods do not maintain colour fidelity and are not suitable for real-time computer vision.
• Model-based contrast restoration techniques can be further divided into two categories: with given depth and with unknown depth.
6. • When the depth is supposed to be known, this information can be used to
restore the original contrast of the image.
• The depth is inferred by using the altitude, tilt and position of the camera, or through the manual approximation of the sky area and vanishing point in the captured image.
• Contrast restoration techniques can be used in a wide range of applications, such as photography.
7. KOSCHMIEDER’S LAW
• It gives a relationship between the attenuated luminance L of an object at distance d and the luminance L0 close to the object.
• L∞ is the atmospheric luminance and β is the extinction coefficient of the fog.
• The equation states that the luminance of an object seen through fog is attenuated by an exponential factor e^(−βd).
L = L0 e^(−βd) + L∞ (1 − e^(−βd)) ……………(1)
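Koschmieder's law can be sketched numerically; this is a minimal illustration (the function and parameter names are ours, not from the original work):

```python
import math

def koschmieder(L0, d, beta, L_inf=255.0):
    """Luminance of an object seen at distance d through fog (eq. 1).

    L0    -- luminance close to the object
    d     -- distance to the object
    beta  -- extinction coefficient of the fog
    L_inf -- atmospheric (sky) luminance
    """
    t = math.exp(-beta * d)            # exponential attenuation factor
    return L0 * t + L_inf * (1.0 - t)  # attenuated object + atmospheric veil
```

As d grows, the first term vanishes and the perceived luminance tends to L∞, which is exactly the loss of contrast described above.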
8.
• The atmospheric veil obtained from daylight scattered by fog between the object
and the observer is expressed by L∞ (1 − e−βd )
• The response function of a camera can be applied to Koschmieder's equation to model the mapping from scene luminance to image intensity.
• Thus, the intensity perceived in the image is the result of a function f applied to
the sum of the air light A and the direct transmission T
I = f(L) = f(T + A), with T = L0 e^(−βd) and A = L∞ (1 − e^(−βd)) ……………(2)
9.
• Assuming that the camera response f is a linear mapping, equation (2) becomes
I = f(T + A) = f(L0) e^(−βd) + f(L∞) (1 − e^(−βd)) ……….(3)
• Denoting
R = f(L0),  A∞ = f(L∞) ……….(4)
we get
I = R e^(−βd) + A∞ (1 − e^(−βd)) ……….(5)
where R represents the pixel intensity of the image without fog and A∞ is the intensity of the sky in fog conditions.
11.
• Our approach is to estimate the atmospheric veil and then to use it in
order to compute the original fog free image.
A. Algorithm Overview
• By using a single image we are not able to compute the depth in the scene; hence we introduce the notion of the atmospheric veil (V):
V(x, y) = A∞ (1 − e^(−βd(x,y))) ……….(6)
I = R e^(−βd) + V ……….(7)
I = R (1 − V/A∞) + V ……….(8)
R = A∞ (I − V) / (A∞ − V) ……….(9)
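Once V is known, inverting the fog model to recover the fog-free intensity R is a single expression; a NumPy sketch (the clipping and the small epsilon are our additions, to keep the division safe):

```python
import numpy as np

def restore(I, V, A_inf=255.0):
    """Recover the fog-free intensity R from eq. (9): R = A_inf (I - V) / (A_inf - V)."""
    I = np.asarray(I, dtype=np.float64)
    V = np.asarray(V, dtype=np.float64)
    R = A_inf * (I - V) / np.maximum(A_inf - V, 1e-6)  # guard against V == A_inf
    return np.clip(R, 0.0, 255.0)                      # keep valid intensities
```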
12.
• The intensity of the sky (A∞) is considered to be equal to 255 or it can be
inferred as the maximum intensity in the image.
B. Obtaining the Atmospheric Veil
• Our visibility enhancement method must be able to work with both grayscale and color images.
• To compute the atmospheric veil for color images, we use as input a gray-level image W that consists of the minimum of each color channel (R, G, B).
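Computing the gray-level input W is a per-pixel minimum over the color channels; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def dark_channel_input(img):
    """Gray-level image W: the minimum of the R, G, B channels at each pixel.

    A grayscale image (2-D array) is returned unchanged.
    """
    img = np.asarray(img)
    if img.ndim == 2:           # already grayscale
        return img
    return img.min(axis=2)      # minimum across the color channels
```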
13.
• This image is called the dark channel prior of a color image.
• The obtained veil V provides the amount of white that must be subtracted
from each color channel
• In order to infer the atmospheric veil we must first examine some
properties that it must have.
• First, a photometric constraint must be introduced: V must be greater than or equal to zero, and V must be lower than or equal to W
0 ≤ V ≤ W ……….(10)
14.
• Another property of the atmospheric veil is that V must be a smooth function.
• The no-black-pixel constraint states that the local standard deviation of the enhanced pixels around a given pixel position must be lower than their local average:
std(R) ≤ Average(R) ……….(11)
• From this we can infer that the veil is smaller than or equal to the difference between the local average and the local standard deviation of the input image W:
V ≤ Average(W) − std(W) ……….(12)
15.
• A median filter with a variable size k is applied on the W image instead of the classical average:
M = median_k(W)
• The standard deviation is computed classically at each pixel of the obtained W image.
• Fog is more present in the top part of the image and tends to disappear in
the bottom part of the image.
• So the median filter along columns is the one that yields the best results.
V ≤ M − std(W) ……….(13)
16.
• Only a percentage p of this value is used to compute the atmospheric veil at each pixel.
• This percentage controls the strength of the restoration process.
• The usual values for p range from 85% to 99%.
• So the equation for computing the atmospheric veil becomes
V = max(min(p (M − std(W)), W), 0) ……….(14)
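Equation (14) as we read it, V = max(min(p (M − std(W)), W), 0), can be sketched as follows; the window size k and the brute-force column-wise filtering are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def estimate_veil(W, k=31, p=0.95):
    """Estimate the atmospheric veil from the gray-level input W.

    M is a median filter of size k applied along the columns of W,
    s is the standard deviation in the same vertical window, and
    p (0.85 .. 0.99) controls the strength of the restoration.
    """
    W = np.asarray(W, dtype=np.float64)
    H = W.shape[0]
    M = np.empty_like(W)
    s = np.empty_like(W)
    h = k // 2
    for i in range(H):                       # vertical window around row i
        lo, hi = max(0, i - h), min(H, i + h + 1)
        win = W[lo:hi, :]
        M[i] = np.median(win, axis=0)
        s[i] = win.std(axis=0)
    # photometric constraint 0 <= V <= W, as in eq. (14)
    return np.clip(p * (M - s), 0.0, W)
```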
17.
C. Exponential Inference of the Atmospheric Veil
• The atmospheric veil computed with this method is overcompensated in the bottom part of the image, resulting in very dark restored images.
• So this method is not suited for traffic scenes, because it does not model the
link between the veil and the distance to the objects in the scene.
• There is a need for a smooth exponential function.
• For this reason we will model an exponential filter on the atmospheric veil
such that this exponential function decreases inversely with the distance.
18.
• Our approach is to model the atmospheric veil with an exponential filter in
order to recover this link.
• We treat the whole atmospheric veil as an exponential function.
19.
• The final formula for computing the atmospheric veil is
V_final = V · G
where G is an exponential function with values between 0 and 1.
• For modelling the function G, start from two exponential functions from the
partition of unity
• We can call them squared and modulus partition of unity functions.
• Let fso : [−a, a] → [0, 1] (squared) and fmo : [−a, a] → [0, 1] (modulus) have the following forms:
fso(x) = e^(1 − a²/(a² − x²)),  fmo(x) = e^(1 − a/(a − |x|)) ……….(15)
20.
• To use the functions fso and fmo in image processing, we must modify them so that they are defined on the image domain and take values in the [0, 1] interval.
• For this reason the variable x will denote the image lines.
• Hence we obtain fs : [0,H − 1] → [0, 1] and fm : [0,H − 1] → [0, 1], two new
exponential functions with the following form:
fs(x) = e^(1 − (H−1)²/((H−1)² − x²)) ……….(16)
fm(x) = e^(1 − (H−1)/((H−1) − x)) ……….(17)
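Assuming the standard partition-of-unity bump forms for fs and fm (our reading of equations (16) and (17)), the two functions can be sketched as:

```python
import math

def f_squared(x, H):
    """Squared-type exponential: 1 at the top line (x = 0), 0 at x = H - 1."""
    a = H - 1.0
    if x >= a:
        return 0.0
    return math.exp(1.0 - a * a / (a * a - x * x))

def f_modulus(x, H):
    """Modulus-type exponential: 1 at the top line (x = 0), 0 at x = H - 1."""
    a = H - 1.0
    if x >= a:
        return 0.0
    return math.exp(1.0 - a / (a - abs(x)))
```

Both decrease smoothly from 1 to 0 across the image lines, which matches the observation that fog is strongest in the top part of the image.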
22.
• A translation of our exponential function (along the x axis) has to be applied by using a linear isomorphism A : [vh, Max] → [0, H − 1],
A(x) = a·x + b
having the following properties:
A(vh) = 0 ……….(20)
A(Max) = H − 1 ……….(21)
23.
• By solving the system of equations (20) and (21), we obtain a = c and b = −c·vh, so the final exponential function becomes
G(x) = f(c (x − vh)) ……….(22)
where c = (H − 1)/(Max − vh) and f stands for either fs or fm.
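Assuming the boundary conditions A(vh) = 0 and A(Max) = H − 1, solving the linear system gives a = c = (H − 1)/(Max − vh) and b = −c·vh; a small sketch:

```python
def line_map(vh, Max, H):
    """Linear isomorphism A : [vh, Max] -> [0, H - 1] with A(x) = a*x + b.

    From A(vh) = 0 and A(Max) = H - 1:
    a = c = (H - 1) / (Max - vh) and b = -c * vh, so A(x) = c * (x - vh).
    """
    c = (H - 1.0) / (Max - vh)
    return lambda x: c * (x - vh)
```

The interval [vh, Max] is thus stretched onto the image-line range [0, H − 1].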
28.
• The rate of new visible edges in the enhanced image is a metric that can be used to assess the quality of the restoration process.
• The rate of new visible edges e is given by the formula below, where nr is the total number of visible edge points in the enhanced image and no is the number of visible edge points in the original foggy image:
e = (nr − no) / no ……….(23)
• A second metric is the percentage σ of pixels that become completely black or completely white after restoration, where ns is the number of such saturated pixels and dimx, dimy are the image dimensions:
σ = 100 · ns / (dimx · dimy) ……….(24)
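Both metrics reduce to simple ratios; a sketch (the edge counting itself, e.g. by thresholding a gradient image, is left out):

```python
def new_edge_rate(nr, no):
    """Rate of new visible edges, eq. (23): e = (nr - no) / no."""
    return (nr - no) / no

def saturation_percent(n_sat, dim_x, dim_y):
    """Percentage of pixels completely black or white after restoration
    (our reading of eq. 24); n_sat counts the saturated pixels."""
    return 100.0 * n_sat / (dim_x * dim_y)
```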
29. ADVANTAGES
• Only a single image is needed for this enhancement algorithm.
• Able to obtain superior reconstructions of the original fog-free image when
compared with traditional methods.
• It has the ability to adapt the model in accordance to the density of the fog.
• Suitable for contrast restoration in both homogeneous and heterogeneous
fog conditions.
• It can perform contrast restoration in real time for both color and grayscale
images.
30. CONCLUSION
• A new image enhancement method was proposed by taking into account the
exponential decay present in foggy images.
• Computes the restored image by estimating the atmospheric veil.
• The clarity of the reconstructed scene is higher than that obtained with other median-type filters, especially in regions of the image with many details.
• Methods using the squared and modulus translated exponential functions
achieve the best results for image enhancement in fog conditions.
31. REFERENCES
• V. Cavallo, M. Colomb, and J. Doré, “Distance perception of vehicle rear lights in fog.” Human Factors,
vol. 43, no. 3, pp. 442–451, 2001.
• M. Negru and S. Nedevschi, “Image based fog detection and visibility estimation for driving assistance systems,” in Proc. IEEE Int. Conf. ICCP, Sep. 2013, pp. 163–168.
• J. Oakley and H. Bu, “Correction of simple contrast loss in color images,” IEEE Trans. Image Process.,
vol. 16, no. 2, pp. 511–522, Feb. 2007.
• M. Pavlic, H. Belzner, G. Rigoll, and S. Ilic, “Image based fog detection in vehicles,” in Proc. IEEE IV,
Jun. 2012, pp. 1132–1137.
• K. Mori et al., “Recognition of foggy conditions by in-vehicle camera and millimeter wave radar,” in
Proc. IEEE Intell. Veh. Symp., Jun. 2007, pp. 87–92.