# Write about histogram equalization.

### Histogram Equalization:

Consider for a moment continuous functions, and let the variable r represent the gray levels of the image to be enhanced. We assume that r has been normalized to the interval [0, 1], with r = 0 representing black and r = 1 representing white. Later, we consider a discrete formulation and allow pixel values to be in the interval [0, L-1]. For any r satisfying the aforementioned conditions, we focus attention on transformations of the form

s = T(r)

that produce a level s for every pixel value r in the original image. For reasons that will become obvious shortly, we assume that the transformation function T(r) satisfies the following conditions:

(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1; and

(b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.

The requirement in (a) that T(r) be single valued is needed to guarantee that the inverse transformation will exist, and the monotonicity condition preserves the increasing order from black to white in the output image. A transformation function that is not monotonically increasing could result in at least a section of the intensity range being inverted, thus producing some inverted gray levels in the output image. Finally, condition (b) guarantees that the output gray levels will be in the same range as the input levels. Figure 4.1 gives an example of a transformation function that satisfies these two conditions. The inverse transformation from s back to r is denoted

r = T^{-1}(s),  0 ≤ s ≤ 1

It can be shown by example that even if T(r) satisfies conditions (a) and (b), it is possible that the corresponding inverse T^{-1} (s) may fail to be single valued.
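One such example can be sketched numerically. In the minimal Python snippet below, the piecewise transformation T is hypothetical, chosen purely for illustration: it is monotonically increasing (but flat on a subinterval, as the CDF of a density that is zero there would be), so it satisfies conditions (a) and (b); yet two distinct inputs map to the same output, so T^{-1}(s) is not single valued at that level.

```python
import numpy as np

# Hypothetical transformation: monotonically increasing on [0, 1] but
# flat on [0.25, 0.75]. Satisfies conditions (a) and (b).
def T(r):
    r = np.asarray(r, dtype=float)
    return np.where(r < 0.25, 2.0 * r,
           np.where(r < 0.75, 0.5,
                    0.5 + 2.0 * (r - 0.75)))

r = np.linspace(0.0, 1.0, 1001)
s = T(r)

# Condition (a): never decreasing; condition (b): confined to [0, 1].
assert np.all(np.diff(s) >= 0)
assert s.min() >= 0.0 and s.max() <= 1.0

# But two distinct inputs share one output value, so the inverse
# at s = 0.5 cannot pick out a single r.
assert T(0.4) == T(0.6) == 0.5
```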

Fig. 4.1 A gray-level transformation function that is both single valued and monotonically increasing.

The gray levels in an image may be viewed as random variables in the interval [0, 1]. One of the most fundamental descriptors of a random variable is its probability density function (PDF). Let p_{r}(r) and p_{s}(s) denote the probability density functions of random variables r and s, respectively, where the subscripts on p indicate that p_{r} and p_{s} are different functions. A basic result from elementary probability theory is that, if p_{r}(r) and T(r) are known and T^{-1}(s) satisfies condition (a), then the probability density function p_{s}(s) of the transformed variable s can be obtained using a rather simple formula:

p_{s}(s) = p_{r}(r) |dr/ds|

Thus, the probability density function of the transformed variable, s, is determined by the gray-level PDF of the input image and by the chosen transformation function. A transformation function of particular importance in image processing has the form

s = T(r) = ∫_{0}^{r} p_{r}(w) dw

where w is a dummy variable of integration. The right side of the equation above is recognized as the cumulative distribution function (CDF) of random variable r. Since probability density functions are always positive, and recalling that the integral of a function is the area under the function, it follows that this transformation function is single valued and monotonically increasing, and therefore satisfies condition (a). Similarly, the integral of a probability density function for variables in the range [0, 1] also is in the range [0, 1], so condition (b) is satisfied as well.
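As a quick numerical sanity check of this CDF transformation, the Python sketch below applies T to samples of r and shows that the resulting levels s spread roughly uniformly over [0, 1]. The specific density p_{r}(r) = 2r is an assumption chosen only because its CDF, T(r) = r^2, has a simple closed form.

```python
import numpy as np

# Assumed example density: p_r(r) = 2r on [0, 1], whose CDF is T(r) = r**2.
rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
r = np.sqrt(u)        # inverse-CDF sampling: r has density p_r(r) = 2r

s = r**2              # apply T(r), the CDF of r

# The histogram of s should be flat: each of 10 bins holds ~10% of samples.
hist, _ = np.histogram(s, bins=10, range=(0.0, 1.0))
print(hist / len(s))  # every entry close to 0.1
```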

Given the transformation function T(r), we find p_{s}(s) by applying the formula above. We know from basic calculus (Leibniz's rule) that the derivative of a definite integral with respect to its upper limit is simply the integrand evaluated at that limit. In other words,

ds/dr = dT(r)/dr = p_{r}(r)

Substituting this result for dr/ds, and keeping in mind that all probability values are positive, yields

p_{s}(s) = p_{r}(r) |dr/ds| = p_{r}(r) · (1 / p_{r}(r)) = 1,  0 ≤ s ≤ 1

Because p_{s}(s) is a probability density function, it must be zero outside the interval [0, 1] in this case, because its integral over all values of s must equal 1. We recognize the form of p_{s}(s) as a uniform probability density function. Simply stated, we have demonstrated that performing the transformation T(r) yields a random variable s characterized by a uniform probability density function. It is important to note that T(r) depends on p_{r}(r), but the resulting p_{s}(s) always is uniform, independent of the form of p_{r}(r).

For discrete values we deal with probabilities and summations instead of probability density functions and integrals. The probability of occurrence of gray level r_{k} in an image is approximated by

p_{r}(r_{k}) = n_{k} / n,  k = 0, 1, 2, ..., L-1

where, as noted at the beginning of this section, n is the total number of pixels in the image, n_{k} is the number of pixels that have gray level r_{k}, and L is the total number of possible gray levels in the image. The discrete version of the transformation function given above is

s_{k} = T(r_{k}) = Σ_{j=0}^{k} p_{r}(r_{j}) = Σ_{j=0}^{k} n_{j} / n,  k = 0, 1, 2, ..., L-1

Thus, a processed (output) image is obtained by mapping each pixel with level r_{k} in the input image into a corresponding pixel with level s_{k} in the output image. As indicated earlier, a plot of p_{r}(r_{k}) versus r_{k} is called a histogram. The transformation (mapping) given above is called histogram equalization or histogram linearization. It is not difficult to show that this transformation satisfies conditions (a) and (b) stated previously. Unlike its continuous counterpart, however, it cannot be proved in general that this discrete transformation will produce the discrete equivalent of a uniform probability density function, which would be a uniform histogram.
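The discrete procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation: it assumes an 8-bit image (L = 256), the function name `equalize` is mine, and the s_{k} values in [0, 1] are rescaled back to the display range [0, L-1] and rounded, as is customary in practice.

```python
import numpy as np

def equalize(image, L=256):
    """Histogram equalization for an unsigned-integer image (illustrative)."""
    n = image.size                                   # total number of pixels
    hist = np.bincount(image.ravel(), minlength=L)   # n_k for each level r_k
    p = hist / n                                     # p_r(r_k) = n_k / n
    cdf = np.cumsum(p)                               # s_k = sum_{j<=k} p_r(r_j)
    # Rescale s_k from [0, 1] to the display range [0, L-1] and round.
    mapping = np.round((L - 1) * cdf).astype(image.dtype)
    return mapping[image]                            # map each r_k to s_k

# Example: a dark image whose levels cluster near 0 gets spread out.
img = np.array([[0, 1, 1, 2],
                [2, 2, 3, 3],
                [3, 3, 3, 4],
                [4, 5, 6, 7]], dtype=np.uint8)
print(equalize(img))
```

Because the mapping is the (rounded, rescaled) cumulative sum of nonnegative probabilities, it is monotonically increasing and stays within [0, L-1], mirroring conditions (a) and (b).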

Fig. 4.2 (a) Images from Fig. 3. (b) Results of histogram equalization. (c) Corresponding histograms.

The inverse transformation from s back to r is denoted by

r_{k} = T^{-1}(s_{k}),  k = 0, 1, 2, ..., L-1

###### Raju Singhaniya

Oct 14, 2021