# Problem 3

## Noise Filtering Problem

### Objective and Learning Outcomes
The objective of this experiment is for students to eliminate or reduce the effect of noise present in an image. From this experiment, students will acquire the following skills:
1. Learn about spatial- and frequency-domain filtering operations.
2. Apply various noise-filtering techniques to reduce the effect of noise.
3. Choose the appropriate filtering technique for different types of noise.
4. Implement filtering operations on a given image in Matlab.
### Introduction
The filtering process can be applied either directly to the image itself or in a transform domain. Filtering the image directly using its x and y coordinates is known as spatial-domain filtering. In the transform-domain approach, the image is normally transformed into the frequency domain by means of the 2-D Fourier transform; filtering in this domain is therefore known as frequency-domain filtering.
### Spatial-Domain Filtering Approach
Filtering in the spatial domain involves convolving the noisy image with a mask or window of size w x w, where w is normally an odd integer such as 3, 5, or 7; it rarely gets larger than these sizes. A 3 x 3 mask is shown in Figure 1, and the center of the window (w5) is aligned with the pixel to be modified.

w1 w2 w3
w4 w5 w6
w7 w8 w9

Figure 1 – A 3 x 3 filter mask
In the convolution process, this mask is multiplied point-by-point with the corresponding 3 x 3 area of the image, with the pixel under the center of the mask (w5) being the pixel to be modified. The results of the point-wise multiplications are then summed up, and this value replaces the center pixel. This process is illustrated in Figure 2.
Filter mask:

w1 w2 w3
w4 w5 w6
w7 w8 w9

Image to be filtered:

z1  z2  z3  z10
z4  z5  z6  z11
z7  z8  z9  z12
z13 z14 z15 ...

Figure 2 – The 3 x 3 mask overlaid on the top-left corner of the image to be filtered
From Figure 2 above, the filtering operation works as follows. At the beginning, when the mask is overlaid onto the image, w5 (the center of the filter mask) is aligned with z5 (the pixel in the image to be filtered). The value of z5 is replaced by a value computed as follows:

z5' = w1·z1 + w2·z2 + w3·z3 + w4·z4 + w5·z5 + w6·z6 + w7·z7 + w8·z8 + w9·z9        Eq. (1)

Once this is done, the filter mask is shifted to the right so that its center, w5, is now aligned with z6 (refer to Figure 2 above). Then the point-wise multiplication and addition of Eq. 1 is repeated, as given in Eq. 2 below:

z6' = w1·z2 + w2·z3 + w3·z10 + w4·z5 + w5·z6 + w6·z11 + w7·z8 + w8·z9 + w9·z12        Eq. (2)
Note that as the filter mask is shifted to the right, the filter kernel is multiplied by the corresponding image pixel values. To prevent loss of information, the result is normally written to a different memory array of the same size as the image array. In other words, the new value is not overwritten onto the image array.
The process is repeated until the mask reaches the end of the row. Next, the mask is aligned with the subsequent row, i.e. centered at z8 (refer to Figure 2 above), and the point-wise multiplication and addition are computed again. The filter mask is shifted in this manner until it reaches the 2nd-last row and 2nd-last column of the image. This process of shifting the filter mask from left to right and top to bottom while calculating the sum of the point-wise multiplications is known as the 2-D convolution process.

Note that in this convolution process, pixels in the first and last rows, as well as the first and last columns, are not changed, since the filter mask cannot fit entirely within the image there. This is the case for a 3 x 3 mask; in general, for a w x w mask, the number of unprocessed rows and columns is w – 1 (that is, (w – 1)/2 on each side).
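The shifting-and-summing procedure described above can be sketched in a few lines. The following is an illustrative Python/NumPy version of the algorithm (the lab itself asks for Matlab); the function name `spatial_filter` is a placeholder of my own choosing:

```python
import numpy as np

def spatial_filter(img, mask):
    """Slide a w x w mask over the image, replacing each interior pixel
    with the sum of the point-wise products (Eq. 1). Border pixels that
    the mask cannot fully cover are left unchanged, as described above."""
    w = mask.shape[0]
    r = w // 2                       # border width on each side: (w - 1)/2
    out = img.astype(float).copy()   # separate output array (no overwriting)
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            region = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.sum(region * mask)   # point-wise multiply and sum
    return out
```

Passing, for example, `np.ones((3, 3)) / 9.0` as the mask reproduces the averaging filter discussed below.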
Since the computation in Eq. (1) (as well as Eq. 2) involves not only the pixel to be modified but also its neighbours, this process is generally called neighbourhood processing. For programming purposes, the center pixel is normally denoted as being at location (x, y), and its surrounding or neighbouring pixels are located according to the diagram shown in Figure 3 below. Note that positive y points downward while positive x points to the right.
(x–1, y–1)   (x, y–1)   (x+1, y–1)
(x–1, y)     (x, y)     (x+1, y)
(x–1, y+1)   (x, y+1)   (x+1, y+1)

Figure 3 – Neighbourhood coordinate system
The question to ask, then, is: what are the values of the w's? Obviously, different values of w give different results. Here, several types of filter mask will be studied.
#### 1. Averaging Filter

The filter mask for the averaging filter is shown in Figure 4. The kernel of this filter consists of constant 1's. The scaling factor of 1/9 guarantees that the result of summing the point-wise multiplications does not run off the allowable dynamic range of the intensity levels. For example, for an 8-bit image, the scaling factor ensures that the resulting intensity does not exceed 255.
1 1 1
1 1 1
1 1 1
Figure 4 – Averaging filter
This type of filter can be used to reduce uniform noise as well as Gaussian noise. Another important aspect of using this filter is to keep the window or filter size small. A larger filter size (such as 9 x 9 or bigger) causes the resulting filtered image to be blurred; important information such as edges becomes less sharp.
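As a small numeric illustration of the averaging filter (a Python/NumPy sketch, not the Matlab code the lab asks for), consider a flat gray region with one noisy "spike" pixel:

```python
import numpy as np

# 3 x 3 averaging mask: all ones, scaled by 1/9 so the weights sum to 1
avg_mask = np.ones((3, 3)) / 9.0

# A flat gray patch (intensity 100) with a noisy spike of 255 at its center
patch = np.full((3, 3), 100.0)
patch[1, 1] = 255.0

# Filtering the center pixel: sum of the point-wise products (Eq. 1)
filtered_center = np.sum(patch * avg_mask)
print(filtered_center)   # (8*100 + 255)/9 ≈ 117.2 -- the spike is pulled toward the background
```

The spike is strongly attenuated, but note that every neighbour of the spike would also be brightened slightly, which is exactly the blurring effect described above.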
#### 2. Gaussian Lowpass Filter

The Gaussian filter is preferable for overcoming many types of noise; it is very effective for handling Gaussian (normal) noise. The design of its kernel is mainly heuristic, based on observation rather than mathematically derived. Based on its shape (see the theory for the noise modeling problem – Lab 2), the center value should be larger and positive compared to its neighbouring values. One possible choice is shown in Figure 5.
1 2 1
2 4 2
1 2 1
Figure 5 – Gaussian lowpass filter mask
Again, the factor of 1/16 ensures that the resulting pixel value does not exceed the allowable dynamic range for the given image.

As for the averaging filter, the filter size for the Gaussian lowpass filter should not be big, for the same reason. The sizes usually used for this filter are 3 x 3, 5 x 5, and 7 x 7.
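To check the normalization claim, here is a quick Python/NumPy sketch (illustrative only) showing that the 1/16-scaled kernel of Figure 5 sums to 1, so a flat region keeps its intensity:

```python
import numpy as np

# The 3 x 3 Gaussian-like kernel from Figure 5, scaled by 1/16
gauss_mask = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]]) / 16.0

print(gauss_mask.sum())   # 1.0 -- the weights sum to 1, preserving dynamic range

# Filtering a flat region of intensity 200 leaves it unchanged
flat = np.full((3, 3), 200.0)
print(np.sum(flat * gauss_mask))   # 200.0
```

The same check applies to the averaging filter with its 1/9 factor; any kernel whose weights sum to 1 cannot push a constant region outside its original intensity range.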
#### 3. Median Filter

The median filter falls under the category of non-linear statistical filters. It is based on rank ordering, so its computation differs from that of Eq. 1 or Eq. 2. What this filter does is simply sort the pixel values within the w x w window and replace the center value with the median, i.e. the ((w² + 1)/2)-th value in the sorted order. To illustrate this, suppose the values of the pixels under the 3 x 3 window are as follows:
20 15 19
30 50 25
21 20 19
Rearranging these pixel values in ascending order gives:
15, 19, 19, 20, 20, 21, 25, 30, 50. The median is the 5th value, which is 20. Thus the center pixel value 50 is replaced by the value 20. Because this operation cannot be expressed as a weighted sum of the input pixels, the median filter is a non-linear operation.
The median filter works tremendously well for salt-and-pepper noise, where no lowpass filter would work. It preserves important information such as edges very well, and it can be applied several times without degrading the image much.
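The worked example above can be reproduced directly in Python/NumPy (again, just a sketch of the operation, not the required Matlab code):

```python
import numpy as np

# The 3 x 3 window from the worked example above
window = np.array([[20, 15, 19],
                   [30, 50, 25],
                   [21, 20, 19]])

values = np.sort(window.ravel())     # 15 19 19 20 20 21 25 30 50
median = values[len(values) // 2]    # the 5th (middle) value, index 4
print(median)   # 20 -- the noisy 50 at the center is replaced by 20
```

Note that the outlier 50 disappears entirely rather than being averaged in, which is why the median filter handles salt-and-pepper noise so much better than the linear filters above.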
### Frequency-Domain Filtering Approach
An alternative way to combat noise is to perform the filtering process in the frequency domain. To do this, the image to be processed must be transformed into the frequency domain using the 2-D Fourier transform, and the filter must also be designed in the frequency domain. Once the filter is ready, it is multiplied with the transformed image (convolution in the spatial domain maps to multiplication in the frequency domain – remember? You learnt this in your DSP subject, right?). This operation is illustrated in Eq. 3:

G(k,l) = H(k,l) F(k,l)        Eq. (3)

where H(k,l) is the 2-D lowpass filter, F(k,l) is the transformed image, and G(k,l) is the transformed output. Then, to obtain the filtered image, the inverse Fourier transform is performed on G(k,l). This results in g(x,y), which is viewable. This is shown in Eq. 4:

g(x,y) = F⁻¹{G(k,l)}        Eq. (4)
Unlike spatial-domain filtering, the frequency-domain filtering operation is performed on the entire image at one time. Because of this, the filter in the frequency domain must be the same size as the image itself. Note that the transformed image has exactly the same size as its spatial-domain counterpart.
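The whole transform–multiply–inverse-transform pipeline of Eq. 3 and Eq. 4 can be sketched in Python/NumPy as follows (the lab itself uses Matlab's fft2/ifft2; the function name `freq_filter` is my own):

```python
import numpy as np

def freq_filter(img, H):
    """Filter an image in the frequency domain: transform with fft2,
    multiply point-wise by the filter H (Eq. 3), then transform back
    with the inverse transform (Eq. 4). H must be the same size as the
    image and is assumed to be defined with its origin at the center."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))  # origin shifted to center
    G = H * F                                            # Eq. (3): point-wise product
    g = np.fft.ifft2(np.fft.ifftshift(G))                # Eq. (4): back to spatial domain
    return np.real(g)                                    # discard tiny imaginary residue
```

As a sanity check, an all-pass filter `H = np.ones(img.shape)` should return the original image unchanged.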
Let's analyze further what happens when an image undergoes the Fourier transform process. Figure 6 shows a spatial-domain image and its frequency-domain representation.

Figure 6 – A spatial-domain image (left) and its frequency-domain representation (right)
Note that in the frequency-domain representation, the position of the origin (0,0) has been shifted to the middle of the image, whereas in the spatial-domain image the origin is at its usual position (the top-left corner). Please refer to the supplementary notes on how to shift the origin to the center of the image. The center of this image is thus the location of the DC component of the image and has the largest value (shown as the brightest spot). Surrounding this center are the low-frequency components of the image, and it can be seen that most of the image's energy is concentrated at low frequencies. Moving away towards the borders of the image are the locations of the high-frequency components. These are the frequencies representing the details of the image, such as edges and boundaries.
Figure 7 shows an image with sinusoidal noise. In the frequency domain, this noise appears as two bright spots (shown in red circles).

Figure 7 – A noisy image (left) and its frequency-domain representation (right)
Hence, to remove these two bright spots while retaining the low-frequency components, a lowpass filter must be used. Here, two types of lowpass filter will be considered:
1. Butterworth lowpass filter
2. Gaussian lowpass filter
#### Butterworth Lowpass Filter
The magnitude-squared function for the 2-D Butterworth filter is given as follows:

|H(k,l)|² = 1 / (1 + [D(k,l)/D0]^(2N))

where
- N is the filter order,
- D0 is the filter cutoff locus (distance from the origin),
- D(k,l) is the filter characteristic (shape) with its center at the origin.

Normally a circular shape is used, given by

D(k,l) = √(k² + l²).

To adjust the filter so that it is centered exactly at the center of the image (assuming the size of the image is M x N), the following equation is used:

D(k,l) = √((k – M/2)² + (l – N/2)²).

Figure 8 shows the 3-D perspective plot of this lowpass filter for N = 2.

Figure 8 – Butterworth lowpass filter for N = 2

To design a Butterworth filter so that the cutoff frequency is at –3 dB of its maximum, the following equation should be used:

H(k,l) = 1 / (1 + (√2 – 1)[D(k,l)/D0]^(2N))
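The magnitude-squared function above can be built as a 2-D array the same size as the image, with the centered distance D(k,l). Here is an illustrative Python/NumPy sketch (the function name `butterworth_lowpass` is my own; the lab work itself is in Matlab):

```python
import numpy as np

def butterworth_lowpass(M, N, D0, order):
    """Butterworth lowpass filter of size M x N, centered at (M/2, N/2),
    built from the magnitude-squared function 1 / (1 + (D/D0)^(2*order))."""
    k = np.arange(M).reshape(-1, 1)                     # row indices
    l = np.arange(N).reshape(1, -1)                     # column indices
    D = np.sqrt((k - M / 2) ** 2 + (l - N / 2) ** 2)    # distance from center
    return 1.0 / (1.0 + (D / D0) ** (2 * order))

H = butterworth_lowpass(64, 64, D0=16, order=2)
print(H[32, 32])   # 1.0 at the center: the DC component passes untouched
```

At the cutoff locus D(k,l) = D0 the response drops to 0.5 (half power), and a higher `order` makes the transition between passband and stopband sharper.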
#### Gaussian Lowpass Filter

The magnitude spectrum expression for this filter is given as follows:

H(k,l) = exp(–D²(k,l) / (2·D0²))

The definitions of the variables are the same as for the Butterworth design. Figure 9 shows the 3-D perspective view of this Gaussian lowpass filter.

Figure 9 – Gaussian lowpass filter
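The Gaussian lowpass filter can be constructed in the same way as the Butterworth filter, just with a different response formula. A Python/NumPy sketch (`gaussian_lowpass` is my own name for it):

```python
import numpy as np

def gaussian_lowpass(M, N, D0):
    """Gaussian lowpass filter H(k,l) = exp(-D^2 / (2*D0^2)) of size M x N,
    centered at (M/2, N/2); D(k,l) is the same centered distance used in
    the Butterworth design."""
    k = np.arange(M).reshape(-1, 1)
    l = np.arange(N).reshape(1, -1)
    D2 = (k - M / 2) ** 2 + (l - N / 2) ** 2   # squared distance from center
    return np.exp(-D2 / (2.0 * D0 ** 2))

H = gaussian_lowpass(64, 64, D0=16)
print(H[32, 32])   # 1.0 at the center, decaying smoothly toward the borders
```

Unlike the Butterworth filter, the Gaussian response has no order parameter; its rolloff is controlled entirely by D0, and it never exhibits ringing in the filtered image.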
### Matlab Implementation in Image Processing Problems
Please refer to Lab 1 for help and tips on how to read and display images.
The next useful functions needed in this lab are the Fourier and inverse Fourier transforms for filtering in the frequency domain. For this, Matlab provides the fft2 and ifft2 functions for the 2-D transform operations. Next, to ensure that the origin of the transformed image is in the middle of the image, the fftshift command can be used. However, to use these functions you need to change the data type from uint8 to double. This can be done simply by casting the data using the data-type command. As an example:

% variable x by default will be uint8 data type
>> x = double(x); % now variable x is of double precision
One thing about Matlab programming is that the software is not optimized for performing for-loop operations. One will notice that a big nested for loop (as is normally the case in image processing) takes much longer than the equivalent matrix operation. There are, however, a few commands that can be used to perform spatial-domain filtering: blkproc, colfilt, and nlfilter. Further help can be obtained from Matlab's help facility for each of these commands.
### Tasks

Download the images shown in Figure 10 below from the DSP PBL Lab website. Write your own Matlab code to perform the following tasks. You are not allowed to use any function from the Image Processing Toolbox except for the imread and imshow functions.
1. Perform the spatial-domain filtering operation on each of these images using the averaging filter, the Gaussian filter, and the median filter; in other words, apply all three filters to each image. Discuss the results obtained and draw some conclusions based on your observations.
2. Perform the frequency-domain filtering operation on each of these images using the Butterworth lowpass filter and the Gaussian lowpass filter with a fixed D0. Use several filter orders (say N = 1, 2, 3, and 4, but do not exceed N = 7). Draw some conclusions based on your experiments with different filter orders. Also compare the performance of the Butterworth and Gaussian filters; the comparison can be done simply by the visual appearance of the results after filtering. Note: you may need to perform a contrast-stretching operation after the inverse transform in order to obtain a visually good image, but make sure the parameters you apply are the same throughout the image.
3. Compare the results you obtained using the spatial-domain filtering operations in part 1 with those you obtained using the frequency-domain filtering operations. Your discussion should address each image individually rather than being a general one.
4. Try different kernel values for the averaging and Gaussian filters and apply them to the same image set. Make sure the scaling factor is chosen accordingly, as mentioned before. Discuss the results you observed.
5. Finally, using N = 2, try varying D0 over several values (smaller and bigger than the one you used in part 2) and observe the results. Discuss these results.