Lab: Edge Detection

CSC 295 - Computer Vision - Weinman



Summary: We calculate image gradients to find edges at multiple scales, strengths, and orientations.

Due: 2/17

Deliverables

Preparation

Load the following file from the MathLAN and convert it from 8-bit to double values:
/home/weinman/courses/CSC295/images/bug.png

Exercises

A. Gradient Components

  1. Create a 1-D Gaussian kernel with variance 4 using gkern.
    gauss = gkern(4);
  2. We can also create a 1-D derivative of Gaussian kernel with variance 4 by giving another argument to gkern that specifies how many times we want to take the derivative:
    dgauss = gkern(4,1);
    (For completeness, you could have given the argument 0 in the previous exercise to indicate taking the derivative zero times.)
  3. Recall that conv2 accepts two separable kernels for filtering: one to operate along rows and the other to operate along columns. Use your two Gaussian kernels to calculate the partial derivative of the bug image along the rows (that is, the horizontal partial derivative). Have it return an answer only for the valid portions of the convolution (the last parameter should be 'valid' rather than 'same').
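    For instance, if the converted image from the Preparation step is stored in a variable named img (your name may differ), one possible call is
    dx = conv2(gauss, dgauss, img, 'valid');
    since conv2(u,v,A,'valid') convolves the columns of A with u and the rows of the result with v, here smoothing down the columns while differentiating along the rows.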
  4. Display your result as an image, remembering that the partial derivative may be positive or negative. Thus, you will need to tell imshow to relax its displaying conventions.
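    For instance, passing an empty display range tells imshow to map the minimum value to black and the maximum to white:
    imshow(dx, []);  % dx is an example name for the derivative from A.3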
  5. Save your image using a combination of imwrite and imadj (a custom procedure that linearly remaps the values so that the lowest becomes 0 and the highest becomes 1). For example,
    imwrite(imadj(X),'mypic.png');
  6. Interpret your result. Where is it bright? Where is it dark? Where is it gray? In all three cases: why?
  7. Calculate the partial derivative of the image along the columns in a similar fashion. Display and save your result.
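    Swapping the roles of the two kernels accomplishes this; a sketch (again assuming the image is in a variable such as img, with an example file name):
    dy = conv2(dgauss, gauss, img, 'valid');
    imshow(dy, []);
    imwrite(imadj(dy), 'bug-dy.png');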
  8. Inspect your result, making similar observations as in A.6.

B. Gradient Magnitude

  1. Create an image representing the magnitude of the gradient at each pixel location. (Recall that the magnitude of a vector is the root of the sum of the squares of its components.)
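    With the partial derivatives in variables such as dx and dy, the elementwise operations might look like
    mag = sqrt(dx.^2 + dy.^2);  % .^ squares each element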
  2. Display, save, and inspect your image. Where are the strongest responses? How do they correspond to the values of the partial derivatives?
  3. It is typical to place a threshold on the gradient magnitude so that the edge detection result is binary. Use a vectorized "greater than" operation on the gradient magnitude and display your thresholded image.
    You may wish to start with a threshold of 0.02 and adjust it to find what you think is a good threshold for a nice edge image.
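    For instance, with the magnitude in a variable such as mag:
    edges = mag > 0.02;  % logical image: true where the gradient is strong
    imshow(edges);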

C. Gradient Orientation

  1. As we discussed in class, use atan2 to create an image representing the orientation/direction of the gradient at each pixel location.
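    With the partial derivatives in variables such as dx and dy, this could be
    theta = atan2(dy, dx);
    (Keep in mind that row indices increase downward in image coordinates, so the sign convention of the resulting angles depends on how you interpret dy.)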
  2. Recall that the gradient orientation may be in the range [-pi,pi]. We can tell imshow to treat these as its bounds for display (black to white) explicitly, i.e.,
    imshow(X,[-pi pi]);
    Display your orientation image in this fashion.
  3. Black and white values are not very intuitive for interpreting orientation. Fortunately, Matlab has a built-in way of changing the way actual image values are mapped to display values. Much like you've done before manually, you can explicitly change the map using the command colormap. This requires an argument. The procedure hsv creates a circular color map that is useful for such visualizations. Apply this to your figure:
    colormap(hsv); % Change the map of the 
                   % current figure to "hsv"
    colorbar;      % Add a color bar to the figure 
                   % to aid interpretation
  4. Use print to save your orientation image. (Recall that afterward you may wish to use
    $ convert -trim in.png out.png
    to tighten up the boundaries.)
  5. Spend some time analyzing the orientations. Where do the colors indicate that the gradient points horizontally? "Diagonally?" Vertically? Be sure to distinguish direction (i.e., left versus right). Do these make sense with respect to the image contents?

D. Gradient Orientation Revisited

Having the orientation displayed as a bright color where there is no strong edge is rather misleading. Instead, we'd like to be able to display no color where there is no edge, and have the colors on the strong gradients indicate the orientation, as in Part C. We can do this by using an alternative color representation. Instead of thinking about the color contributions of red, green, and blue components (the RGB color model), we can separate the color into three different components:
Hue: the pure chroma
Saturation: the amount of color present
Value: the perceived brightness
Like RGB, we can model HSV colors with three components, each in the range 0-1. Matlab knows how to convert an image in HSV colorspace into an RGB image for display. We will use this to encode our orientation image in a more meaningful fashion.
Color will still be used to encode orientation, but we will use the saturation channel to encode the strength of the edge at each location. Thus, if there is no edge, there will be no saturation and thus no color. We can keep the value at a constant brightness, perhaps white.
  1. To represent the hue, rescale your previous orientation image so that instead of the range [-pi,pi] it has the range [0,1].
    Hint: To rescale a quantity x from [a,b] to [0,1], something you will do quite often, you can use the transform
    xrescale=(x-a)/(b-a).
  2. To represent the saturation, create a rescaled version of the gradient magnitude image so that the maximum value in the rescaled version is 1. Hint: Here we can simply divide by the largest value.
  3. To represent the value, create an image of all ones the same size as your other images.
  4. To create the final HSV image, concatenate these M×N matrices along the third dimension into one M×N×3 image using the cat procedure, i.e.,
    W = cat(3,X,Y,Z);
  5. Convert your new HSV image to an RGB image using hsv2rgb, i.e.,
    B = hsv2rgb(A);
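  Putting Part D together, one possible sketch of the whole pipeline (variable names such as theta and mag are only examples carried over from earlier parts):
    hue = (theta + pi) / (2*pi);  % orientation rescaled from [-pi,pi] to [0,1]
    sat = mag / max(mag(:));      % edge strength as saturation
    val = ones(size(mag));        % constant brightness
    B = hsv2rgb(cat(3, hue, sat, val));
    imshow(B);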

E. Edge Detection and Scale

Now that you have analyzed all the parts, we want to investigate how detecting edges depends on the scale (i.e., Gaussian standard deviation) used to calculate the gradients.
  1. Place your commands for creating the Gaussian kernels, partial derivatives, gradient magnitude, and weighted orientation (the RGB version from D.5) inside a for loop that calculates these for the following Gaussian variances: 1, 2, 4, 16, and 32.
  2. Inside your loop, add commands to save your rescaled magnitude image (from D.2) and color orientation image as PNG files.
    Hint: To automatically create appropriate file names, you can use num2str to convert from numbers to strings as done in the image formation lab.
  3. We also need to threshold our edges so that we have binary detections. Inside your loop, add another for loop over several gradient magnitude thresholds: 2/256, 4/256, 8/256, and 12/256.
    Hint: For an easier to read file name, you may wish to loop over the numerators and use the denominator when calculating the threshold.
  4. The command subplot(m,n,p) breaks a figure window into an m×n array of axes and sets the pth axis as the current one. For instance, the following table shows the values of p where m is 2 and n is 3.
    1 2 3
    4 5 6

    Inside your inner loop (over thresholds), add commands to select the appropriate subplot and display the corresponding thresholded (binary) gradient magnitude image.
    After your inner loop, use print to save the figure's array of (binary) thresholded images as a PNG file.
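One possible skeleton for this part (variable and file names are only suggestions, and the comment ellipsis stands in for your own Part C and D commands):
    variances = [1 2 4 16 32];
    numers = [2 4 8 12];          % threshold numerators
    for v = variances
      gauss  = gkern(v);
      dgauss = gkern(v,1);
      dx  = conv2(gauss, dgauss, img, 'valid');
      dy  = conv2(dgauss, gauss, img, 'valid');
      mag = sqrt(dx.^2 + dy.^2);
      % ... orientation and HSV-to-RGB images as in Parts C and D ...
      imwrite(mag/max(mag(:)), ['mag-' num2str(v) '.png']);
      for t = 1:length(numers)
        subplot(2, 2, t);         % a 2x2 grid fits the four thresholds
        imshow(mag > numers(t)/256);
      end
      print('-dpng', ['edges-' num2str(v) '.png']);
    end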
You should now have a total of 15 = 5 (magnitude) + 5 (orientation) + 5 (edge) images.

F. Analysis

There should not be any more Matlab work for you to do. All that remains is some analysis of your results.
  1. How do the magnitude images change as the scale increases?
  2. How do the orientation images change as the scale increases?
  3. How do the detection images change with scale and threshold? Note: these are not independent; consider them together. What happens as each changes? (For example, consider the four extrema along both axes.)

Copyright © 2010 Jerod Weinman.
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.