
Project 1: Image Manipulation

Project Description

In this project, you will explore per-pixel manipulation in order to create a series of commonly-used filters. You will also implement basic input-handling and image buffering. Your file will open an image within the Processing window upon clicking “Run.” Filters will be applied to this image when the user clicks the number keys. Each filter will be applied individually. That is, filters should not stack, and you will include a key that returns the image to its original state.

Basic Requirements

You will implement functionality for the following actions:

  • Image Loading
  • Keyboard Input Handling
  • A kernel/convolution matrix

You will then use this functionality to implement the following image filters:

  • Grayscale
  • Contrast
  • Gaussian blur approximation
  • Edge detection

Note: do not use the set() or get() functions that are built into Processing. Part of this project is learning how to access and manipulate image buffers directly to better build your understanding of how the graphics pipeline works.
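To make the direct-buffer approach concrete, here is a minimal plain-Java sketch of the underlying arithmetic (the class and method names are illustrative, not part of the assignment). In Processing, you call loadPixels() and work with the pixels[] array, which stores each pixel as a packed 32-bit ARGB int in row-major order:

```java
// Illustrative sketch of direct pixel-buffer math. In Processing, loadPixels()
// fills pixels[], a 1-D row-major array of packed 32-bit ARGB color ints;
// updatePixels() writes your changes back.
class PixelBuffer {
    // Map 2-D coordinates (x, y) to an index in the 1-D pixel array.
    static int index(int x, int y, int width) {
        return y * width + x;
    }
    // Extract one 0-255 channel from a packed ARGB int.
    static int red(int argb)   { return (argb >> 16) & 0xFF; }
    static int green(int argb) { return (argb >> 8) & 0xFF; }
    static int blue(int argb)  { return argb & 0xFF; }
    // Repack channels into a fully opaque ARGB int.
    static int pack(int r, int g, int b) {
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }
}
```

The bit shifts and masks here do the same work that set() and get() would otherwise hide from you.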

Image Loading

Provide functionality for loading an image. This code should be flexible and load images of multiple sizes, but the Processing window itself should map to the size of any given image. This can be done using a variation of the following code within the setup() function:

surface.setResizable(true);
PImage image = loadImage("foo.png");
surface.setSize(image.width, image.height);

Note that “foo.png” here is a placeholder name (“foo”, “bar”, and “baz” are common placeholder names in computer science). You should replace it with the name of the actual image you want to load.

Keyboard Input Handling

The user should be able to press the keys 1-4 as well as 0. The functionality of each key is described below with the filter it applies. If you want to make additional filters, include additional key inputs and document all extra work within your assignment doc, so the graders know what to expect.
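One way to keep the dispatch readable is a single switch over the pressed key. In Processing you would read the built-in `key` variable inside keyPressed(); the filterFor() helper below is a hypothetical name, shown in plain Java only to sketch the mapping:

```java
// Hedged sketch of key-to-filter dispatch. In a real Processing sketch, the
// switch would live in keyPressed() and trigger the corresponding filter.
class KeyDispatch {
    static String filterFor(char key) {
        switch (key) {
            case '1': return "grayscale";
            case '2': return "contrast";
            case '3': return "gaussian-blur";
            case '4': return "edge-detect";
            case '0': return "original";
            default:  return "none"; // ignore any other keys
        }
    }
}
```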

An Image Kernel

This convolution matrix will be applied to all pixels when filters requiring neighborhood information are called. Example code is included in the slides, but there are two additional caveats:

  1. Your code should handle the edge pixels in some reasonable way – a black border is not acceptable. Potential solutions are discussed in the slides.
  2. Your code should account for the possibility of the convolution outputting negative values or values that are above the 255 “limit” within each color channel. Failing to do this will lead to wrong (and sometimes outright bizarre!) image manipulations.
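The second caveat boils down to clamping every convolution result into the valid channel range before writing it back. A minimal sketch (clamp() is an illustrative name):

```java
// Clamp a convolution result into the valid [0, 255] channel range.
// Skipping this step causes wrap-around artifacts in the output image.
class Clamp {
    static int clamp(int value) {
        if (value < 0) return 0;
        if (value > 255) return 255;
        return value;
    }
}
```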

Grayscale

This filter will convert a color image to a grayscale image when the user presses “1”. One way to do this is by averaging across all color channels on a per-pixel basis, then assigning that average to all three color channels. This roughly preserves the value even though hue and saturation are discarded.
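The averaging approach reduces to one line of arithmetic per pixel, sketched here in plain Java:

```java
// Sketch of the averaging approach: a pixel's gray value is the integer mean
// of its three channels, which is then written back to all three channels.
class Grayscale {
    static int gray(int r, int g, int b) {
        return (r + g + b) / 3;
    }
}
```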

Contrast

This filter will add higher contrast to an image when the user presses “2”. One way to do this is by calculating the brightness (value) of each pixel. If the pixel’s brightness is above a threshold value, additional “brightness” is added; if it is below the threshold, a corresponding amount is subtracted.

You should experiment to find a good threshold value for testing brightness (or multiple threshold values for more subtle results). Choose the brightness to be added and subtracted based on the results of that test.
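The thresholding step can be sketched as follows. The threshold of 128 and step of 20 used in the tests are placeholders, exactly the kind of values you should tune by experiment:

```java
// Sketch of threshold-based contrast: brighten pixels above the threshold,
// darken pixels below it, and clamp so values stay in [0, 255].
class Contrast {
    static int adjust(int brightness, int threshold, int step) {
        int result = brightness >= threshold ? brightness + step
                                             : brightness - step;
        return Math.max(0, Math.min(255, result));
    }
}
```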

Gaussian Blur Approximation

This filter will approximate a Gaussian Blur when the user presses “3”. Since blurring requires neighborhood knowledge of pixels, it will rely on your kernel, or convolution matrix, to extract and manipulate the necessary pixel information. You will need to buffer the updated image in order to prevent your changes to the image from influencing its neighbors.

One possible Gaussian approximation is to use a weight of 0.25 on the current pixel, 0.125 on cardinal neighboring pixels (north, south, east, west), and 0.0625 on diagonal neighboring pixels (north-east, north-west, south-east, south-west), but you can experiment with these values as long as it resembles a Gaussian drop off (i.e. the most weight is on the current pixel, the total weight equals 1, and cardinal neighbors are weighed more heavily than the more “distant” diagonal neighbors).
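Those suggested weights form a 3x3 kernel. The sketch below applies it at a single interior pixel of one channel; a full implementation loops over every pixel and writes results into a separate buffer so reads always come from the original image:

```java
// Sketch of the suggested 3x3 Gaussian-approximation kernel applied at one
// interior pixel of a single channel. The nine weights sum to 1.
class GaussianBlur {
    static final double[][] KERNEL = {
        {0.0625, 0.125, 0.0625},
        {0.125,  0.25,  0.125 },
        {0.0625, 0.125, 0.0625}
    };
    // channel holds 0-255 values; (x, y) must not be a border pixel.
    static int blurAt(int[][] channel, int x, int y) {
        double sum = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                sum += KERNEL[dy + 1][dx + 1] * channel[y + dy][x + dx];
        return (int) Math.round(sum);
    }
}
```

Note that a uniform region stays unchanged (because the weights sum to 1), which is a quick sanity check for any weights you experiment with.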

Edge Detection

This filter will perform edge detection using the Sobel operators when the user presses “4”. This filter also requires neighborhood knowledge of pixels, and therefore will also use your kernel. Again, you will need to buffer the updated image to prevent any changes in pixels from influencing their neighbors.

Remember from class that there are two Sobel operators: horizontal and vertical. To combine them, you will determine each pixel’s updated value in both the horizontal and vertical directions. You can then take the magnitude and assign that value to the final pixel output. To compute the magnitude, take the square root of the sum of the two squared convolution values.
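The combination step can be sketched as follows for one channel at one interior pixel. The operator matrices below follow one common sign convention; the magnitude is the same either way:

```java
// Sketch of the Sobel step at one interior pixel of a single channel:
// convolve with the horizontal and vertical operators, then take the
// magnitude sqrt(gx^2 + gy^2) and clamp it to the channel limit.
class Sobel {
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    static int magnitudeAt(int[][] channel, int x, int y) {
        int gx = 0, gy = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int v = channel[y + dy][x + dx];
                gx += GX[dy + 1][dx + 1] * v;
                gy += GY[dy + 1][dx + 1] * v;
            }
        }
        int mag = (int) Math.round(Math.sqrt((double) gx * gx + (double) gy * gy));
        return Math.min(255, mag);
    }
}
```

A flat region yields magnitude 0, while a strong vertical edge saturates to 255 after clamping.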

No Filter

When the user presses “0” the image should return to its original form. To do this, you might want to create an additional PImage buffer to hold the original image. This can then be swapped in place of the displayed image (which has the filters applied) when the user requests the original for display.

Project Report

Create a project report as specified in the Project Report rules.

Language Model Log

You should either submit a log of your language model interactions, or a statement that you did not use LMs for this project. In either case, you will need to submit a file in your project describing your LLM usage. See the rules on language model logs for details.

Extra Credit

Canny Edge Detection

The Sobel operators, while effective, tend to create lots of extra non-edge “noise” in the image, and to detect edges that are very wide. The Canny edge detection algorithm, published in 1986, is a more sophisticated edge detection algorithm which removes many of the spurious edges and, more importantly, only marks the center of each edge.

Implement a Canny edge detection filter in your program which outputs edges in white (and non-edge pixels in black). There are multiple variants of the Canny detector: in your report, document your implementation well enough that someone could reasonably reproduce your results without looking at your code. Also document how to enable/use this filter for a user of the program.

You may not use external libraries which implement Canny edge detection.

A Note on Code Organization

Please put a little bit of thought into how you want to organize your code. In particular, do not put all of your code in the setup() and draw() functions.

A reasonable setup might be to have a draw() function which figures out which filter to display, and then calls other functions which are responsible for computing that filter.

We don’t expect industrial-quality code organization, but any code which is excessively difficult to read may lose points.

Submission

Your submission should include the following:

  1. A Processing file named youreid_project1.pde that loads an image upon clicking “Run.” The sketch should support keyboard input for all filters listed within the project description.
  2. A sample image that will automatically load and display all filter functionality during testing.
  3. Your project report
  4. Your language model log

Place this all into a directory named youreid_project1, zip the directory, and submit it to Canvas.