Required Features for Ray Tracer Milestone 1


Ray Tracer Milestone 1

Due: Feb 6, 2024

The assignment is broken down into several features, described below. Each feature is worth a number of points and will be graded separately (although later features may build on and require a working implementation of earlier features). Your code should be correct and free of segfaults, deadlocks, or other buggy behavior. You will not be graded on software engineering (use design patterns and unit tests if they help you), but unreadable code is less likely to receive partial credit. You should not worry too much about performance or optimizing your code, but egregiously inefficient implementations of any of the features will be penalized.

(0 pts) Sanity Check and Build

Your submitted archive should pass the sancheck.py test and build smoothly with cmake -DCMAKE_BUILD_TYPE=Release. Code that does not build or run will lose anywhere from 0 to 100 points, depending on the time and effort it takes the TA to fix your code. It is your responsibility to thoroughly test your submission, including any changes you make to the build scripts.

You may notice some multi-threading and texture-mapping code dummied out in the starter code—you will implement texture-mapping and multi-threading in Milestone II. These are not required features of Milestone I.

(60 pts) Implement the Whitted illumination model

The Whitted illumination model consists of the following:

Trace primary rays through the center of each image pixel. Reflections, refractions, and shadows will require you to recursively trace secondary rays. The recursion depth for reflections and refractions is limited by the “depth” setting in the ray tracer. (This means that to see reflections and refraction, you must set the depth to be greater than zero!)
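
A minimal sketch of that recursion is shown below. All of the names in it (Vec3, Ray, Isect, scene, shade, reflectRay, refractRay, and the kr/kt material fields) are placeholders and may differ from the starter code's actual interfaces; treat it as an outline of the control flow, not an implementation to copy.

    // Illustrative Whitted-style recursion; every type and member name here is a
    // placeholder, not necessarily the starter code's real API.
    Vec3 traceRay(const Ray& r, int depth) {
        Isect i;
        if (!scene->intersect(r, i))
            return Vec3{0.0, 0.0, 0.0};          // ray escapes the scene

        Vec3 color = shade(r, i);                // direct Phong terms + shadow rays

        if (depth > 0) {                         // the "depth" setting in the UI
            Ray reflected = reflectRay(r, i);    // mirror the ray about the surface normal
            color = color + i.material.kr * traceRay(reflected, depth - 1);

            Ray refracted;
            if (refractRay(r, i, refracted))     // false on total internal reflection
                color = color + i.material.kt * traceRay(refracted, depth - 1);
        }
        return color;
    }

Primary rays start this recursion with the full depth value, so a depth of zero yields only direct illumination.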

You only need to handle directional and point light sources, i.e., no area lights, but you should be able to handle multiple lights. To render shadows, trace shadow rays toward each light and look for intersections with objects along the way. If a shadow ray intersects a semi-transparent object, you should attenuate the light’s color, thereby rendering partial (color-filtered) shadows. Do not reflect or refract shadow rays (correctly handling refraction of a light source through an intermediate object is not at all trivial in a Whitted-style ray tracer, and is best implemented using path tracing or another global illumination technique).
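
One way to compute that attenuation is sketched below, again with placeholder types and members (Scene, Ray, Isect, kt, isOpaque, Ray::at); it is an outline of the idea rather than the starter code's interface.

    // Walk a straight shadow ray from the shaded point toward the light,
    // multiplying in the transmissive color (kt) of every translucent occluder.
    // All names are placeholders.
    Vec3 shadowAttenuation(const Scene& scene, Vec3 point,
                           const Vec3& dirToLight, double distToLight) {
        const double eps = 1e-4;                 // offset to avoid self-intersection
        Vec3 atten{1.0, 1.0, 1.0};               // start fully lit
        while (true) {
            Ray shadowRay(point + eps * dirToLight, dirToLight);
            Isect i;
            if (!scene.intersect(shadowRay, i) || i.t > distToLight)
                break;                           // nothing (else) blocks the light
            if (isOpaque(i.material))            // kt == 0: fully in shadow
                return Vec3{0.0, 0.0, 0.0};
            atten = atten * i.material.kt;       // filter the light's color
            point = shadowRay.at(i.t);           // step past this occluder, no bending
            distToLight -= i.t + eps;            // approximate remaining distance
        }
        return atten;
    }

For directional lights, distToLight can simply be treated as infinite.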

Some equations that will come in handy when writing your shading and ray tracing algorithms are available online at Don Fussell’s course page.

There are several cases where the “correct” behavior of the ray tracer is under-specified, especially given pathological input geometry, and your solution may differ from the reference solution because of design decisions you made differently. The ray tracing algorithm is, after all, a crude but efficient approximation of true light transport in the physical world. You are not required to exactly match the reference solution on all scenes to receive full credit, provided that you document your design decisions in your README and that they are reasonable. (On the flip side, implementations that match the reference solution will always be graded as correct, even if the reference solution is buggy.) Some sticking points that have arisen in previous semesters:

(10 pts) Triangle-Ray Intersection

The starter code has no triangle-ray intersection implementation. Fill in the triangle intersection code so that your ray tracer can display triangle meshes.
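
A common choice here is the Moller-Trumbore algorithm; a self-contained sketch follows. The Vec3 struct and helpers are defined locally only for illustration and are not the starter code's vector class.

    #include <cmath>

    // Throwaway vector helpers for this sketch; the starter code has its own vector class.
    struct Vec3 { double x, y, z; };
    static Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // Moller-Trumbore ray/triangle intersection.  Returns true and fills in t and the
    // barycentric coordinates (u, v) when the ray p + t*d hits triangle (v0, v1, v2).
    bool intersectTriangle(const Vec3& p, const Vec3& d,
                           const Vec3& v0, const Vec3& v1, const Vec3& v2,
                           double& t, double& u, double& v) {
        const double eps = 1e-8;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 pv = cross(d, e2);
        double det = dot(e1, pv);
        if (std::fabs(det) < eps) return false;   // ray is parallel to the triangle plane
        double invDet = 1.0 / det;
        Vec3 tv = sub(p, v0);
        u = dot(tv, pv) * invDet;
        if (u < 0.0 || u > 1.0) return false;
        Vec3 qv = cross(tv, e1);
        v = dot(d, qv) * invDet;
        if (v < 0.0 || u + v > 1.0) return false;
        t = dot(e2, qv) * invDet;
        return t > eps;                            // reject hits behind the ray origin
    }

The barycentric coordinates (u, v) are worth returning from your intersection routine, since the next feature reuses them for interpolation.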

(10 pts) Phong Interpolation of Normals

The skeleton code doesn’t implement Phong interpolation of normals; you will need to add this yourself (only for meshes that provide per-vertex normals). Some objects also have per-vertex materials, and you should similarly apply barycentric interpolation (but without renormalization) to those materials.
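
If your intersection routine reports barycentric coordinates (u, v) as in the sketch above, the normal interpolation might look like the following, reusing the throwaway Vec3 struct from that sketch (still not the project's real vector type):

    #include <cmath>

    // Weight the three per-vertex normals by the barycentric coordinates of the hit:
    // (1 - u - v) at v0, u at v1, v at v2, matching the intersection sketch above.
    Vec3 interpolateNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                           double u, double v) {
        double a = 1.0 - u - v;
        Vec3 n{ a * n0.x + u * n1.x + v * n2.x,
                a * n0.y + u * n1.y + v * n2.y,
                a * n0.z + u * n1.z + v * n2.z };
        // The blended normal is generally not unit length, so renormalize it.
        double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };
    }

Per-vertex material properties would be blended with the same weights, but without the final normalization step.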

(20 pts) Basic Anti-aliasing

Once you’ve implemented the shading model and can generate images, you will notice that the images you generate are filled with “jaggies.” You should implement anti-aliasing by super-sampling and averaging down. You should support a variable number of samples per pixel (1, 4, 9, or 16 samples, i.e., 1, 2, 3, or 4 samples per pixel dimension) as specified by the GUI slider. You need only implement a box filter (i.e., simple averaging) to compute the final pixel color; more sophisticated anti-aliasing methods can be implemented for extra credit.
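
A minimal sketch of box-filtered super-sampling follows. Both samplePixel and tracePixelRay are placeholder names; tracePixelRay is assumed to build a camera ray through normalized image coordinates and trace it with the recursive routine described earlier.

    // Trace s x s rays through evenly spaced sub-pixel centers and average them.
    // "s" is the samples-per-dimension value from the GUI slider (1, 2, 3, or 4).
    Vec3 samplePixel(int px, int py, int width, int height, int s) {
        Vec3 sum{0.0, 0.0, 0.0};
        for (int j = 0; j < s; ++j) {
            for (int i = 0; i < s; ++i) {
                // Center of sub-pixel (i, j) within pixel (px, py), in [0, 1] image coords.
                double x = (px + (i + 0.5) / s) / width;
                double y = (py + (j + 0.5) / s) / height;
                sum = sum + tracePixelRay(x, y);
            }
        }
        return sum * (1.0 / (s * s));            // box filter: simple average
    }

With s = 1 this degenerates to tracing a single ray through the pixel center, which is the behavior you already have before adding anti-aliasing.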