Project 2: Menger Sponge
Assigned: Thursday, Sept 29, 2016
Due: Thursday, Oct 13, 2016 (by 11:59PM)
[Figure: A reference implementation of the Menger sponge, including shading of the cube and floor.]
In this project, you will build an OpenGL program that shows a
procedurally-generated fractal, the Menger sponge, with interactive
mouse and keyboard camera controls. You will also be introduced to
texturing by writing a simple shader that draws a checkerboard
pattern on an infinite ground plane.
Starter Code
As before, we have provided starter code, written in C++ using
OpenGL and several OpenGL helper libraries (glm, glfw), as a
starting point for implementing this project. This code is a working
example of how to set up OpenGL and a shader program to draw a single
red triangle on the screen. We have also implemented routines to
detect and respond to keyboard and mouse input using GLFW; you will be
hooking into this code to update the camera in real time. If you
do not see a red triangle when you run the code, your hardware or
drivers do not support the OpenGL features needed for this
assignment. The code has been tested and works on the 3rd floor
GDC lab machines. Finally, we have provided a reference solution
showing how this project should function if you implement it
correctly. If you have questions or doubts about how your program
should behave, the reference solution is a good first place to check.
Building and Running the Starter Code
We have provided cmake scripts to automate compiling and running the
assignment code. Please do not modify the build process as it
has been designed to streamline grading.
To compile your code, type
mkdir build
cd build
cmake ..
make
to generate an executable in the bin subdirectory.
The starter code and assignment have been designed so that you can
complete the project modularly: the starter code contains no camera
controls, but has a hard-coded camera that will allow you to see the
Menger sponge, if you implement it correctly. Similarly, you can
implement and test the camera control portion of this assignment
before adding the ground plane or modifying any shaders.
Submission Instructions
All submissions must be uploaded to Canvas by the project due date and
time. In exceptional cases where Canvas is not working, you may turn
in the assignment by emailing the TA a Dropbox link, handing him a USB
stick, etc., but there will be no exceptions to the project due
date--late submissions will be penalized per the syllabus late policy.
Submit your project as a single .tgz file. This file should
be free of temporary files, core dumps, etc; you also should not
submit the (large) mesh files included in the starter code. For
example, you can execute the following command from the parent
folder containing the code to create an appropriate .tgz
file:
tar --exclude obj --exclude build --exclude ".*" -czf submission.tgz foldername
where foldername is the name of the folder containing your code.
In addition to your complete source code, the .tgz file
should include a README file containing, at minimum:
- Your name;
- An explanation of any required features that you know are not
working completely correctly, but would like to have considered for
partial credit;
- Anything the spec below asks you to include.
All code ***MUST*** compile and run on the 3rd floor GDC computer lab Linux machines!!!
In particular,
- Do not submit a 7zip of a Visual Studio .sln file;
- Do not include any absolute paths in your Makefiles that point to
your home folder;
- If you use any external libraries (which must be approved in
advance by the instructor) that are not installed by default on the
GDC computers, you must include the library source files with your
submission, and include building of the library in your build process.
Please triple-check that your submission adheres to these
instructions. The instructor or TA reserves the right to return,
ungraded, any submissions that fail to compile or run, or that do not
include a README file.
Tip: After uploading your files to Canvas, it is not a bad idea to
download them onto a different computer and check that the project
still compiles and runs, in case you accidentally left out some
important files from the .tgz. It is also wise to try to upload to Canvas well
ahead of the final deadline: again, there will be no exceptions
to the lateness policy, not even for technical difficulties.
Grading
The assignment is broken down into several features described
below. Each feature is worth a number of points and will be graded
separately (although later features may build on and require a working
implementation of earlier features). Your code should be correct and
free of segfaults, deadlocks, or other buggy behavior. You will
not be graded on software engineering (use design patterns and
unit tests if they help you) but unreadable code is less likely to
receive partial credit. You should not worry too much about
performance or optimizing your code, but egregiously inefficient
implementations of any of the features will be penalized.
Required Features (136 points)
Structure of the Starter Code
If you look at the source code, you will find it is organized into the following sections:
Set up OpenGL context
Load geometry to render
Create Vertex Array Objects and Vertex Buffer Objects
Create shader program
Compile shaders and attach to shader program
Link shader program
Create uniform (global) variables
while true do
Clear screen
Set up camera and light
Tell OpenGL what shader program to use
Tell OpenGL what to render
Render!
Swap buffers
end while
Each of these steps is described below in more detail.
Set up OpenGL context This boilerplate uses a helper library
(glfw) to create an OpenGL context (window) and request version 3.30
of the OpenGL Shading Language, prints some diagnostic information to
the console, and performs some magic to try to make the starter code
run correctly on a wider range of hardware. You can completely ignore
this section for now.
Load geometry to render The geometry in this assignment
consists of only two primitives: vertices, which are points in space,
and triangles that connect triples of vertices. The triangles are
specified using three integers, which are (zero-indexed) indices into
the list of vertices. The starter code declares one vector each to
store the vertices and triangles, and calls the CreateTriangle
function to fill them, which creates three vertices, and a single
triangle connecting them. You will be completely replacing this
geometry with your own Menger sponge geometry.
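To make this concrete, here is a minimal sketch of what this setup
might look like (the vector types follow the glm conventions used by
the starter code; the exact names, such as obj_faces, are
illustrative):

    #include <vector>
    #include <glm/glm.hpp>

    // Sketch only: positions in homogeneous coordinates, triangles
    // as zero-indexed triples of vertex indices.
    std::vector<glm::vec4> obj_vertices;
    std::vector<glm::uvec3> obj_faces;

    void CreateTriangle(std::vector<glm::vec4>& vertices,
                        std::vector<glm::uvec3>& faces) {
        vertices.push_back(glm::vec4(-0.5f, -0.5f, -0.5f, 1.0f));
        vertices.push_back(glm::vec4( 0.5f, -0.5f, -0.5f, 1.0f));
        vertices.push_back(glm::vec4( 0.0f,  0.5f, -0.5f, 1.0f));
        faces.push_back(glm::uvec3(0, 1, 2)); // one triangle connecting them
    }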
Create Vertex Array Objects and Vertex Buffer Objects The
geometry loaded in the previous step is stored in system RAM - in
order to render the vertices and triangles, that data must be
transferred to the GPU. OpenGL will handle this automatically, but you
have to tell OpenGL what the data is and where it lives. A Vertex
Array Object (VAO) fulfills this purpose: it is a container for the
data that needs to be shared between the CPU and GPU. The starter
code creates a VAO and
fills it with three arrays (Vertex Buffer Objects): one for the list
of vertices, one for the list of vertex normals, and one for the list
of triangles. Notice that the calls to glBufferData specify the data
type and number of elements for each of these lists; you should not
need to modify these lines if you add more vertices or triangles to
the scene before this call. If the number of elements is later changed
(for example, because the user specifies a new recursion level for the
cube) the VBOs need to be updated; we do this for you automatically
within the render loop.
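Conceptually, the per-frame update looks something like the following
(a sketch assuming the usual OpenGL headers are included;
vbo_vertices and vbo_faces are hypothetical buffer handle names):

    // Re-upload the vertex and index data after the geometry changes.
    glBindBuffer(GL_ARRAY_BUFFER, vbo_vertices);
    glBufferData(GL_ARRAY_BUFFER,
                 sizeof(float) * obj_vertices.size() * 4,     // vec4 per vertex
                 obj_vertices.data(), GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo_faces);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 sizeof(unsigned int) * obj_faces.size() * 3, // uvec3 per triangle
                 obj_faces.data(), GL_STATIC_DRAW);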
There is another important function in this step. The GPU doesn't keep
track of variables using names - all GPU variables are labeled with
integers (called locations). When we write C++ code, we do give a name
to the array of vertex positions (in this case obj_vertices), and in
the vertex shader, we will also give a name to the position variable
(vertex_position). Part of compiling a shader is telling OpenGL which
C++ variables, and which GLSL variables, correspond to which variable
numbers on the GPU. glVertexAttribPointer tells OpenGL that the VBO
just created from obj_vertices is vertex attribute number
zero. Another such call tells OpenGL that the VBO containing vertex
normals is attribute number one. Later on we will tell OpenGL that
vertex attribute zero should be associated with the GLSL variable
vertex_position and that vertex attribute one should be associated
with the GLSL variable vertex_normal. In a more complicated example,
we might have more VBOs - color, for example - and these would be
numbered vertex attribute two, etc.
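In code, the attribute bindings described above look roughly like
this (again a sketch; the buffer handle names are illustrative):

    // Attribute 0: vertex positions (4 floats each, tightly packed).
    glBindBuffer(GL_ARRAY_BUFFER, vbo_vertices);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    // Attribute 1: vertex normals.
    glBindBuffer(GL_ARRAY_BUFFER, vbo_normals);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(1);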
Create shader program A shader program is a GPU program for
turning geometry into pixels. A more sophisticated example might use
many shader programs: one to render glass, one to render rock, another
for fire, etc. In this simple example, there is only one shader
program that will be used to render all of the geometry.
Compile shaders and attach to shader program As mentioned in
class, each shader program contains several shaders that play
different roles in the graphics pipeline. This assignment uses three
shaders: a vertex shader and two fragment shaders, which are described
below. This block of code compiles all of the shaders and adds them to
the shader program.
Link shader program This line finalizes the shader program;
after this point the shader program can be used in the rendering loop
to do rendering. As mentioned above, it is also necessary to tell
OpenGL what the location numbers are of the GLSL variables;
glBindAttribLocation does this for the vertex position buffer here.
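Put together, the compile/attach/bind/link sequence is roughly as
follows (a sketch with error checking omitted; vertex_shader_source
is an assumed pointer to the GLSL source string):

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertex_shader_source, nullptr);
    glCompileShader(vs);           // check GL_COMPILE_STATUS in real code

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    // ... compile and attach the fragment shader(s) the same way ...

    // Attribute locations must be bound before linking to take effect.
    glBindAttribLocation(program, 0, "vertex_position");
    glLinkProgram(program);        // check GL_LINK_STATUS in real code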
Create uniform (global) variables Above we used VBOs to
transfer data to the GPU. There is a second way: you can specify uniform
(global) variables that are sent to the GPU and can be used in any of
the shaders. Like vertex attributes, these uniform variables are
numbered. If we want to refer to the global variables from the C++
code, we need to look up their numbers. For example, the vertex shader
declares a light position uniform variable; perhaps it gets assigned
to uniform variable number zero. The last glGetUniformLocation call
looks up that number and stores it so that we can modify the variable
from the C++ code later.
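For example (a sketch; "light_position" is the uniform's assumed
GLSL-side name, and light_position is a glm::vec4 on the C++ side):

    // Look the location up once, after linking...
    GLint light_position_location =
        glGetUniformLocation(program, "light_position");
    // ...then update the value from C++ whenever it changes.
    glUniform4fv(light_position_location, 1, &light_position[0]);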
Clear screen We are now inside the render
loop: everything above this point is run only once, and everything
below is run every frame. The first thing the starter code does is
clear the framebuffer; as mentioned in class, the framebuffer stores
more data than just color, and this block of code clears both the
color buffer (setting the whole screen to black) and depth buffer.
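In code, this step is just two calls (assuming the starter code's
black-background default):

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);               // black background
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // color and depth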
Set up camera and light In order to see anything, you need to
specify a camera through which to view the scene, and some lights to
light up the geometry. We have set up a perspective camera and a
single light for you. You don't need to touch these lines: what you
should take away, though, is that these lines create global matrices
and vectors that are later passed to the GPU and used inside the
shaders. The view matrix, which stores the camera's position in the
world, is updated as the camera moves. You will be updating this code
below.
Tell OpenGL what shader program to use We only have one, so
there is not much choice. Later you will write a second shader program
for rendering the floor.
Tell OpenGL what to render During initialization we set up some
VBOs and uniform variables. Here we tell the GPU which VAO the shader
program will use, then send the data in the VBOs (the vertices and
faces) to the GPU. Hooking up the wrong VAOs/VBOs with the wrong
shader program is a classic source of silent failure to render the
right geometry.
Render! glDrawElements runs our shader program and rasterizes
one frame into the framebuffer.
Swap buffers As mentioned in class, double-buffering is very
commonly used to avoid flickering and tearing. This call swaps the
buffers.
To complete this assignment, you do not have to understand all of the
above; there are only a few steps that need to be changed to render
the floor and cube. But you should familiarize yourself with how the
example program is structured, as future projects will build on this
skeleton.
Understanding the GLSL Shaders (0 points)
The example shader program contains three shaders: a vertex shader and
two fragment shaders. Please refer to the lecture slides for
information on how these shaders are related to each other in the
graphics pipeline.
The vertex shader is executed separately on each vertex to be
rendered, and its main role is to transform vertex positions before
further processing. It takes as input the vertex's position (which, as
explained above, is passed in from the C++ array of vertex positions)
and writes to gl_Position, a built-in variable that stores the
transformed position of the vertex. This is the position that will be
passed on to the rest of the rendering pipeline. Right now the shader
simply converts vertex positions to clipping coordinates. It also
computes the direction from the vertex to the light in camera
coordinates and transforms the vertex normal to camera coordinates.
These are used by the fragment shader to do shading. There is no need
to transform these beyond camera coordinates (so the projection matrix
isn't used on them) since the shading can be done in camera
coordinates. You won't need to modify this shader in this assignment.
The fragment shader is called once per pixel during
rasterization of each triangle. It diffuse-shades each triangle, so
that faces that point towards the light are brighter than faces at an
angle to the light. This code uses the normals previously computed and
passed through the pipeline to it. You don't need to modify the
cube's fragment shader in this assignment. We have provided a skeleton
second fragment shader, which you will use to texture the ground
plane.
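The heart of that diffuse shading is a single clamped dot product.
Written with glm in C++ to mirror the GLSL (variable names are
illustrative, not the starter code's):

    #include <glm/glm.hpp>

    // Brightness falls off with the angle between the surface normal
    // and the direction from the surface toward the light.
    glm::vec3 shade(glm::vec3 normal, glm::vec3 light_direction,
                    glm::vec3 base_color) {
        float diffuse = glm::clamp(glm::dot(glm::normalize(normal),
                                            glm::normalize(light_direction)),
                                   0.0f, 1.0f);
        return diffuse * base_color; // base_color: e.g. red for the triangle
    }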
Creating the Menger Sponge (37 points)
[Figure: The Menger sponge, for L = 0 to 3 (left to right). Notice the fractal nature of the sponge.]
Your first task in this assignment is to procedurally generate the
Menger sponge illustrated above. The sponge is an
example of a fractal -- its geometry is self-similar across scales, so
that if you zoom in on one part of the sponge, it looks the same as
the whole sponge. The full sponge is thus infinitely detailed, and
cannot be rendered or manufactured; however, better and better
approximations of the sponge can be created using a recursive
algorithm. Let L be the nesting level of the sponge; as L
increases, the sponge becomes more detailed. Given L, the algorithm
for building the L-level sponge is listed below.
Sponge generation code
Start with a single cube
for i = 1 to L do
for each cube do
Subdivide cube into 27 subcubes of one third the side length
Delete the 7 inner subcubes (see figure, second from the left).
end for
end for
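One way to realize this pseudocode is a recursion over subcubes. The
sketch below assumes the drawCube helper you will write in the first
step, and keeps a subcube unless two or more of its grid coordinates
are the middle one - that condition characterizes exactly the 7
deleted subcubes (6 face centers plus the body center):

    // Sketch only; one of many reasonable implementations.
    void subdivide(std::vector<glm::vec4>& vertices,
                   std::vector<glm::uvec3>& faces,
                   int level, float minx, float miny, float minz,
                   float side) {
        if (level == 0) {
            drawCube(vertices, faces, minx, miny, minz,
                     minx + side, miny + side, minz + side);
            return;
        }
        float third = side / 3.0f;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                for (int k = 0; k < 3; ++k) {
                    int middles = (i == 1) + (j == 1) + (k == 1);
                    if (middles >= 2) continue; // one of the 7 deleted subcubes
                    subdivide(vertices, faces, level - 1,
                              minx + i * third, miny + j * third,
                              minz + k * third, third);
                }
    }
    // e.g. subdivide(vertices, faces, L, -0.5f, -0.5f, -0.5f, 1.0f);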
Notice that for L = 0, the Menger sponge is simply a cube. You
will write code to generate the Menger sponge for 0 ≤ L and
modify the keyboard callback so that pressing any key from '0' to '4'
draws the appropriate sponge on the screen. All sponges should fill a
bounding box whose diametrically-opposite corners are (m,m,m) and
(M,M,M), where m = -0.5 and M = 0.5.
- (7 points) Write a drawCube(vertices, faces, minx,
miny, minz, maxx, maxy, maxz) function which, given a list of
existing vertices (in homogeneous coordinates) and triangle indices,
appends the vertices and triangles necessary to draw a cube with
diametrically-opposite corners at (minx, miny, minz) and
(maxx, maxy, maxz). This will entail appending the eight cube
vertices, and enough triangles to draw all six faces of the cube.
You need to ensure that the faces you append have outward-pointing
normals. This means that for every triangle (i,j,k) connecting
vertices vi,vj,vk, the triangle
normal (vj-vi) x
(vk-vi) must point *away* from the
cube. Incorrectly-oriented triangles can be fixed by permuting the
order of the triangle's vertex indices. If triangles are not oriented
correctly, they will appear incorrectly shaded (probably black) when
rendered, because they are "inside out." (A sketch of one
correctly-oriented face appears after this list.)
- (23 points) Modify CreateMenger to create,
given L, the Menger sponge of level L, properly
positioned and scaled in space as described above, and to populate
vertices and faces with the vertices and triangles
of the sponge. The simplest implementation of the above pseudocode is
probably to first compute where to place all cubes, and then
independently add each cube's vertices and faces using
drawCube. This approach is somewhat inefficient, since
neighboring cubes will contain duplicated vertices, and will also lead
to some mild rendering artifacts at the seams between cubes. You are
neither required to nor forbidden from fixing these issues by fusing
together duplicated vertices.
- (7 points) Modify the keyboard callback so that pressing
any key from '0' to '4' generates and displays a Menger sponge of the
appropriate level L. The geometry should only be
recreated when one of these keys is pressed -- do not
procedurally generate the cube every frame! (Hint: any time you
change the vertex or triangle list, you need to inform OpenGL about
the new data by binding the vertex and triangle VBOs using
glBindBuffer, passing the new data to the GPU using
glBufferData, etc.) The skeleton code will do most of this
for you, if you slot in your Menger-generation code in the right
place.
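As promised above, here is a sketch of drawCube's pattern for a
single face (the z = maxz face); the remaining five faces follow the
same recipe, with the winding of each chosen so its normal points
outward:

    // Sketch only: appends one face of the cube. Viewed from outside
    // (from +z), the vertices below wind counterclockwise, so
    // (v1 - v0) x (v2 - v0) points in +z, away from the cube interior.
    void drawCube(std::vector<glm::vec4>& vertices,
                  std::vector<glm::uvec3>& faces,
                  float minx, float miny, float minz,
                  float maxx, float maxy, float maxz) {
        uint32_t base = static_cast<uint32_t>(vertices.size());
        vertices.push_back(glm::vec4(minx, miny, maxz, 1.0f)); // base + 0
        vertices.push_back(glm::vec4(maxx, miny, maxz, 1.0f)); // base + 1
        vertices.push_back(glm::vec4(maxx, maxy, maxz, 1.0f)); // base + 2
        vertices.push_back(glm::vec4(minx, maxy, maxz, 1.0f)); // base + 3
        faces.push_back(glm::uvec3(base, base + 1, base + 2));
        faces.push_back(glm::uvec3(base, base + 2, base + 3));
        // ... append the remaining four vertices and ten triangles ...
    }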
You will need to implement a perspective camera and camera controls to
fully appreciate the Menger sponge; however, the sample code provides
a hard-coded orthographic camera, so that even before moving on to the
next part of the assignment, you can get a sense for whether the sponge is
being generated correctly.
Camera Controls (76 points)
Next, you will implement a perspective camera, and keyboard and mouse
controls that let you navigate around the world. As discussed in
class, the major pieces needed for setting up a camera are 1) a
view matrix, which transforms from world coordinates to camera
coordinates; as the camera moves around the world, the view matrix
changes; 2) a perspective matrix, which transforms from 3D
camera coordinates to 2D screen coordinates using a perspective
projection, and 3) logic to move the camera in response to user input.
The camera's position and orientation is encoded using a set of points
and vectors in world space:
- the eye, or position of the camera in space;
- the look direction that the camera is pointing;
- the look direction is not enough to specify the orientation of
the camera completely, since the camera is still free to spin about
the look axis. The up vector pins down this rotation by specifying the
direction that should correspond to "up" in screen coordinates
(the negative y direction);
- the tangent or "right" direction of the camera. Notice that
this direction is completely determined by the cross product of the
look and up directions;
- the camera_distance, the distance from the camera to the point
that the camera is focusing on; this distance controls the camera
zoom;
- the center point, the point on which the camera is
focusing. Notice that center = eye + camera_distance * look.
The following default values should be used to initialize the camera
at program start; they ensure that the Menger sponge begins within
the camera's field of view:
- eye = (0,0,camera_distance);
- look = (0,0,-1);
- up = (0,1,0).
The other default values can be computed from these.
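A sketch of this initialization with glm (the numeric value of
camera_distance is an assumption; use whatever the starter code
specifies):

    float camera_distance = 3.0f;
    glm::vec3 eye = glm::vec3(0.0f, 0.0f, camera_distance);
    glm::vec3 look = glm::vec3(0.0f, 0.0f, -1.0f);
    glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f);
    glm::vec3 tangent = glm::normalize(glm::cross(look, up)); // "right"
    glm::vec3 center = eye + camera_distance * look;          // here: the origin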
[Figure: The camera coordinate system (left) and its relationship to screen coordinates (right). The camera's orientation is represented as an orthonormal frame of three orthogonal unit vectors. Note that the origin of the screen's coordinate system is the top left.]
You will implement two types of cameras: an orbital camera,
where center remains fixed in space as the user moves the
mouse (so that the camera swivels around, or orbits, the center
point), and a first-person shooter (FPS) camera, where the
eye is fixed and the center moves around as the user
moves the mouse. The user can toggle the type of camera by pressing
'c'. The starter code already includes a keyboard callback which updates
the global boolean fps_mode whenever the key is pressed.
- (8 pts) Write code to create and initialize the above
camera parameters, and implement the LookAt function to
create a rotation transforming world coordinates to camera
coordinates, as determined by the current values of eye,
center, and up. You may use glm helper
functions like cross, normalize, etc. to help you,
but you may not simply call glm::lookAt, gluLookAt,
or some other equivalent function to do all of the work for
you. (Solutions that do so will receive no credit for this step.)
A sketch outlining this construction appears after this list.
We have provided a glm function that builds the perspective
projection matrix mapping camera coordinates to
screen coordinates, given values of the camera field of view,
aspect ratio, and near and far clipping plane distances (reasonable
defaults have been hard-coded). You do not need to modify the
perspective matrix.
- (20 pts) Implement camera rotation when left-clicking
the mouse and dragging. The starter code contains a mouse callback that
tracks the mouse position during dragging, and records the mouse drag
direction in screen coordinates in the vector
mouse_direction. Calculate the corresponding vector in world
coordinates; the camera should swivel in the direction of dragging,
i.e. should rotate about an axis perpendicular to this vector
and to the look direction. Rotate by an angle of
rotation_speed radians each frame that the mouse is
dragged. Hint: The function glm::rotate can be used to
rotate a vector using a rotation expressed in axis-angle
representation. Do not trust the glm documentation about
whether this function takes in radians or degrees - it has been known
to be incorrect in the past. Check the glm implementation.
After rotating the camera's orthonormal frame here, and in any of
the operations described below, you will need to recompute either the
eye (if in orbital mode) or center (if in FPS mode)
to account for the camera's new orientation.
- (7 pts) Implement a zoom function when right-clicking
and dragging the mouse. Dragging the mouse vertically upward should
zoom in (decrease the camera_distance) by zoom_speed
each frame that the mouse is dragged, and dragging downward should
zoom out by the same speed. You are encouraged, but not required, to
fix the classic bug where zooming in too far inverts the world.
- (7 pts) Implement the behavior of the 'w' and 's'
keys. In orbital mode, these keys should zoom the camera in and out
(respectively) exactly as in right-clicking and dragging the mouse,
i.e., the camera should zoom in by zoom_speed each frame that the 'w'
key is held down.
In FPS mode, holding down the 'w' or 's' key should translate both the
eye and center by ±zoom_speed times the look direction.
- (7 pts) Implement the behavior of holding down the 'a'
and 'd' keys. In orbital mode, these keys shift the point that the
camera is orbiting left or right (respectively) with respect to the
camera's frame, i.e., shift the center by ∓pan_speed
times the camera tangent direction.
In FPS mode, the keys should strafe, i.e. translate the eye instead.
Both operations will look the same to the user.
- (20 pts) Pressing the left and right arrow keys should
roll the camera: spin it counterclockwise or clockwise
(respectively) by roll_speed radians per frame.
- (7 pts) Holding down the up or down arrow keys should
translate the point being orbited (orbit mode) or the camera position
(FPS mode), as with the 'a' and 'd' keys, but this time in the
±up direction (with the translation speed being again
pan_speed units per frame).
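As referenced in the first item above, here is a sketch of the
standard look-at construction you are asked to derive; this outline
names the pieces, but deriving and verifying it is your job:

    // Sketch of a hand-rolled LookAt (do not call glm::lookAt).
    glm::mat4 LookAt(const glm::vec3& eye, const glm::vec3& center,
                     const glm::vec3& up) {
        glm::vec3 Z = glm::normalize(eye - center);      // camera looks down -Z
        glm::vec3 X = glm::normalize(glm::cross(up, Z)); // camera right
        glm::vec3 Y = glm::cross(Z, X);                  // camera up
        glm::mat4 view(1.0f);
        // Rows of the rotation are the camera axes; glm is
        // column-major, so indexing is view[column][row].
        view[0][0] = X.x; view[1][0] = X.y; view[2][0] = X.z;
        view[0][1] = Y.x; view[1][1] = Y.y; view[2][1] = Y.z;
        view[0][2] = Z.x; view[1][2] = Z.y; view[2][2] = Z.z;
        // Translation: bring the eye to the origin.
        view[3][0] = -glm::dot(X, eye);
        view[3][1] = -glm::dot(Y, eye);
        view[3][2] = -glm::dot(Z, eye);
        return view;
    }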
As implementing camera controls can be subtle and it is easy to apply
the wrong rotation, transform using the wrong coordinates, etc, I
strongly encourage you to make regular use of the reference executable
to verify that your camera behavior matches up with the reference.
Some Shader Effects (23 points)
Finally, you will do some shader programming to liven up the
rendering of the cube and place a ground plane on the screen so that
the user has a point of reference when moving around the world.
- (7 pts) Color the different faces of the cube based on
what direction the face normal is pointing in world
coordinates. Notice that the face normals will be axis-aligned no
matter how the user moves or spins the camera, since transforming the
camera does not move the cube in world coordinates. Refer to the
following table for how to color the faces:
Normal     Color
(1,0,0)    red
(0,1,0)    green
(0,0,1)    blue
(-1,0,0)   red
(0,-1,0)   green
(0,0,-1)   blue
All coloring code should take place in the fragment shader. The above
colors are the base colors; leave intact the shading of the faces due
to orientation of the face relative to the light source.
- (8 pts) Add a ground plane to the scene. This plane should be
placed at y = -2.0 and extend infinitely in the x and
z directions. Rendering of this plane should be completely
separate from that of the cube: create new lists of vertices and
triangles for the infinite plane, new VBOs for sending this geometry
to the GPU, and a new shader program for rendering the plane. This
shader program can reuse the cube's vertex shader, but should have its
own fragment shader.
You should be able to represent the geometry of the infinite plane
using only a small number of triangles (no more than 4). Hint: think
in homogeneous coordinates.
- (8 pts) Write a fragment shader for the ground plane, so
that the plane is colored using a checkerboard pattern. The
checkerboard should be aligned to the x and z axes (in
world coordinates), and each grid cell should have size 1.0 x
1.0. The cell base color should alternate white and black (again,
these are base colors, and you should reuse the cube shader's shading
code to dim pixels facing away from the light).
There are many ways of coding the checkerboard pattern; use any method
you like. You may find the following GLSL functions useful:
mod, clamp, floor,
sin. Hint: the fragment shader, by default, does not
have access to the position of pixels in world coordinates (that
information is lost earlier in the pipeline when the vertex positions
are multiplied by the view and perspective matrices). You may find it
useful to pass the untransformed world coordinates to the fragment
shader.
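For the checkerboard, one common approach is a parity test on the
floored world-space coordinates. Written with glm in C++ to mirror
what the GLSL fragment shader would compute (how you plumb world_pos
through the pipeline is up to you):

    // Returns 1.0 for white cells and 0.0 for black cells, with cell
    // boundaries on integer x and z in world coordinates.
    float checker(const glm::vec4& world_pos) {
        float fx = glm::floor(world_pos.x);
        float fz = glm::floor(world_pos.z);
        // GLSL-style mod keeps the result in [0, 2) even for negative
        // coordinates, so the pattern tiles correctly everywhere on
        // the infinite plane.
        return glm::mod(fx + fz, 2.0f) < 1.0f ? 1.0f : 0.0f;
    }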
Reminder about Collaboration Policy and Academic Honesty
You are free to talk to other students about the algorithms and theory
involved in this assignment, or to get help debugging, but all code
you submit (except for the starter code, and the external libraries
included in the starter code) must be your own. Please see the
syllabus for the complete collaboration policy.
Last modified: 10/04/16
by Don Fussell
fussell@cs.utexas.edu