OpenGL Basics



If you look at the source code of Menger Sponge, you will find it is organized into the following sections:

Set up OpenGL context
Load geometry to render
Create Vertex Array Objects and Vertex Buffer Objects
Create shader program
Compile shaders and attach to shader program
Link shader program
Create uniform (global) variables
WHILE TRUE DO
    Clear screen
    Set up camera and light
    Tell OpenGL what shader program to use
    Tell OpenGL what to render
    Render!
    Swap buffers
END WHILE

Each of these steps is described below in more detail.

Set up OpenGL context

This boilerplate uses a helper library (GLFW) to create an OpenGL context, requests version 4.1 of the OpenGL shading language, prints some diagnostic information to the console, and performs some magic to try to make the starter code run correctly on a wider range of hardware. You can completely ignore this section for now.
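For reference, context setup with GLFW typically looks something like the sketch below. This is an illustration of the kind of calls the boilerplate makes, not the starter code itself; it requires a display and a GL loader, and the window title and size are made up.

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return -1;
    // Request an OpenGL 4.1 core-profile context.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);  // required on macOS
    GLFWwindow* window = glfwCreateWindow(800, 600, "Menger", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);
    // Diagnostic information, as the starter code prints:
    std::printf("Renderer: %s\n", glGetString(GL_RENDERER));
    std::printf("OpenGL version: %s\n", glGetString(GL_VERSION));
}
```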

Load geometry to render

The geometry in this assignment consists of only two primitives: vertices, which are points in space, and triangles that connect triples of vertices. Each triangle is specified using three integers, which are (zero-indexed) indices into the list of vertices. The starter code declares one vector each to store the vertices and triangles, and calls the CreateTriangle function to fill them; this function creates three vertices and a single triangle connecting them. You will be completely replacing this geometry with your own Menger sponge geometry.

Create Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs)

The geometry loaded in the previous step is stored in system RAM—in order to render the vertices and triangles, that data must be bussed to the GPU. OpenGL will handle this automatically, but you have to tell OpenGL what and where the data is. A Vertex Array Object fulfills this purpose: it’s a container for data that needs to be shared between the CPU and GPU. The starter code creates a VAO and fills it with two arrays (Vertex Buffer Objects): one for the list of vertices, and one for the list of triangles. Notice that the calls to glBufferData specify the data type and number of elements for each of these lists; you should not need to modify these lines if you add more vertices or triangles to the scene before this call. If the number of elements is later changed (for example, because the user specifies a new recursion level for the cube) the VBOs need to be updated; we do this for you automatically within the render loop.
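The creation of the VAO and its two VBOs follows a standard pattern, sketched below. This is not the starter code verbatim; variable names like obj_vertices and obj_faces follow the starter's conventions but are assumptions, and error checking is omitted.

```cpp
// Create one VAO and two VBOs inside it (a GL context must be current).
GLuint vao, vbo[2];
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(2, vbo);

// Vertex positions: four floats per vertex.
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER,
             sizeof(float) * 4 * obj_vertices.size(),
             obj_vertices.data(), GL_STATIC_DRAW);

// Triangle indices: three unsigned ints per face.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             sizeof(uint32_t) * 3 * obj_faces.size(),
             obj_faces.data(), GL_STATIC_DRAW);
```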

There is another important function in this step. The GPU doesn’t keep track of variables using names—all GPU variables are labeled with integers (called locations). When we write C++ code, we do give a name to the array of vertex positions (in this case obj_vertices), and in the vertex shader, we will also give a name to the position variable (vertex_position). Part of compiling a shader is telling OpenGL which C++ variables and which GLSL variables correspond to which variable numbers on the GPU. glVertexAttribPointer tells OpenGL that the VBO just created from obj_vertices is vertex attribute number zero. Later on we will tell OpenGL that vertex attribute zero should be associated with the GLSL variable vertex_position. In a more complicated example, we might have more VBOs—color, for example—and these would be numbered vertex attribute one, etc.
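The call in question looks something like this (a sketch, assuming the position VBO is the currently bound GL_ARRAY_BUFFER and each position has four float components):

```cpp
// The currently bound GL_ARRAY_BUFFER is vertex attribute number 0:
// 4 floats per vertex, not normalized, tightly packed, no offset.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
```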

Create shader program

A shader program is a GPU program for turning geometry into pixels. A more sophisticated example might use many shader programs: one to render glass, one to render rock, another for fire, etc. In this simple example, there is only one shader program that will be used to render all of the geometry.

Compile shaders and attach to shader program

As mentioned in class, each shader program contains several shaders that play different roles in the graphics pipeline. This assignment uses four shaders: a vertex shader, a geometry shader, and two fragment shaders, which are described below. This block of code compiles all of the shaders and adds them to the shader program.

Link shader program

This line finalizes the shader program; after this point the shader program can be used in the render loop. As mentioned above, it is also necessary to tell OpenGL the location numbers of the GLSL variables; glBindAttribLocation does this for the vertex position buffer here.
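The compile-and-link sequence for one shader stage typically looks like the sketch below. Names such as vertex_shader_source and "vertex_position" are assumptions, and all error checking (glGetShaderiv, glGetProgramiv) is omitted; note that glBindAttribLocation must be called before glLinkProgram to take effect.

```cpp
GLuint program = glCreateProgram();

// Compile one stage and attach it; the geometry and fragment
// shaders follow the same pattern with different shader types.
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_shader_source, nullptr);
glCompileShader(vs);
glAttachShader(program, vs);

// Associate vertex attribute 0 with the GLSL input, then link.
glBindAttribLocation(program, 0, "vertex_position");
glLinkProgram(program);
```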

Create uniform (global) variables

Above we used VBOs to transfer data to the GPU. There is a second way: you can specify uniform (global) variables that are sent to the GPU and can be used in any of the shaders. Like vertex attributes, these uniform variables are numbered. If we want to refer to the global variables from the C++ code, we need to look up their numbers. For example, the vertex shader declares a light_position uniform variable; perhaps it gets assigned to uniform variable number zero. The last glGetUniformLocation call looks up that number and stores it so that we can modify the variable from the C++ code later.
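The lookup-then-set pattern is sketched below. The uniform names match those discussed in this document, but the exact variables in the starter code may differ; projection and light_position are assumed here.

```cpp
// Once, after linking: look up uniform locations by name.
GLint projection_loc = glGetUniformLocation(program, "projection");
GLint light_loc = glGetUniformLocation(program, "light_position");

// Later, each frame, with the program in use: send current values.
glUniformMatrix4fv(projection_loc, 1, GL_FALSE, &projection[0][0]);
glUniform4fv(light_loc, 1, &light_position[0]);
```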

Clear screen

We are now inside the render loop: everything above this point is run only once, and everything below is run every frame. The first thing the starter code does is clear the framebuffer; as mentioned in class, the framebuffer stores more data than just color, and this block of code clears both the color buffer (setting the whole screen to black) and the depth buffer.

Set up camera and light

In order to see anything, you need to specify a camera through which to view the scene, and some lights to light up the geometry. We have set up a perspective camera and a single light for you. You don’t need to touch these lines: what you should take away, though, is that these lines create global matrices and vectors that are later passed to the GPU and used inside the shaders. The view matrix, which stores the camera’s position in the world, is updated as the camera moves. You will be updating this code below.

Tell OpenGL what shader program to use

We only have one, so there is not much choice. Later you will write a second shader program for rendering the floor.

Tell OpenGL what to render

During initialization we set up some VBOs and uniform variables. Here we tell the GPU which VAO the shader program will use, then send the data in the VBOs (the vertices and faces) to the GPU. Hooking up the wrong VAOs/VBOs with the wrong shader program is a classic source of silent failure to render the right geometry.

Render!

glDrawElements runs our shader program and rasterizes one frame into the framebuffer.

Swap buffers

As mentioned in class, double-buffering is very commonly used to avoid flickering and tearing. This call swaps the buffers.
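Putting the per-frame steps together, the body of the render loop has roughly this shape. This is a condensed sketch, assuming the window, program, vao, and obj_faces names from the earlier sketches; the starter code interleaves camera updates and VBO re-uploads here as well.

```cpp
while (!glfwWindowShouldClose(window)) {
    // Clear both the color and depth buffers.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(program);   // which shader program to use
    glBindVertexArray(vao);  // which geometry to render
    // (set uniforms / re-upload VBOs here if the geometry changed)

    // Render: rasterize one frame into the back buffer.
    glDrawElements(GL_TRIANGLES, 3 * obj_faces.size(),
                   GL_UNSIGNED_INT, nullptr);

    glfwSwapBuffers(window);  // double-buffering: display the new frame
    glfwPollEvents();
}
```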

To complete this assignment, you do not have to understand all of the above; there are only a few steps that need to be changed to render the floor and cube. But you should familiarize yourself with how the example program is structured, as future projects will build on this skeleton.

Understanding the Starter GLSL Shaders

The example shader program contains four shaders: a vertex shader, a geometry shader, and two fragment shaders. Please refer to the lecture slides for information on how these shaders are related to each other in the graphics pipeline.

The vertex shader is executed separately on each vertex to be rendered, and its main role is to transform vertex positions before further processing. It takes as input the vertex’s position (which, as explained above, is passed in from the C++ array of vertex positions) and writes to gl_Position, a built-in variable that stores the transformed position of the vertex. This is the position that will be passed on to the rest of the rendering pipeline. Right now the shader does nothing except convert vertex positions to camera coordinates. It also computes the direction from the vertex to the light (in camera coordinates), which is used by the fragment shader to do shading. You won’t need to modify this shader in this milestone.
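A vertex shader of the kind described above might look like this sketch. The uniform and variable names (view, light_position, vertex_position, vs_light_direction) are illustrative and may differ from the starter code.

```glsl
#version 410 core
uniform mat4 view;            // world -> camera transform
uniform vec4 light_position;  // light position in world coordinates
in vec4 vertex_position;      // vertex attribute 0, set up from C++
out vec4 vs_light_direction;  // passed down the pipeline for shading

void main() {
    // Transform the vertex into camera coordinates.
    gl_Position = view * vertex_position;
    // Direction from the vertex toward the light, in camera coordinates.
    vs_light_direction = view * light_position - gl_Position;
}
```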

The geometry shader is run once per triangle in the scene, and tells OpenGL how the output vertices of the vertex shader should be arranged into triangles. The built-in variable gl_in contains the positions of the three vertices that belong to the triangle. The position vectors stored in this built-in variable contain four numbers; they are of the form (x, y, z, 1). This padding is due to the use of homogeneous coordinates by the shader. You can extract a vec3 from the 4D vectors by using the .xyz swizzle; for example, the position of the triangle’s first vertex is gl_in[0].gl_Position.xyz.

This shader has two jobs: first, it computes the normal of the triangle, which is used to diffuse-shade the triangle. Right now the normal is hard-coded to be (0,0,1); you will modify this to correctly compute the normal direction of each triangle in the cube. The second job of this shader is to project the 3D positions of the triangle vertices to 2D positions on the screen. This is accomplished using the projection uniform matrix variable.

Finally, the fragment shader is called once per pixel during rasterization of each triangle. It diffuse-shades each triangle, so that faces that point towards the light are brighter than faces at an angle to the light. This code assumes that normals are computed correctly by the geometry shader. You don’t need to modify the cube’s fragment shader in this assignment. We have provided a skeleton second fragment shader, which you will use to texture the ground plane.