Required Features for Virtual Mannequin Milestone I
Due: Mar 28, 2024
The assignment is broken down into several features described below. Each feature is worth a number of points and will be graded separately (although later features may build on and require a working implementation of earlier features). Your code should be correct and free of JavaScript errors or other buggy behavior. You will not be graded on software engineering (use design patterns and unit tests if they help you), but unreadable code is less likely to receive partial credit. You should not worry too much about performance or optimizing your code, but egregiously inefficient implementations of any of the features will be penalized.
This project does not come with automated tests; however you can interact with a reference implementation of the assignment here.
(0 pts) Sanity Check and Build
Your submitted archive should build smoothly when make-skinning.py is run, and should yield a web package that executes without fatal errors on common browsers (Chrome, Firefox, etc.).
Code that does not build or run will lose between 5 and 100 points, depending on the time and effort required by the TA to fix your code.
It is your responsibility to thoroughly test your submission.
(0 pts) The Skeleton Data Structure
When digital artists create new animatable characters, they need to create several assets:
- a character model---a triangle mesh specifying the geometry of the character in its rest, neutral pose. Clothing may be directly modeled on top of the character, or created separately and then posed and draped on the character. In this assignment, each scene contains only a single model with a single mesh.
- a character rig---a hierarchical set of bones embedded in the character, and used to deform the model into new poses. These bones are not necessarily anatomically correct, but rather, are designed to give the artist intuitive control over the character's motion.
- a skinning of the character: a way to map from bone deformations to character deformations. In this project, you will implement linear-blend skinning, described below, which is the simplest possible such map. Linear-blend skinning requires a set of artist-provided skinning weights, which specify how much influence each bone has on each point on the character model.
In this project, all data related to each scene is stored in a Collada container file. Blender (available for free on all operating systems) should be able to open all of the scenes used in this project, in case you'd like to inspect the mesh or rig for yourself in a full-featured 3D modeling app. The Collada file contains the model itself, represented as a triangle mesh along with some information about the materials and textures to use for that mesh, as well as the skeleton rig and blending weights.
We have provided code for parsing and rendering the model mesh, as well as rudimentary data structures and a parser for grabbing the skeleton bones and joints, and the mesh skinning weights, from the Collada file. The OpenGL boilerplate has been consolidated somewhat within a RenderPass class. We've also implemented a simple shader which draws the skeleton for you (in its rest, unrotated pose) on the screen.
Your first task will be to inspect the skeleton and geometry data structures in Scene.ts, as well as the starter shaders in Shaders.ts, to understand how these data structures represent the rig and the relationship between bones and joints.
The rig in this project is represented as a forest of bones, with an implicit joint at the first endpoint of each bone. Please note that there might be multiple root bones (bones with no parent, which are not the children of any other bone). A bone might have multiple children (for example, a bone representing a hand might connect to five finger bones). Finally, a bone is not necessarily attached to the endpoint of its parent bone; the joint of the child bone can be located at an arbitrary point in the local coordinates of the parent bone (for example, the thigh bones in the robot model are not directly attached to the robot's spine, but are instead offset to the side to account for the hips).
The provided data structures contain the starting and ending endpoint of each bone in world coordinates, in the skeleton's rest pose. You will need to convert between coordinate systems, as needed, to implement rigging and skinning.
Starting from the rest pose, in an animation system the artist (you!) deforms the character by manipulating the bones. Every joint (including the root joints) can freely rotate. (In a real anatomical model, some joints have limits on the direction or extent of rotation; for example an elbow can only swivel back and forth about a single axis. You will not enforce any such limits in this assignment.)
If a joint rotates, the bone attached to the joint, as well as all descendent bones, should rotate with the joint: the skeleton is not allowed to disarticulate during character motion. Note that rotations recursively affect child joints: for instance, if I rotate my shoulder, my entire arm rotates, even though I am not rotating my elbow. If I rotate my elbow, my forearm and hand rotate (but not my upper arm). If I rotate my wrist, the hand moves, as well as all of the fingers. In other words, the hierarchical skeleton forest describes how transformations of the joints relate to each other: applying a transformation to a node of the forest (representing one joint) also transforms all nodes that are descendents of that node.
More formally, each joint needs to keep track of an affine transformation describing the rotation of that joint with respect to the parent's transformation. Call this transformation \(R_i\), for joint \(i\).
Each joint also has a local coordinate system, which allows you to represent points attached to that joint in a pose-independent way. For example, suppose (1,0,0) are the coordinates of a point on your forearm, in the elbow's coordinate system. Rotating your elbow, shoulder, or torso moves that point through space, but does not change the location of the point on the forearm with respect to the elbow, so no matter how you change the body's pose, the local coordinates of the point stay (1,0,0). The world coordinates, obviously, do change. These local coordinates allow us to skin the skeleton: if we represent every point on the character model in terms of local coordinates of the skeleton, the model will automatically move as we change the orientations of the skeleton joints.
But before we can do skinning, we need a way to map between local coordinates and world coordinates. Suppose joint \(i\) is a root joint, whose rest position is at world coordinate location \(p_i\). Then the matrix \(D_i\) mapping from the root joint's local coordinates to world coordinates is
\[ D_i = T_{p_i} R_i, \]
where \(T_{p_i}\) denotes translation by \(p_i\).
(Why this matrix? Think of a point \(q\) expressed in the root joint's local coordinates. To map to world coordinates, first rotate \(q\) according to the root joint's local transformation matrix \(R_i\). Then translate the point by \(p_i\), since the root joint's origin is located at \(p_i\).)
What about non-root joints? Suppose joint \(i\) has parent joint \(j\). Then, to map from joint \(i\)'s local coordinates to world coordinates, we need the matrix
\[ D_i = D_j T_{ji} R_i, \]
where \(T_{ji}\) is translation by the offset vector from joint \(j\) (in the rest pose) to joint \(i\). This matrix should make sense: given a point \(q\) expressed in joint \(i\)'s local coordinates, you first transform \(q\) using the joint's local transformation matrix \(R_i\), then translate the result so that the origin of the coordinate system is now joint \(j\), then recursively apply \(D_j\), which maps from parent joint \(j\)'s local coordinates to world coordinates. Notice that as you change the \(R_i\)s, the \(D_i\)s also change, both for node \(i\) and all descendant nodes.
So, in summary: you will need to adapt the Bones data structure as you like in order to store and manipulate the hierarchical relationship between bones in the model. One possible implementation is to store a local transformation \(R_i\) for each joint(/bone), and support updating these transformations after the skeleton has been initialized. The rest pose should correspond to \(R_i = I\). The skeleton forest will also need methods which, for each joint, compute \(U_i\), the map from joint \(i\)'s local coordinates to world coordinates in the undeformed, rest model; and \(D_i\), the map from joint \(i\)'s local coordinates to world coordinates in the current, deformed model, as specified by the \(R_i\)s. (Hint: \(U_i\) should be computed equivalently to \(D_i\), but using the identity matrix in place of the \(R_i\)s.)
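One possible shape for this recursion, sketched in TypeScript. The `Joint` interface, field names, and matrix helpers here are hypothetical illustrations, not the starter code's actual `Bones` structure:

```typescript
// Hypothetical sketch: Joint, deformed, and undeformed are illustrative
// names, not part of the provided starter code.

type Vec3 = [number, number, number];
type Mat4 = number[]; // 16 entries, row-major

const identity: Mat4 = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];

function mul(a: Mat4, b: Mat4): Mat4 {
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[4*r + c] += a[4*r + k] * b[4*k + c];
  return out;
}

function translation(t: Vec3): Mat4 {
  return [1,0,0,t[0], 0,1,0,t[1], 0,0,1,t[2], 0,0,0,1];
}

function transformPoint(m: Mat4, p: Vec3): Vec3 {
  return [
    m[0]*p[0] + m[1]*p[1] + m[2]*p[2]  + m[3],
    m[4]*p[0] + m[5]*p[1] + m[6]*p[2]  + m[7],
    m[8]*p[0] + m[9]*p[1] + m[10]*p[2] + m[11],
  ];
}

interface Joint {
  parent: Joint | null;
  restPosition: Vec3; // joint origin in world coordinates, rest pose
  R: Mat4;            // local rotation R_i (identity in the rest pose)
}

// D_i = T_{p_i} R_i for a root; D_i = D_j T_{ji} R_i otherwise.
function deformed(j: Joint): Mat4 {
  if (j.parent === null) {
    return mul(translation(j.restPosition), j.R);
  }
  const p = j.parent.restPosition;
  const offset: Vec3 = [
    j.restPosition[0] - p[0],
    j.restPosition[1] - p[1],
    j.restPosition[2] - p[2],
  ];
  return mul(mul(deformed(j.parent), translation(offset)), j.R);
}

// U_i is the same recursion with every R_i replaced by the identity;
// the product then telescopes to a translation by the rest position.
function undeformed(j: Joint): Mat4 {
  return translation(j.restPosition);
}
```

In practice you would cache these matrices once per frame rather than recomputing the recursion on every query.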
There are other potential approaches, and you are free to solve the rigging task in whichever way seems easiest to you.
(15 pts) Bone Picking
As the user moves the mouse around the screen, you should highlight nearby bones; these highlighted bones indicate to the user which joints will be affected when the user clicks and drags the mouse. Implement functionality that determines, whenever the user moves the mouse, which bone should be highlighted:
- (5 points) Each time the user moves the mouse, convert the position of the mouse cursor in screen coordinates to a ray in world coordinates, whose origin \(p\) is at the eye's position, and whose direction \(v\) is such that the ray from \(p\) in the \(v\) direction pierces the near plane at the pixel containing the mouse cursor.
- (5 points) The bones, when rendered as line segments, are very thin and hard to click on; to make the user interface easier to use, you will treat the bones as cylinders of some reasonable radius for the purposes of detecting which bone the user intends to click on. The details of ray-cylinder intersection are up to you (you might be able to reuse some formulas from your ray tracer). If the ray hits one or more cylinders, the first such cylinder hit becomes the highlighted cylinder. Otherwise, no cylinders are highlighted.
- (5 points) Visualize highlighted bones in an obvious way (for example, by drawing them in a different, noticeable color). The details of how to highlight bones being hovered over with the mouse are up to you; you might be able to modify the existing skeleton shaders to do the highlighting, or maybe it's easier to add your own RenderPass for drawing the highlighted bone.
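A sketch of the picking test in TypeScript. The unprojection step (mapping the NDC point through the inverse view-projection matrix to get the world-space ray) is only described in comments, and the hit test below treats a bone as "hit" when the ray passes within `radius` of the bone's segment, which is a common simplification of exact ray-cylinder intersection. All names are hypothetical:

```typescript
type Vec3 = [number, number, number];

function sub(x: Vec3, y: Vec3): Vec3 { return [x[0]-y[0], x[1]-y[1], x[2]-y[2]]; }
function dot(x: Vec3, y: Vec3): number { return x[0]*y[0] + x[1]*y[1] + x[2]*y[2]; }

// Step 1: convert the mouse position to normalized device coordinates.
// Unprojecting this point with the inverse of (projection * view) and
// subtracting the eye position yields the world-space direction v (omitted).
function mouseToNDC(x: number, y: number, width: number, height: number): [number, number] {
  return [2 * x / width - 1, 1 - 2 * y / height];
}

// Step 2: closest approach between the ray o + t*dir (t >= 0) and the bone
// segment a + s*(b - a), 0 <= s <= 1; return the ray parameter t on a hit.
function rayHitsBone(o: Vec3, dir: Vec3, a: Vec3, b: Vec3, radius: number): number | null {
  const u = sub(b, a), w0 = sub(o, a);
  const A = dot(dir, dir), B = dot(dir, u), C = dot(u, u);
  const D = dot(dir, w0), E = dot(u, w0);
  const den = A * C - B * B;                        // 0 when ray || bone
  let s = den > 1e-12 ? (A * E - B * D) / den : 0;
  s = Math.min(1, Math.max(0, s));                  // clamp to the segment
  const q: Vec3 = [a[0] + s*u[0], a[1] + s*u[1], a[2] + s*u[2]];
  const t = dot(dir, sub(q, o)) / A;
  if (t < 0) return null;                           // bone is behind the ray origin
  const p: Vec3 = [o[0] + t*dir[0], o[1] + t*dir[1], o[2] + t*dir[2]];
  const gap = sub(p, q);
  return dot(gap, gap) <= radius * radius ? t : null;
}
```

Among all bones that report a hit, the one with the smallest returned `t` is the one to highlight.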
(20 pts) Bone Manipulation
When the user clicks and drags the mouse, the camera rotates, as in Menger Sponge. You will implement a second mode, toggled by clicking the left mouse button while hovering over a highlighted bone, where dragging the mouse rotates the highlighted bone, by changing the \(R_i\) matrix of the parent joint of the bone. Note that rotating a bone should also rotate all child bones attached to the highlighted bone.
Left-clicking and dragging the mouse should rotate the bone (by changing one of the skeleton's \(R_i\)s). Since creating an intuitive UI for bone manipulation is not an exact science, we leave the details of how to apply these transformations to you: it should be possible using your program to place any bone into any arbitrary orientation using reasonably intuitive controls.
The left and right arrow keys, when pressed while highlighting a bone, should roll the bone (at rollSpeed) instead of moving the camera. (If no bone is currently highlighted, the keys should translate the camera as normal.)
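One way to implement the roll, sketched with quaternions in TypeScript. The function names and the `dt` time-step parameter are hypothetical; `rollSpeed` refers to the constant mentioned above:

```typescript
type Vec3 = [number, number, number];
type Quat = [number, number, number, number]; // [w, x, y, z]

function fromAxisAngle(axis: Vec3, angle: number): Quat {
  const h = angle / 2, s = Math.sin(h);
  return [Math.cos(h), axis[0]*s, axis[1]*s, axis[2]*s];
}

// Hamilton product of two quaternions.
function qmul(a: Quat, b: Quat): Quat {
  return [
    a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
    a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
    a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
    a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0],
  ];
}

// Rotate vector v by unit quaternion q: q * (0, v) * q^{-1}.
function rotate(q: Quat, v: Vec3): Vec3 {
  const p: Quat = [0, v[0], v[1], v[2]];
  const conj: Quat = [q[0], -q[1], -q[2], -q[3]];
  const r = qmul(qmul(q, p), conj);
  return [r[1], r[2], r[3]];
}

// Roll: compose an incremental rotation about the bone's own axis
// (expressed in the joint's local frame) onto the joint's orientation.
function rollBone(current: Quat, localBoneAxis: Vec3, rollSpeed: number, dt: number): Quat {
  return qmul(current, fromAxisAngle(localBoneAxis, rollSpeed * dt));
}
```

If you store each \(R_i\) as a matrix instead, the same idea applies with an axis-angle rotation matrix in place of the quaternion.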
For five points of extra credit, also implement a way to translate the root joints using the mouse UI. Document how to use this feature in your README.
(15 pts) Linear-blend Skinning
You now have all the machinery needed to implement linear-blend skinning, as discussed in class. The instructions below walk you through how to do this using the provided weights in the Collada file, which specify how much influence each joint has on each vertex of the model mesh.
In principle, there is a dense set of weights \(w_{ij}\), encoding the influence of joint \(i\) on vertex \(j\). In practice, only up to four bones influence each vertex. We have provided code for reading skinning weights from the Collada file (as well as the integer indices specifying which bones influence which vertices).
Every frame, for each joint, use your skeleton forest data structure to compute the overall translation \(U_i\) that maps from the joint's coordinate system to world coordinates for the undeformed (rest pose) model, and the overall rigid motion (translation + rotation) \(D_i\) that maps from the deformed joint's local coordinates to world coordinates. Transform each undeformed vertex \(v_j\) (in world coordinates) of the model using the transformation
\[ \tilde{v}_j = \sum_i w_{ij} \, D_i U_i^{-1} v_j \]
to get the current, deformed position of the vertex \(\tilde{v}_j\) (again in world coordinates).
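A CPU-side reference of this blend can be handy for sanity-checking the shader version. In this TypeScript sketch, the helper names and the precomputed `skinMats` array (holding each product \(D_i U_i^{-1}\)) are hypothetical:

```typescript
type Vec3 = [number, number, number];
type Mat4 = number[]; // 16 entries, row-major

function transformPoint(m: Mat4, p: Vec3): Vec3 {
  return [
    m[0]*p[0] + m[1]*p[1] + m[2]*p[2]  + m[3],
    m[4]*p[0] + m[5]*p[1] + m[6]*p[2]  + m[7],
    m[8]*p[0] + m[9]*p[1] + m[10]*p[2] + m[11],
  ];
}

// Linear-blend skinning for one vertex: each vertex carries up to four
// joint indices and weights; skinMats[i] holds the product D_i U_i^{-1}.
function skinVertex(v: Vec3, jointIds: number[], weights: number[], skinMats: Mat4[]): Vec3 {
  const out: Vec3 = [0, 0, 0];
  for (let k = 0; k < jointIds.length; k++) {
    const q = transformPoint(skinMats[jointIds[k]], v);
    out[0] += weights[k] * q[0];
    out[1] += weights[k] * q[1];
    out[2] += weights[k] * q[2];
  }
  return out;
}
```

Comparing a few vertices of this reference against your shader output is a quick way to catch transposed matrices or mis-ordered weights.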
You should implement linear-blend skinning on the GPU, in a vertex shader. The starter code precomputes for you the position of each vertex in the coordinate systems of each of the four bones that influence that vertex (stored in the v0, v1, etc. vertex attributes). We've also provided some boilerplate for passing joint translations and rotations (represented as quaternions) into the skeleton vertex shader. You might find these helpful when writing your linear-blend skinning algorithm.
TIP: The final code implementing all of the above algorithms should end up being relatively simple, even if skinning is conceptually tricky to wrap your mind around at first. If you are struggling with a bunch of matrices, or nested double recursions, you are probably overcomplicating the problem setup.
Optional Features
You may implement optional features for extra credit (these options will also remain available for Milestone 2). The list below contains pre-approved optional features (worth five points per 🎐 and ten points per 🔔).
All optional features must be fully described in your README file (including instructions for how to invoke the feature behavior) to receive credit.
🎐 Extend the GUI to allow not only bone rotations, but also translation of the root joints.
🎐 Implement texture-mapping. This will require you to use the provided image loaders to read in a bitmap from file, load the texture into the GPU using the provided RenderPass texture functionality, and then add a texture-mapping fragment shader that samples the bitmap. I've provided a texture map for the simple cube (scene 4) if you'd like a test case.
🔔 Find or create your own character model, rig, and skinning weights, and create some images showcasing different poses of the character.
🔔 Implement shadow mapping to render the character's shadow on itself, and on the ground.
🔔 🔔 Research and implement dual-quaternion skinning, a more sophisticated and robust skinning algorithm that does not suffer from the "candy-wrapper" artifact.
🔔 🔔 🔔 Create a third-person interactive walking simulator featuring the provided robot. Allow the user to move the robot around the world with the mouse and arrow keys; as the robot moves, animate its legs and arms using a precomputed walk cycle adapted to the user's turning inputs. The third person camera should follow the robot in some reasonable way. Populate the world with objects/obstacles.