3D Realistic Model

One of the objectives of the FASE project is to produce realistic 3D facial animations. Numerous approaches can be taken to achieve this (see the MAMBO pages for an extensive overview): parameterized models, control-point models, spline models, etc. We have chosen the well-known physically based three-layered model by Terzopoulos and Waters [1] and started from an implementation by Ava To.

Model

The model consists of three layers representing the epidermal, fascia and bone layers, reflecting the anatomy of human skin. See figure. Muscles are inserted either between two fascia nodes or between a fascia node and a bone node. Most facial muscles are of the latter type (zygomatics, frontalis, etc.), but the orbicularis oris (the muscle around the mouth) is of the former. Each muscle is modeled with a particular width and maximum strength. The depth of the interior layers is also modeled and is not constant across the face.

The layers are built from elements like the one shown here. Every black dot is a mass and every line is modeled as a spring with a (non-constant, biphasic) elasticity, simulating the elasticity of skin and tissue. Furthermore, the upper two triangles of such an element form a prism with a fixed volume (the amount of tissue), which must remain constant as nodes move.
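The biphasic spring behaviour can be sketched as follows: a spring that is soft below some strain threshold and stiff above it, as skin is. The stiffness values and the threshold below are illustrative assumptions, not the constants used in the actual model.

```python
import numpy as np

def spring_force(x_i, x_j, rest_len, k_low, k_high, strain_threshold=0.1):
    """Force on node i from a biphasic spring connecting it to node j.

    The spring stiffens once stretched past strain_threshold; k_low,
    k_high and the threshold are hypothetical values for illustration.
    """
    d = x_j - x_i
    length = np.linalg.norm(d)
    strain = (length - rest_len) / rest_len
    # pick the low or high stiffness depending on the strain regime
    k = k_low if abs(strain) < strain_threshold else k_high
    return k * (length - rest_len) * (d / length)
```

For a spring stretched to twice its rest length the high stiffness applies; for a stretch of a few percent the low one does.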

Physically, this model is a coupled spring/mass system, and it is treated that way mathematically. For every node a differential equation (Newton's law of motion) has to be solved (we use the explicit Euler method) until the system converges. The equation includes the muscle contractions, the spring forces, volume-preservation forces and skull-penetration constraints. Since the model consists of 292 nodes, 516 volume elements, 3780 interconnecting springs and 54 muscles, this is computationally demanding.
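A single explicit Euler step of this system might look like the sketch below, assuming the per-node forces (springs, muscles, volume preservation, skull-penetration penalties) have already been summed into one array; the damping term and time step are illustrative assumptions.

```python
import numpy as np

def euler_step(x, v, forces, mass, damping, dt):
    """One explicit Euler step of m*a = f - damping*v for all nodes.

    x, v      : (n, 3) arrays of node positions and velocities.
    forces    : (n, 3) array of net forces per node (already summed).
    mass, damping, dt : scalars; values here are illustrative only.
    """
    a = (forces - damping * v) / mass
    x_new = x + dt * v          # explicit Euler: advance with the old velocity
    v_new = v + dt * a
    return x_new, v_new
```

In practice this step is iterated until node velocities drop below a convergence threshold.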

Coding of facial expressions

As one can imagine, facial expressions result from one or more contracted muscles deforming the upper layer, which in normal operation is rendered as the visible skin surface. The input to the model, however, is not expressed directly in muscle contractions but is coded in a modified MPEG-4 scheme consisting of a number of action-unit values. Basically, one or more action units control a certain feature point on the face (points on the brows, mouth, etc.). The mapping from action units to muscle contractions is non-trivial; a first implementation of this mapping allows us to use action-unit input instead of the more low-level muscle-contraction input.
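One way such a mapping could be organised is as a table of per-muscle weights per action unit, combined additively and clamped. The action-unit names, muscle names and weights below are hypothetical placeholders, not the actual mapping used in the project.

```python
# Hypothetical mapping: each action unit contributes weighted
# contraction levels to some of the model's 54 muscles (only a
# few muscles are shown here, with invented names and weights).
AU_TO_MUSCLE = {
    "raise_brow": {"frontalis_l": 0.8, "frontalis_r": 0.8},
    "smile":      {"zygomatic_l": 1.0, "zygomatic_r": 1.0},
}

def muscle_contractions(action_units):
    """Combine active action-unit values into per-muscle contraction
    levels, clamped to [0, 1]. A sketch of the idea only."""
    out = {}
    for au, value in action_units.items():
        for muscle, weight in AU_TO_MUSCLE.get(au, {}).items():
            out[muscle] = min(1.0, out.get(muscle, 0.0) + weight * value)
    return out
```

The real mapping is non-trivial precisely because action units describe feature-point displacements, not muscle activations, so a simple linear table like this is only a first approximation.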

Animation

Animation is done by stepping through consecutive states sampled at a certain frame rate, either supplied by a performer or generated by our Animation Editor.
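Stepping from one state to the next at a fixed frame rate could be sketched as linear interpolation between two action-unit states; the state representation and the default frame rate here are assumptions, not the Animation Editor's actual output format.

```python
def frames_between(state_a, state_b, duration, fps=25):
    """Linearly interpolate between two action-unit states, sampled at
    fps frames per second over the given duration (seconds). A sketch;
    the dictionaries map action-unit names to values in [0, 1]."""
    n = max(1, round(duration * fps))
    frames = []
    for i in range(1, n + 1):
        t = i / n  # interpolation parameter in (0, 1]
        frames.append({k: (1 - t) * state_a[k] + t * state_b[k]
                       for k in state_a})
    return frames
```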

Extensions to model

The layered model as such covers skin, tissue and muscles, yielding realistic wrinkles and other bulges, but on its own this is far from a realistic face: we have added a movable jaw, eyes, eyelids, teeth, etc.

Future enhancements

Contact Han Noot for further information.

References

  1. Y. Lee, D. Terzopoulos, and K. Waters, ``Realistic modeling for facial animation,'' in SIGGRAPH '95 Conference Proceedings (R. Cook, ed.), Computer Graphics, pp. 55-62, ACM SIGGRAPH, Addison Wesley, 1995. Held in Los Angeles, California, 6-11 August 1995.

Maintained by Han Noot.