The objective of the three-year FASE project, sponsored by STW and started in spring 1997, is to develop a system that can recognize the facial movements of a user in front of a camera and use the resulting data to animate models of the human face. Possible application areas include videophoning and videoconferencing, virtual reality and synthesized actors, the animation industry, facial surgery planning and simulation, social user interfaces, and lip synchronisation and lip reading for the deaf.
The project is a joint effort by two groups, one at the TUD and one at CWI. A detailed description of the entire project can be found at the FASE project home page.
Here we introduce the research done at CWI. For a quick impression, have a look at the architecture of the system's components.
We allow two kinds of facial models:
A focal point of this project is the creation of editing tools with which one can manipulate the animation data (coming from the recognition module or otherwise).
Having implemented the basic tools, we are currently concentrating on:
For this project contact: Han Noot or Zsófia Ruttkay.
For the totally redesigned commercial version of CharToon, contact Paul ten Hagen at Epictoid for further information.