This project describes an attempt to create real-time 3-dimensional facial animation using a VRML model of a face. Several approaches to facial animation are investigated, the major standards are assessed, and a decision is made as to which standard to follow and which approaches to incorporate. The input data for the animation comes from coordinate points obtained from a facial tracker, and a method of mapping this 2-dimensional data onto a 3-dimensional model is investigated and implemented. An OpenGL representation of the data is used to provide an initial visualisation, and the implementation of the VRML model is developed from a simple initial mapping to incorporate more features.
Chapter 1 – Introduction
1.1 Facial Animation
The representation of virtual humans in online virtual environments is becoming more significant due to the increasing number and variety of multimedia applications in use today. A large proportion of this representation is concerned with facial portrayal and animation, which "is now attracting more attention than ever before in its 25 years as an identifiable area of computer graphics". Uses of facial animation include gaming applications, Virtual Reality environments and multimedia applications such as video conferencing. The goal of facial animation is to create realistic movement in real time, and to successfully portray a range of human expressions and emotions. It is also widely used to produce speech synchronisation.
The implementation of facial animations provides some interesting challenges. The human face is constructed of a number of elements such as bone, skin and muscle. As Prem Kalra et al. state, the "complexity of the physical structure of the human face" gives rise to many problems: not only are there a great number of individual elements, such as bones and muscles, but there is also "an interaction between muscles and bones". The fact that humans are so used to seeing and interpreting subtle facial movements means that there is very little tolerance for inaccuracies in animation.
According to Waters, the increased realism of facial models and animations has, to a certain extent, made this problem worse. Waters claims that as models become more lifelike, our tolerance for inaccuracies becomes smaller because we, as humans, are very adept at interpreting subtle facial expressions. Waters states that "If it looks like a person we expect it to behave like a person." One solution to this, according to Waters, is to produce facial animations that have "non-human characteristics". Waters uses the example of cats and dogs, stating that because we have no familiarity with talking cats and dogs "we are desensitised to imperfections" in their modelling.
The acuteness of human perception of the face, combined with the complexity of the facial model, makes good results (i.e. realistic animation) hard to produce.
1.2 Standards for Facial Animation
As the field of human animation has grown and has, more recently, become more important due, for example, to the expansion in network communications, various standards for human representation and animation have developed. The two main standards currently in use are H-Anim1.1 and MPEG-4.
H-Anim1.1 was developed by the Humanoid Animation Working Group and was intended to specify "a standard way of representing humanoids in VRML97". Animation of H-Anim humanoids is based on the breakdown of a humanoid into joints and segments and the altering of the vertices of those segments.
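The joint-and-segment mechanism can be illustrated with a small sketch. The following Python function (a hypothetical illustration, not part of the H-Anim specification or of this project's implementation) shows the essential idea in two dimensions: a segment's vertices are transformed as a group relative to their parent joint.

```python
import math

def rotate_segment(vertices, joint_center, angle):
    """Rotate a segment's vertices about its parent joint (2-D toy sketch).

    H-Anim-style animation moves whole segments by transforming their
    vertices relative to a joint; here each 2-D point is rotated about
    `joint_center` by `angle` radians.
    """
    jx, jy = joint_center
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    rotated = []
    for x, y in vertices:
        dx, dy = x - jx, y - jy  # position relative to the joint
        rotated.append((jx + dx * cos_a - dy * sin_a,
                        jy + dx * sin_a + dy * cos_a))
    return rotated
```

A full H-Anim humanoid applies such transforms hierarchically (a jaw joint moves the jaw segment and everything attached to it), but the per-segment principle is the same.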
MPEG-4 is a much wider standard designed for the "coding of multimedia scenes". As Gaspard Breton et al. state: "Facial animation has only a small role to play in this huge standard". Animation based on this standard requires vast numbers of parameters to define and consequently animate various features.
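As a rough sketch of how MPEG-4's parameters drive animation: Facial Animation Parameter (FAP) values are expressed in face-specific units (FAPUs), each derived from a measured distance on the face divided by 1024, so that one FAP stream can animate differently proportioned faces. The conversion below is a simplified illustration of that scaling, not a complete decoder.

```python
def fap_to_displacement(fap_value, fapu_distance):
    """Convert an MPEG-4 FAP amplitude to a model-space displacement.

    `fapu_distance` is a face-specific reference distance (e.g. the
    model's mouth width); dividing it by 1024 yields the FAPU, and the
    FAP value is a multiple of that unit.  Simplified sketch only.
    """
    fapu = fapu_distance / 1024.0
    return fap_value * fapu
```

The scaling step is what lets the standard separate *what* the face does (the FAP stream) from *which* face does it (the FAPU measurements).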
A more comprehensive description and comparison of these two standards is given in chapter 2.4.
1.3 Focus of this work
The aim of this project is to perform real-time 3-D facial animation, and to attempt to represent human expression and emotion. The research carried out for this project is intended to identify and evaluate existing methods for facial animation and provide insight into the most appropriate method to follow, taking into consideration the scope of this work and the limitations, such as time, imposed. This includes an assessment of the main standards for facial animation, and an informed choice as to which is the most appropriate in this particular case. Although it was initially intended to produce some implementation in both of the main standards (H-Anim1.1 and MPEG-4), this proved to be beyond the scope of this project, especially given the relatively short amount of time available for this work. It was found sufficient to investigate the two standards and make a choice based on this evaluation.
The goals of this project are to gain an understanding of the various approaches to facial animation and to develop an insight into the difficulties associated with producing realistic animation. The aim is then to investigate and implement one possible solution to some of these difficulties.
One of the driving factors behind the solution produced was the nature of the input data. This was generated by a facial tracking system provided by Devin and Hogg. The objective is to model as closely as possible the facial expressions and movements of any individual whose face is the subject of the facial tracker.
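The mapping from 2-D tracker coordinates to a 3-D model, investigated in later chapters, can be previewed with a toy sketch. The function below (a hypothetical illustration with invented names, not the project's actual mapping) displaces selected 3-D model vertices by each tracked feature's 2-D offset from its rest position, scaled into model units, while leaving depth unchanged.

```python
def apply_tracker_offsets(model_vertices, tracker_points, rest_points, scale):
    """Map 2-D tracker movement onto 3-D model vertices (toy sketch).

    `tracker_points` and `rest_points` are dicts of 2-D (x, y) tracker
    coordinates keyed by feature name; `model_vertices` holds the
    corresponding 3-D (x, y, z) rest positions of the model.  Each
    vertex is displaced in x and y by the tracker's offset from its
    rest position, scaled into model units; z is left unchanged.
    """
    animated = {}
    for name, (mx, my, mz) in model_vertices.items():
        tx, ty = tracker_points[name]
        rx, ry = rest_points[name]
        animated[name] = (mx + (tx - rx) * scale,
                          my + (ty - ry) * scale,
                          mz)
    return animated
```

Any realistic scheme must also account for head pose and for depth changes the tracker cannot observe; this sketch shows only the simplest direct transfer.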