
Gesture Recognition to Control a Virtual Avatar

Summary
The aim of this project was to produce a gesture recognition system capable of recognising a set of
physical gross gestures that can then be imitated by a graphical Avatar. This aim was achieved by
completing the minimum requirements and some of the suggested project enhancements. The target
was for the gesture recognition system to classify user gestures correctly at least 80% of the time; the
results of this project indicate that this target was exceeded, with an average of 86.3% of performed
gestures being correctly classified.
A VRML Avatar supplied by the Vision Group of the University of Leeds was adapted to perform two
animated gross gestures, and a second, rather simpler animated Avatar was created using the
OpenGL toolkit GLUT in order to fulfil one of the project enhancements.
The report begins with an outline of the project requirements and a summary of the project
management. An analysis of the background reading carried out to inform the system design is then
given, which leads on to the design of the system itself. The design section explains the evolutionary
development of the product, starting from a very basic initial model and noting the observed
weaknesses and attempted solutions at each stage of development. This leads to the final
implemented product, which is then evaluated, after which a conclusion is provided with suggestions
for possible improvements and future work.

Contents
Summary ………………………………………………………………………………………………………………………………. i
1 Introduction …………………………………………………………………………………………………………………. 1
1.1 Aims and Objectives……………………………………………………………………………………. 1
1.2 Minimum Requirements……………………………………………………………………………… 1
1.3 Potential Enhancements ……………………………………………………………………………… 1
1.4 Modular Development ………………………………………………………………………………… 2
1.5 Project Plan………………………………………………………………………………………………… 2
1.5.1 Methodology ……………………………………………………………………………………….. 2
1.5.2 Schedule………………………………………………………………………………………………. 3
2 Background Research …………………………………………………………………………………………………… 4
2.1 Overview ……………………………………………………………………………………………………. 4
2.2 Gesture Recognition……………………………………………………………………………………. 4
2.2.1 Introduction ………………………………………………………………………………………… 4
2.2.2 Capturing Gesture Input (Tracking Technologies) ……………………………….. 4
2.2.2.1 Choice of Input Device……………………………………………………………………… 4
2.2.2.2 A Tracking Glove …………………………………………………………………………….. 5
2.2.2.3 Computer Vision Techniques (Vision Based Gesture Recognition) …….. 5
2.2.3 Image based techniques – Segmentation……………………………………………….. 6
2.2.3.1 Contrast Enhancement …………………………………………………………………….. 6
2.2.3.2 Background Extraction (subtraction) ……………………………………………….. 7
2.2.3.3 Edge or Boundary Detection …………………………………………………………….. 8
2.2.3.4 Region Extraction…………………………………………………………………………….. 8
2.2.3.5 Motion History Images (MHI) ………………………………………………………….. 8
2.2.4 Image based techniques – Feature Extraction……………………………………….. 9
2.2.5 Image based techniques – Classification ……………………………………………… 10
2.2.5.1 Template Matching ………………………………………………………………………… 10
2.2.5.2 Hidden Markov Models………………………………………………………………….. 11
2.2.5.3 Clustering………………………………………………………………………………………. 12
2.2.5.4 Neural Networks…………………………………………………………………………….. 13
2.2.5.5 Histogram Analysis ………………………………………………………………………… 14
2.3 Visualisation……………………………………………………………………………………………… 15
2.3.1 Virtual Reality Modelling Language (VRML) …………………………………….. 15
2.3.2 Avatars ……………………………………………………………………………………………… 15
2.3.3 Avatar Animation………………………………………………………………………………. 15
2.3.4 Application of Avatars in VRML ……………………………………………………….. 16
2.3.5 Summary…………………………………………………………………………………………… 16
3 Design…………………………………………………………………………………………………………………………. 17
3.1 Overview ………………………………………………………………………………………………….. 17
3.2 Module 1 – Gesture Recognition………………………………………………………………… 17
3.2.1 Overview …………………………………………………………………………………………… 17
3.2.2 Gesture Capture Method and System Data (Hardware)………………………. 18
3.2.3 Segmentation……………………………………………………………………………………… 18
3.2.3.1 Background Subtraction…………………………………………………………………. 18
3.2.3.1 Motion History Image (MHI) or Motion Energy Image (MEI)?……….. 18
3.2.3.2 Creating a Motion History Image (MHI)…………………………………………. 19
3.2.3.3 Windowing a MHI………………………………………………………………………….. 20
3.2.3.4 Detecting start and end points of motion (gesture) …………………………… 20
3.2.3.5 Calculating gesture length ………………………………………………………………. 21
3.2.3.6 Setting window length…………………………………………………………………….. 23
3.2.3.7 Calculating decay rate ……………………………………………………………………. 23
3.2.3.8 Segmentation Review ……………………………………………………………………… 23
3.2.4 Phase A – Creating a Basic Model………………………………………………………. 24
3.2.4.1 Introduction …………………………………………………………………………………… 24
3.2.4.2 Feature Extraction …………………………………………………………………………. 24
3.2.4.3 Training…………………………………………………………………………………………. 25
3.2.4.4 Classification………………………………………………………………………………….. 25
3.2.4.5 Performance…………………………………………………………………………………… 25
3.2.4.6 Highlighted Problems …………………………………………………………………….. 25
3.2.5 Phase B – Extending the Basic Model …………………………………………………. 26
3.2.5.1 Introduction …………………………………………………………………………………… 26
3.2.5.2 Feature Extraction …………………………………………………………………………. 26
3.2.5.3 Training…………………………………………………………………………………………. 27
3.2.5.4 Classification………………………………………………………………………………….. 30
3.2.5.5 Performance…………………………………………………………………………………… 30
3.2.5.6 Highlighted Problems …………………………………………………………………….. 31
3.2.6 Phase C – Improving Training……………………………………………………………. 31
3.2.6.1 Introduction …………………………………………………………………………………… 31
3.2.6.2 Feature Extraction …………………………………………………………………………. 31
3.2.6.3 Training…………………………………………………………………………………………. 31
3.2.6.4 Classification………………………………………………………………………………….. 32
3.2.6.5 Performance…………………………………………………………………………………… 32
3.2.6.6 Highlighted Problems …………………………………………………………………….. 33
3.2.7 Phase D – Improving Gesture Contrast ………………………………………………. 34
3.2.7.1 Introduction …………………………………………………………………………………… 34
3.2.7.2 Feature Extraction …………………………………………………………………………. 34
3.2.7.3 Training…………………………………………………………………………………………. 34
3.2.7.4 Classification………………………………………………………………………………….. 35
3.2.7.5 Performance…………………………………………………………………………………… 36
3.2.7.6 Highlighted Problems …………………………………………………………………….. 36
3.3 Module 2 – Socket Communication……………………………………………………………. 37
3.3.1 Overview …………………………………………………………………………………………… 37
3.3.2 Client-side …………………………………………………………………………………………. 38
3.3.3 Server-side…………………………………………………………………………………………. 38
3.3.4 Performance………………………………………………………………………………………. 38
3.4 Module 3 – Avatar Manipulation ………………………………………………………………. 38
3.4.1 VRML Avatar……………………………………………………………………………………. 38
3.4.2 Creating movement……………………………………………………………………………. 38
3.4.2.1 Crane Gesture………………………………………………………………………………… 39
3.4.2.2 Wave Gesture…………………………………………………………………………………. 39
3.4.2.3 Integrating Java and VRML…………………………………………………………… 39
4 Implementation…………………………………………………………………………………………………………… 40
4.1 Introduction ……………………………………………………………………………………………… 40
4.2 Gesture Recognition………………………………………………………………………………….. 40
4.2.1 Segmentation……………………………………………………………………………………… 40
4.2.2 Feature Extraction …………………………………………………………………………….. 40
4.2.3 Training…………………………………………………………………………………………….. 41
4.2.4 Classification……………………………………………………………………………………… 41
4.2.5 Performance………………………………………………………………………………………. 42
4.2.6 Integrating the Client…………………………………………………………………………. 42
4.3 Avatar Manipulation…………………………………………………………………………………. 42
4.3.1 VRML Avatar Manipulation ……………………………………………………………… 42
4.3.2 Integration Problems …………………………………………………………………………. 42
4.3.3 Proposed Solution………………………………………………………………………………. 43
4.4 Implementation Summary…………………………………………………………………………. 43
5 Evaluation…………………………………………………………………………………………………………………… 44
5.1 Criteria …………………………………………………………………………………………………….. 44
5.2 Evaluation results……………………………………………………………………………………… 44
5.2.1 Gesture Recognition System……………………………………………………………….. 44
5.2.2 Avatar Manipulation………………………………………………………………………….. 47
5.3 Were the minimum requirements met? ……………………………………………………… 47
5.4 Were any of the enhancements met?………………………………………………………….. 48
5.5 Conclusion and Further Work…………………………………………………………………… 48
References ………………………………………………………………………………………………………………………….. 51
Appendix A – Personal Reflection ……………………………………………………………………………………….. 57
Appendix B – Original Project Schedule Gantt Chart ………………………………………………………….. 58
Appendix C – Revised Project Schedule Gantt Chart …………………………………………………………… 59
Appendix D – Skeletal Description for H-Anim 1.1 ………………………………………………………………. 60
Appendix E – Gesture Length Results …………………………………………………………………………………. 61
Appendix F – Problems and Limitations of Vision Based ……………………………………………………… 62
Appendix G – Internet Explorer Error Message…………………………………………………………………… 63
Appendix H – Gesture Recognition System GUI (Feedback)…………………………………………………. 64
Appendix I – VRML Gesture Example