Conversation Scene Analysis with Dynamic Bayesian Network Based on Visual Head Tracking

Kazuhiro Otsuka, Junji Yamato, Yoshinao
DOI: 10.1109/ICME.2006.262677

ABSTRACT A novel method, based on a probabilistic model for conversation scene analysis, is proposed that can infer conversation structure from video sequences of face-to-face communication. Conversation structure represents the type of conversation, such as monologue or dialogue, and indicates who is talking or listening to whom. This study assumes that the gaze directions of participants provide cues for discerning the conversation structure and can be identified from head directions. To measure head directions, the proposed method employs a visual head tracker based on Sparse-Template Condensation. The conversation model is built on a dynamic Bayesian network and is used to estimate the conversation structure and gaze directions from observed head directions and utterances. Visual tracking is conventionally thought to be less reliable than contact sensors, but experiments confirm that the proposed method achieves performance almost comparable to a conventional sensor-based method in estimating gaze directions and conversation structure.
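
The abstract outlines a pipeline in which per-frame head directions and utterance on/off signals drive a dynamic Bayesian network that infers the current conversation regime (monologue or dialogue, and who is involved). The following is a minimal, hypothetical Python sketch of that kind of inference, assuming a single discrete regime variable with a sticky transition matrix and naive, independent observation models; the regime set, the probabilities, and the helpers forward_filter, utterance_likelihood, and head_likelihood are illustrative assumptions, not the model from the paper.

import itertools
import numpy as np

N = 4  # number of participants (assumed for the demo)

# Conversation regimes: a monologue by each participant, or a dialogue
# between each unordered pair of participants.
REGIMES = [("monologue", i) for i in range(N)] + \
          [("dialogue", pair) for pair in itertools.combinations(range(N), 2)]
R = len(REGIMES)

# Sticky transition matrix: regimes tend to persist from frame to frame.
STAY = 0.95
TRANS = np.full((R, R), (1.0 - STAY) / (R - 1))
np.fill_diagonal(TRANS, STAY)


def utterance_likelihood(regime, speaking):
    # P(speaking pattern | regime), with each participant treated independently.
    kind, who = regime
    active = {who} if kind == "monologue" else set(who)
    p = 1.0
    for i in range(N):
        p_speak = 0.6 if i in active else 0.05  # assumed speaking rates
        p *= p_speak if speaking[i] else (1.0 - p_speak)
    return p


def head_likelihood(regime, head_target):
    # P(head directions | regime); head_target[i] is the participant that
    # person i appears to face, or -1 if no clear target.
    kind, who = regime
    focus = {who} if kind == "monologue" else set(who)
    p = 1.0
    for i in range(N):
        if head_target[i] < 0:
            p *= 0.2                      # looking away / tracker lost
        elif head_target[i] in focus and head_target[i] != i:
            p *= 0.6                      # facing an active participant
        else:
            p *= 0.2 / max(N - 1, 1)      # facing someone else
    return p


def forward_filter(observations):
    # Forward (filtering) pass over the regime chain; returns the per-frame
    # posterior distribution over REGIMES.
    belief = np.full(R, 1.0 / R)
    posteriors = []
    for speaking, head_target in observations:
        belief = TRANS.T @ belief                           # predict step
        likes = np.array([utterance_likelihood(r, speaking) *
                          head_likelihood(r, head_target) for r in REGIMES])
        belief = belief * likes                             # update step
        belief /= belief.sum()
        posteriors.append(belief.copy())
    return posteriors


if __name__ == "__main__":
    # Two synthetic frames: participant 0 speaks while the others face 0.
    obs = [([True, False, False, False], [2, 0, 0, 0]),
           ([True, False, False, False], [1, 0, 0, 0])]
    post = forward_filter(obs)
    print("most likely regime:", REGIMES[int(np.argmax(post[-1]))])

A plain forward filtering pass is used here for simplicity; the paper's DBN also estimates gaze directions as latent variables, which this sketch folds into the head-direction likelihood.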