Acoustic and visual signal based context awareness system for mobile application

Woo-Hyun Choi, Seung-Il Kim, Min-Seok Keum, Wa… · DOI: 10.1109/ICCE.2011.5722777 · Citations: 1

In this paper, an acoustic and visual signal based context awareness system is proposed for a mobile application. The proposed system senses and determines, in real time, user contextual information, such as where the user is or what the user is doing, by processing signals from the microphone and the camera embedded in the mobile device. An initial implementation of the algorithms on a smartphone demonstrated the effectiveness of the proposed system.

I. INTRODUCTION

Mobile devices such as smartphones play an important role in our daily lives. Not only do they work as telephones; their role has now expanded to taking pictures, sending and receiving messages, playing music and videos, keeping appointments, and helping us navigate, to name a few. Nevertheless, significant improvements in the intelligence, capabilities, and convenience of these devices are still expected in the future. One key area of these expected future capabilities is Context Awareness (CA). A CA-capable system can sense and recognize user context, such as user activities and the surrounding environment, and provide context-relevant information and services accordingly. In earlier CA works (1)-(3), only acoustic signals were considered for determining the context. Although these studies reported good CA performance, there are obvious limitations in the extent of context that acoustics-only CA can perceive and recognize. Thus, a more general CA system employing both acoustic and visual signals is proposed, and findings from an initial implementation on a smartphone are presented.
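The abstract states that context is determined from both the microphone and the camera, but does not spell out here how the two modalities are combined. The sketch below is a hypothetical late-fusion step, assuming each modality produces per-context log-likelihood scores; the context labels, the equal weighting, and the name fuse_context_scores are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Hypothetical context labels; the paper does not list its exact classes here.
    CONTEXTS = ["office", "street", "vehicle", "restaurant"]

    def fuse_context_scores(acoustic_log_probs, visual_log_probs, w_acoustic=0.5):
        """Weighted late fusion of per-context scores from the two modalities.

        Both inputs are arrays of length len(CONTEXTS) holding each modality's
        log-likelihood for every candidate context. The equal default weighting
        is an illustrative assumption, not a value from the paper.
        """
        fused = (w_acoustic * np.asarray(acoustic_log_probs)
                 + (1.0 - w_acoustic) * np.asarray(visual_log_probs))
        return CONTEXTS[int(np.argmax(fused))]

    # Example: acoustic evidence strongly favours "vehicle"; visual evidence agrees weakly.
    print(fuse_context_scores([-12.0, -9.5, -7.1, -11.3],
                              [-10.2, -9.8, -9.0, -10.5]))  # -> vehicle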
Published in 2011.
    • ...In order to implement context awareness using acoustic information, judicious feature extraction that better reflects the attributes of the context for event recognition becomes important [1]-[13]. In this paper, the MFCC feature is employed, which is used not only in speech recognition but has also been applied in multiple environmental-sound recognition research activities [1]-[10]... (see the MFCC extraction sketch after this list)
    • ...Then we added vehicle noise [13] to the acoustics in order to simulate the vehicle environment... (see the noise-mixing sketch after this list)
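The first citation context above names MFCC features as the acoustic front end. A minimal extraction sketch follows, assuming Python with the librosa library; the 16 kHz sample rate, 13 coefficients, and 25 ms / 10 ms framing are common defaults for environmental-sound work, not values reported in the paper.

    import librosa

    def extract_mfcc(wav_path, sr=16000, n_mfcc=13):
        """Frame-level MFCC features for one audio clip.

        The 16 kHz rate, 13 coefficients, and 25 ms / 10 ms framing are common
        defaults for environmental-sound recognition, not values from the paper.
        """
        y, sr = librosa.load(wav_path, sr=sr, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                    n_fft=int(0.025 * sr),
                                    hop_length=int(0.010 * sr))
        return mfcc.T  # shape: (num_frames, n_mfcc)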

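The second citation context mentions adding vehicle noise to the audio to simulate a vehicle environment. The sketch below mixes a noise recording into a clean signal at a chosen SNR, a standard way to simulate such conditions; the energy-based scaling and the name mix_at_snr are assumptions, not necessarily the paper's exact procedure.

    import numpy as np

    def mix_at_snr(clean, noise, snr_db):
        """Add a noise recording to a clean signal at a target SNR (in dB).

        The noise is looped or trimmed to the clean signal's length, then scaled
        so the clean-to-noise energy ratio matches the requested SNR; this is a
        standard simulation recipe, not necessarily the paper's exact procedure.
        """
        noise = np.resize(noise, clean.shape)          # loop/trim noise to length
        p_clean = np.mean(clean ** 2)
        p_noise = np.mean(noise ** 2) + 1e-12          # avoid division by zero
        scale = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
        return clean + scale * noise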
    Kwangyoun Kim et al. Discriminative Training of GMM via Log-Likelihood Ratio for Abnormal A...
