Prosodic features which cue back-channel responses in English and Japanese

Nigel Ward and Wataru Tsukahara, Journal of Pragmatics. DOI: 10.1016/S0378-2166(99)00109-5. (Citations: 111)
Back-channel feedback, responses such as uh-huh from a listener, is a pervasive feature of conversation. It has long been thought that the production of back-channel feedback depends to a large extent on the actions of the other conversation partner, not just on the volition of the one who produces it. In particular, prosodic cues from the speaker have long been thought to play a role, but have so far eluded identification. We have earlier suggested that an important prosodic cue involved, in both English and Japanese, is a region of low pitch late in an utterance (Ward, 1996). This paper discusses issues in the definition of back-channel feedback, presents evidence for our claim, surveys other factors which elicit or inhibit back-channel responses, and mentions a few related phenomena and theoretical issues.
Journal: Journal of Pragmatics, vol. 32, no. 8, pp. 1177-1207, 2000
    • ...Ward and Tsukahara (2000) describe, in both Japanese and American English, a region of low pitch lasting at least 110 msec which may function as a prosodic cue inviting the realization of a backchannel response from the interlocutor...
    • ...Recent advances in that research topic (Ward and Tsukahara 2000; Cathcart, Carletta, and Klein 2003; Gravano and Hirschberg 2009a) have encouraged research on ways to equip systems with the ability to signal to the user that the system is still listening (Maatman, Gratch, and Marsella 2005; Bevacqua, Mancini, and Pelachaud 2008; Morency, de Kok, and Gratch 2008)— for example, when the user is asked to enter large amounts of information...

    Agustín Gravano et al. Affirmative Cue Words in Task-Oriented Dialogue

    • ...Verbal acknowledgers (short vocalizations such as mm-hmm and uh-huh that are uttered by a listener during a conversation) account for approximately 20% of the number of utterances in a conversation (Jurafsky et al, 1997) and appear on average four times a minute (Ward & Tsukahara, 2000)...

    Megan B. Battles et al. The impact of conversational acknowledgers on perceptions of psychothe...

    • ...Contained within this state is the backchannel routine, which generates short utterances while the user is speaking [11]...

    Jing Guang Han et al. Collecting multi-modal data of human-robot interaction

    • ...These include recognizing and generating backchannel or jump-in points [39], turn-taking and floor control signals, postural mimicry [14] and emotional feedback [19,1]...
    • ...Ward and Tsukahara [39] propose a unimodal approach where backchannels are associated with a region of low pitch lasting 110ms during speech...
    • ...We included a delay parameter in our dictionary since listener backchannels can sometimes happen later, after speaker features (e.g., Ward and Tsukahara [39])...
    • ...The same observation was made by Ward and Tsukahara [39]...
    • ...The following prosodic features were used (based on [39]):...
    • ...Algorithm 1 Rule Based Approach of Ward and Tsukahara [39]...
    • ...Tsukahara [39] since this method has been employed effectively in virtual human systems and demonstrates clear subjective and behavioral improvements for human/virtual human interaction [14]...
    • ...We also compared our model with a "random" backchannel generator as defined in [39]: randomly generate a backchannel cue every time conditions P3, P4 and P5 are true (see Algorithm 1). The frequency of the random predictions was 12...
    • ...Table 1 Comparison of our prediction model with previously published rule-based system of Ward and Tsukahara [39]...
    • ...A second result from this table is that the performance of the rule-based and random approaches on our multimodal dataset are similar to the results previously published by Ward and Tsukahara [39] on a unimodal dataset predicting audio back-channel feedback...
    • ...Our prediction model outperforms both the random approach and the rule-based approach of Ward and Tsukahara [39]...

    Louis-Philippe Morency et al. A probabilistic multimodal approach for predicting listener backchanne...
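The rule-based approach cited above (low-pitch region of at least 110 msec, plus the numbered conditions P3-P5 on prior speech and cue spacing) can be sketched as a simple frame-by-frame predictor. This is only an illustrative sketch, not the authors' implementation: the function name, parameter names, the 10 ms frame size, and the exact percentile and spacing values are our assumptions, paraphrasing the rule as it is usually summarized.

```python
def predict_backchannel_times(pitch, frame_ms=10,
                              low_pct=26, low_region_ms=110,
                              min_speech_ms=700, refractory_ms=800,
                              lag_ms=700):
    """Return times (ms) at which to emit a back-channel cue.

    Input: frame-level pitch values in Hz, one per 10 ms frame,
    with 0 meaning unvoiced (all of this is an assumed encoding).
    Paraphrased conditions:
      P1: pitch below roughly the 26th percentile of voiced pitch,
      P2: sustained for at least 110 ms,
      P3: after at least 700 ms of speech,
      P4: no back-channel output in the preceding 800 ms,
      P5: the cue is emitted 700 ms after the region is detected.
    """
    voiced = [p for p in pitch if p > 0]
    if not voiced:
        return []
    # Threshold at the 26th-percentile voiced pitch value (P1).
    threshold = sorted(voiced)[int(len(voiced) * low_pct / 100)]

    cues = []
    low_run = 0          # consecutive ms of low pitch (P1/P2)
    speech_ms = 0        # cumulative ms of speech so far (P3)
    last_cue = -10**9    # scheduled time of the last cue (P4)
    for i, p in enumerate(pitch):
        t = i * frame_ms
        if p > 0:
            speech_ms += frame_ms
        low_run = low_run + frame_ms if 0 < p < threshold else 0
        if (low_run >= low_region_ms and speech_ms >= min_speech_ms
                and t + lag_ms - last_cue >= refractory_ms):
            last_cue = t + lag_ms   # P5: respond after the lag
            cues.append(last_cue)
            low_run = 0             # require a fresh low region
    return cues
```

For example, a second of high pitch followed by a sustained low-pitch stretch yields a single cue shortly after the low region has lasted 110 ms. A multimodal model such as the one cited above would replace this hand-written rule with learned features, but the rule remains a useful baseline.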
