Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs

Todd Kulesza, Simone Stumpf, Margaret M. Burnett
DOI: 10.1109/VLHCC.2010.15 | Citations: 3
Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a “program” that tells the computer what to do when future inputs arrive. Little research has explored how an end user can debug these programs when they make mistakes. We present our progress toward enabling end users to debug these learned programs via a Natural Programming methodology. We began with a formative study exploring how users reason about and correct a text-classification program. From the results, we derived and prototyped a concept based on “explanatory debugging”, then empirically evaluated it. Our results contribute methods for exposing a learned program's logic to end users and for eliciting user corrections to improve the program's predictions.
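The abstract describes eliciting user corrections, such as word-level feedback, to improve a learned text classifier's predictions. The sketch below illustrates one way such feedback could work, under the assumption of a simple multinomial naive Bayes classifier: a word the user flags as indicative of a class receives extra pseudo-count weight for that class. The FeedbackNaiveBayes class, its boost parameter, and the toy data are hypothetical illustrations, not the interface or algorithm evaluated in the paper.

```python
# Hypothetical sketch of word-level user feedback for a text classifier.
# Not the paper's system; an illustration of the general idea only.
from collections import Counter, defaultdict
import math


class FeedbackNaiveBayes:
    """Multinomial naive Bayes whose word counts can be nudged by user corrections."""

    def __init__(self, boost=5.0, smoothing=1.0):
        self.boost = boost          # extra pseudo-count added per word correction
        self.smoothing = smoothing  # Laplace smoothing constant
        self.word_counts = defaultdict(Counter)  # class label -> word -> count
        self.class_counts = Counter()            # class label -> document count
        self.vocab = set()

    def train(self, documents):
        """documents: iterable of (list_of_words, class_label) pairs."""
        for words, label in documents:
            self.class_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def mark_word(self, word, label):
        """User correction: 'this word is indicative of this class'."""
        self.word_counts[label][word] += self.boost
        self.vocab.add(word)

    def predict(self, words):
        """Return the most probable class label for a bag of words."""
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, -math.inf
        for label, doc_count in self.class_counts.items():
            score = math.log(doc_count / total_docs)  # log prior
            class_total = sum(self.word_counts[label].values())
            denom = class_total + self.smoothing * len(self.vocab)
            for w in words:
                num = self.word_counts[label][w] + self.smoothing
                score += math.log(num / denom)        # log likelihood per word
            if score > best_score:
                best_label, best_score = label, score
        return best_label


clf = FeedbackNaiveBayes()
clf.train([("budget revenue profit".split(), "finance"),
           ("match goal score".split(), "sports")])
clf.mark_word("quarterly", "finance")  # word-level correction, no new document needed
print(clf.predict("quarterly profit".split()))  # -> finance
```

In this sketch a single correction is worth `boost` virtual occurrences of the flagged word, so a handful of word-level corrections can shift predictions without the user labeling any additional documents.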
Citation Context
    • ...For example, rather than labeling entire documents, an end user could point out which words (features) in the document are most indicative of certain class labels, such as in our formative research’s user interface shown in Figure 1 [8], which allowed HCI researchers to point out words that were predictive of that transcript segment’s label...

    Weng-Keen Wong et al. End-user feature labeling: a locally-weighted regression approach

    • ...The Natural Programming methodology [17] ‐ which investigates users’ existing approaches to completing a task and organising information, observing users without influencing how they think the task should be done ‐ has been used to design programming languages and systems, including interfaces that adapt themselves to user preferences [10]...

    Simone Stumpf et al. When users generate music playlists: When words leave off, music begin...

    • ...Other kinds of emerging assistants are moving toward helping with research itself, such as qualitatively “coding” (categorizing) natural language text [18]...
    • ...As a basis for creating explanations, researchers have also investigated the types of information users want before assessing the trustworthiness of an intelligent agent [9, 18]...
    • ...Recent work by Lim and Dey has resulted in a toolkit for applications to generate explanations for popular machine learning systems [21], and a few systems add debugging capabilities to explanations [17, 18]...
    • ...This is a labor-intensive activity requiring days to weeks of time—but what if an assistant could do part of this work (e.g., [18])? For example, suppose ethnographer Adam has an intelligent assistant that learns to code the way Adam does; the assistant could then finish coding Adam’s transcripts...
    • ...Thus, rather than attempting to replace the interactive debugging approaches emerging for intelligent assistants (e.g., [17, 18, 22, 30]), WYSIWYT/ML’s bug-finding complements them...

    Todd Kulesza et al. Where Are My Intelligent Assistant’s Mistakes? A Systematic Testing Ap...
