513 A Speech-to-Chart Prototype for Automatically Charting Hard Tissue Exams

Thursday, March 4, 2010: 3:30 p.m. - 4:45 p.m.
Location: Exhibit Hall D (Walter E. Washington Convention Center)
Presentation Type: Poster Session
J. IRWIN¹, L. CHRISTENSEN¹, H. HARKEMA¹, T. SCHLEYER¹, H. SPALLEK¹, P. HAUG², B. CHAPMAN¹, and W. CHAPMAN¹, ¹University of Pittsburgh, Pittsburgh, PA, ²University of Utah, Biomedical Informatics, Salt Lake City, UT
Typically, when using clinical practice management systems (PMSs), dentists perform data entry by using an assistant as a transcriptionist. Speech recognition is one method that could free up the assistant and allow the dentist to enter data directly without infection control concerns. Existing speech interfaces in PMSs, however, are complex and cumbersome to use. There is therefore a critical need for a usable natural language interface for clinical data entry.

Objective: To develop and evaluate a speech-to-chart prototype for charting spoken dental exams.

Methods: We developed and evaluated a speech-to-chart prototype consisting of four components: a speech recognizer to transcribe digital audio to text; a post-processor to correct common transcription errors; a natural language processing (NLP) application to extract concepts for charting; and a graphical chart generator. We evaluated the percent word accuracy of the speech recognizer and the post-processor, and then performed a summative evaluation of the entire system. The prototype charted 12 hard tissue exams; we calculated its accuracy by comparing the charted exams to reference standard exams charted by two dentists.

Results: Our speech-to-chart prototype supports natural dictation without the need for structured commands. The average time to chart a single hard tissue finding was 7.3 seconds. The system achieved an average accuracy of 80% on manually transcribed exams, 48% on automatically transcribed exams, and 54% on automatically transcribed exams corrected by the post-processor. Incorrect identification of surfaces was one of the largest sources of error.

Conclusion: We successfully created a speech-to-chart prototype that charts naturally spoken exams. Future work on the prototype includes enhancing speech recognition, improving the NLP system with more training and a better discourse processor, and improving overall usability. This research is supported by NIDCR/NLM grants NLM 5T15 LM00705921 and 5R21DE018158-02.
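As a rough illustration of the four-stage architecture described above, the Python sketch below wires a speech recognizer, post-processor, NLP concept extractor, and chart generator into a single pipeline, plus a simple scoring helper for the reference-standard comparison. All names (Finding, recognize_speech, post_process, extract_findings, generate_chart, finding_accuracy), the example corrections, and the scoring scheme are hypothetical; the abstract does not specify the actual implementation, speech engine, or exact accuracy metric.

```python
"""Minimal sketch of the speech-to-chart pipeline; all APIs here are assumed."""
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """A single hard tissue finding extracted from dictated text."""
    tooth: str       # e.g., "3"
    surfaces: str    # e.g., "MOD"
    condition: str   # e.g., "amalgam"


def recognize_speech(audio_path: str) -> str:
    """Stage 1 (assumed interface): transcribe digital audio to raw text."""
    raise NotImplementedError("Wrap a speech recognition engine here.")


def post_process(raw_text: str) -> str:
    """Stage 2 (assumed interface): correct common transcription errors."""
    corrections = {" to ": " two "}  # illustrative correction only
    text = raw_text.lower()
    for wrong, right in corrections.items():
        text = text.replace(wrong, right)
    return text


def extract_findings(text: str) -> list[Finding]:
    """Stage 3 (assumed interface): NLP step mapping phrases to chartable concepts."""
    raise NotImplementedError("Plug in the concept-extraction component here.")


def generate_chart(findings: list[Finding]) -> None:
    """Stage 4 (assumed interface): render findings on a graphical tooth chart."""
    for f in findings:
        print(f"Tooth {f.tooth} {f.surfaces}: {f.condition}")


def chart_exam(audio_path: str) -> list[Finding]:
    """Run the full speech-to-chart pipeline on one dictated exam."""
    raw = recognize_speech(audio_path)
    cleaned = post_process(raw)
    findings = extract_findings(cleaned)
    generate_chart(findings)
    return findings


def finding_accuracy(charted: list[Finding], reference: list[Finding]) -> float:
    """Hypothetical scoring: fraction of reference-standard findings charted exactly."""
    ref, got = set(reference), set(charted)
    return len(ref & got) / len(ref) if ref else 1.0
```

The stubbed stages would be replaced by the actual recognizer, post-processor, and NLP components; the sketch is only meant to show how the four components hand data to one another.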

Keywords: Computers, Dental Informatics, Evaluation, Interfaces and Technology