Presentation Type: Poster Session
Objective: The goal of this project is to enhance a dental charting system driven by speech recognition and natural language processing that supports automatic, electronic data acquisition at the point of care. The system charts observations reported in single utterances of a dictated dental exam but may benefit from modeling relationships across utterances (i.e., discourse). We designed a model that enables our system to 1) identify utterances containing chartable information, 2) distinguish observations from treatment plans, and 3) merge information found across multiple utterances about a single entity.
Methods: In this pilot study, we identified a limited set of dialog acts (expressions conveying the speaker's intent, e.g., command, statement, question, and irrelevant) from the literature [1] and from dictated dental exams (n=7) to determine whether an utterance contains a dental condition and whether it should be charted as an observation or a treatment plan. We also marked two types of anaphora (expressions whose full interpretation depends on another expression): coreference and correction.
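The labeling scheme above can be illustrated with a minimal rule-based sketch. The labels match the four pilot dialog acts, but the cue words, function names, and rules below are hypothetical assumptions for illustration, not the study's actual model:

```python
# Illustrative sketch of the pilot dialog-act scheme; cue words and
# rules are hypothetical, not the study's actual classifier.

DIALOG_ACTS = ("command", "statement", "question", "irrelevant")

def label_dialog_act(utterance: str) -> str:
    """Assign one of the four pilot dialog acts to a dictated utterance."""
    text = utterance.lower().strip()
    if text.endswith("?"):
        return "question"
    # Commands typically open with an imperative charting verb (assumed cues).
    if text.split()[0] in {"chart", "plan", "schedule", "correct"}:
        return "command"
    # Statements mention a dental condition or tooth reference (assumed cues).
    if any(cue in text for cue in ("caries", "crown", "tooth", "occlusal", "amalgam")):
        return "statement"
    return "irrelevant"

def is_chartable(utterance: str) -> bool:
    """Statements (observations) and commands (treatment plans) are charted."""
    return label_dialog_act(utterance) in {"statement", "command"}
```

In this sketch, a statement such as "tooth three occlusal caries" would be charted as an observation, while "chart an MOD amalgam on tooth fourteen" would be charted as part of a treatment plan.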
Results: Most utterances (207/233, 88.8%) were statements (82.8%) or commands (6.0%) containing relevant dental conditions and procedures. Of the 55 anaphoric pairs, most were coreferential (90.9%) rather than corrective (9.1%).
Conclusion: We learned that most utterances 1) contain chartable information, 2) should be charted as observations, and 3) may require coreference resolution to chart all aspects of the observation. We are annotating and evaluating a larger corpus of exams to identify additional dialog acts and anaphora occurring in dental examinations. This research is supported by NLM/NIDCR Fellowship 5T15LM007059.
Keywords: Computers and Health services research