Abstract:
Recent mobile and automated audiometry technologies enable non-experts to deliver hearing tests. However, many of these users are not trained to interpret audiograms. In this work, we outline the development of an intelligent audiogram classification system. More specifically, we describe how a training dataset was collected, the development of a classification system based on supervised learning, and other tools designed for the analysis of audiograms in large databases. Using the Rapid Audiogram Annotation Environment developed in this study, we collected hundreds of audiogram annotations from three licensed audiologists. Our analysis shows that inter-rater reliability is substantial or better for the classification of hearing loss configuration, symmetry, and severity. Our classification system achieves performance comparable to the state of the art while being more flexible. Finally, we demonstrate qualitatively that Gaussian mixture models are useful for detecting potential reliability issues in audiograms.
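
The abstract mentions using Gaussian mixture models to flag potential reliability issues. As a minimal sketch of this general idea (not the authors' actual implementation), audiograms whose threshold vectors fall in low-density regions of a fitted mixture can be flagged for manual review; the frequencies, component count, and percentile cutoff below are illustrative assumptions.

```python
# Minimal sketch: flag audiograms with low likelihood under a fitted GMM.
# All parameters (frequencies, n_components, cutoff) are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Each row: air-conduction thresholds (dB HL) at 250, 500, 1000, 2000, 4000, 8000 Hz.
audiograms = rng.normal(loc=40, scale=15, size=(500, 6))

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(audiograms)

log_likelihood = gmm.score_samples(audiograms)  # per-audiogram log-density
cutoff = np.percentile(log_likelihood, 1)       # flag the least likely 1%
flagged = np.where(log_likelihood < cutoff)[0]
print(f"Flagged {len(flagged)} audiograms for manual review: {flagged}")
```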