Comparing high dimensional word embeddings trained on medical text to bag-of-words for predicting medical codes
Citation
Yogarajan, V., Gouk, H., Smith, T. C., Mayo, M., & Pfahringer, B. (2020). Comparing high dimensional word embeddings trained on medical text to bag-of-words for predicting medical codes. In P. Sitek, M. Petranik, M. Krótkiewicz, & C. Srinilta (Eds.), Proceedings of the 12th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2020), LNCS 12033 (pp. 97–108). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-41964-6_9
Permanent Research Commons link: https://hdl.handle.net/10289/13591
Abstract
Word embeddings are a useful tool for extracting knowledge from the free-form text contained in electronic health records, but it has become commonplace to train such word embeddings on data that do not accurately reflect how language is used in a healthcare context. We use prediction of medical codes as an example application to compare the accuracy of word embeddings trained on health corpora to those trained on more general collections of text. It is shown that both an increase in embedding dimensionality and an increase in the volume of health-related training data improve prediction accuracy. We also present a comparison to the traditional bag-of-words feature representation, demonstrating that in many cases this conceptually simple method for representing text yields accuracy superior to that of word embeddings.
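To make the comparison concrete, the sketch below contrasts the two feature representations on toy data: sparse bag-of-words counts versus notes represented as averaged word vectors, each fed to a one-vs-rest logistic-regression code predictor. The clinical notes, the ICD-style labels, the random embedding table, and the choice of logistic regression are illustrative assumptions, not the paper's actual data, embeddings, or models.

```python
# Illustrative sketch only (hypothetical data and models, not the paper's
# pipeline): contrast bag-of-words counts with averaged word embeddings as
# features for one-vs-rest logistic regression predicting medical codes.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

notes = [
    "patient admitted with acute chest pain and hypertension",
    "chronic kidney disease follow up, renal function declining",
    "chest pain ruled out, discharged with kidney disease monitoring",
]
codes = [["I10", "R07"], ["N18"], ["R07", "N18"]]  # hypothetical ICD-style labels
y = MultiLabelBinarizer().fit_transform(codes)

# Representation 1: sparse bag-of-words token counts.
bow = CountVectorizer().fit_transform(notes)

# Representation 2: each note as the average of its word vectors.
# A random lookup table stands in for embeddings trained on health text;
# the paper varies the embedding dimensionality and the training corpus.
rng = np.random.default_rng(0)
dim = 50
emb = {w: rng.normal(size=dim) for n in notes for w in n.split()}
avg = np.array([np.mean([emb[w] for w in n.split()], axis=0) for n in notes])

for name, X in (("bag-of-words", bow), ("averaged embeddings", avg)):
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
    print(name, "subset accuracy on training data:", clf.score(X, y))
```

In practice the embedding table would be loaded from vectors pre-trained on either general or health-related corpora, which is the comparison the abstract describes.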
Date
2020
Publisher
Springer
Rights
© Springer Nature Switzerland AG. This is the author's accepted version. The final publication is available at Springer via dx.doi.org/10.1007/978-3-030-41964-6_9