Abstract: To address the limitations in contextual semantic understanding that medical question answering (Q&A) systems face when handling complex questions, this paper proposes a model based on BERT-BiGRU. A medical Q&A system is built with the pre-trained language model BERT and the Bidirectional Gated Recurrent Unit (BiGRU): BERT extracts the semantic features of the text, and BiGRU learns its sequential dependency information, thereby comprehensively representing the semantic and structural information of the text. Compared with the baseline method on the CBLUE medical Q&A dataset, the model raises the precision of the intent recognition task to 79.22%, the recall to 81.23%, and the F1 score (the harmonic mean of precision and recall) to 79.82%. The study shows that the model combining BERT and BiGRU better understands the semantic and structural information of medical questions and significantly improves the performance of medical Q&A systems.
Keywords: BERT; BiGRU; knowledge graph; intent recognition; medical Q&A system
CLC number: TP391.1
Document code: A

Fund project: 2021 Special Project in Key Fields for Regular Higher Education Institutions of Guangdong Province (New-Generation Information Technology), "Research on Public Transportation Decision Support Technology Based on Big Data" (2021ZDZX3020)
An Intelligent Medical Q&A System Based on the BERT-BiGRU Model
HUANG Yong, XI Juanxia, GUAN Chengbin
(School of Information Management and Engineering, Guangdong Neusoft University, Foshan 528000, China)
1427217679@qq.com; xijuanxia@nuit.edu.cn; guanchengbin@nuit.edu.cn
Abstract: This paper proposes a model based on BERT-BiGRU to address the limitations in contextual semantic understanding that medical question answering (Q&A) systems face when handling complex questions. The pre-trained language model BERT and a Bidirectional Gated Recurrent Unit (BiGRU) are used to build a medical Q&A system: BERT extracts the semantic features of the text, while BiGRU learns its sequential dependency information, thereby comprehensively representing the semantic and structural information of the text. Compared with the baseline method on the CBLUE medical Q&A dataset, the proposed model raises the precision of the intent recognition task to 79.22%, the recall to 81.23%, and the F1 score (the harmonic mean of precision and recall) to 79.82%. The results show that the model combining BERT and BiGRU better understands the semantic and structural information of medical questions and significantly improves the performance of medical Q&A systems.
Keywords: BERT; BiGRU; knowledge graph; intent recognition; medical Q&A system
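The architecture described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation, not the authors' released code: the class name BertBiGRUClassifier, the hidden size of 256, the bert-base-chinese checkpoint, the number of intent labels, and the example question are assumptions made here for illustration. It follows the pipeline named in the abstract: BERT supplies token-level semantic features, a bidirectional GRU encodes their sequential dependencies, and the concatenated final hidden states feed an intent classification head.

```python
# Minimal, hypothetical sketch of a BERT-BiGRU intent classifier (assumed details, not the paper's code).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class BertBiGRUClassifier(nn.Module):
    """BERT extracts semantic features; a BiGRU encodes order dependencies;
    a linear head predicts the question intent."""
    def __init__(self, num_intents: int, bert_name: str = "bert-base-chinese",
                 gru_hidden: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)            # semantic feature extractor
        self.bigru = nn.GRU(self.bert.config.hidden_size, gru_hidden,
                            batch_first=True, bidirectional=True)   # sequential-dependency encoder
        self.classifier = nn.Linear(2 * gru_hidden, num_intents)    # intent classification head

    def forward(self, input_ids, attention_mask):
        # Token-level contextual embeddings from BERT: (batch, seq_len, hidden)
        token_feats = self.bert(input_ids=input_ids,
                                attention_mask=attention_mask).last_hidden_state
        # h_n holds the final forward/backward hidden states: (2, batch, gru_hidden)
        _, h_n = self.bigru(token_feats)
        sentence_repr = torch.cat([h_n[0], h_n[1]], dim=-1)         # (batch, 2 * gru_hidden)
        return self.classifier(sentence_repr)                       # intent logits

# Example: score the intent of a single medical question (label set is assumed).
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertBiGRUClassifier(num_intents=10)
batch = tokenizer(["孩子发烧38度需要吃什么药?"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```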