References:
[1] 吴宗友, 白昆龙, 杨林蕊, et al. Survey of text mining for electronic medical records[J]. 计算机研究与发展 (Journal of Computer Research and Development), 2021, 58(3): 513-527.
[2] ZHOU J N, WANG J K, LIU G S. Multiple character embeddings for Chinese word segmentation[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. Florence: Association for Computational Linguistics, 2019: 210-216.
[3] DONG C H, ZHANG J J, ZONG C Q, et al. Character-based LSTM-CRF with radical-level features for Chinese named entity recognition[C]//Natural Language Understanding and Intelligent Applications. Cham: Springer, 2016: 1-12.
[4] MA R T, PENG M L, ZHANG Q, et al. Simplify the usage of lexicon in Chinese NER[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: Association for Computational Linguistics, 2020: 5951-5960.
[5] HUANG Z, XU W, YU K. Bidirectional LSTM-CRF models for sequence tagging[EB/OL]. arXiv: 1508.01991, 2015.
[6] 宦娟, 李慧, 李明宝, et al. GBDT-LSTM prediction of dissolved oxygen in aquaculture based on cross-validated grid search[J]. 常州大学学报(自然科学版) (Journal of Changzhou University, Natural Science Edition), 2021, 33(4): 63-71.
[7] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[8] LAFFERTY J, MCCALLUM A, PEREIRA F C N. Conditional random fields: Probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the Eighteenth International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 2001: 282-289.
[9] XU K, ZHOU Z F, HAO T Y, et al. A bidirectional LSTM and conditional random fields approach to medical named entity recognition[C]//Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2017. Cham: Springer International Publishing, 2017: 355-365.
[10] WANG Q, ZHOU Y M, RUAN T, et al. Incorporating dictionaries into deep neural networks for the Chinese clinical named entity recognition[J]. Journal of Biomedical Informatics, 2019, 92: 103133.
[11] 林景栋, 吴欣怡, 柴毅, et al. A survey of structural optimization of convolutional neural networks[J]. 自动化学报 (Acta Automatica Sinica), 2020, 46(1): 24-37.
[12] STRUBELL E, VERGA P, BELANGER D, et al. Fast and accurate entity recognition with iterated dilated convolutions[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2017: 2670-2680.
[13] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2013: 3111-3119.
[14] ZHANG X, ZHAO J B, LECUN Y. Character-level convolutional networks for text classification[J]. Advances in Neural Information Processing Systems, 2015: 649-657.
[15] GUI T, MA R T, ZHANG Q, et al. CNN-based Chinese NER with lexicon rethinking[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao: IJCAI, 2019: 4982-4988.
[16] 王毅, 戴国洪, 王克胜. A survey of deep learning applications in predictive maintenance[J]. 常州大学学报(自然科学版) (Journal of Changzhou University, Natural Science Edition), 2019, 31(3): 1-22.
[17] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[EB/OL]. arXiv: 1409.0473, 2014.
[18] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2017: 5998-6008.