
FLERT: Document-Level Features for Named Entity Recognition



Current state-of-the-art approaches for named entity recognition (NER) using BERT-style transformers typically use one of two different approaches: (1) The first fine-tunes the transformer itself on the NER task and adds only a simple linear layer for word-level predictions. (2) The second uses the transformer only to provide features to a standard LSTM-CRF sequence labeling architecture and thus performs no fine-tuning. In this paper, we perform a comparative analysis of both approaches in a variety of settings currently considered in the literature. In particular, we evaluate how well they work when document-level features are leveraged. Our evaluation on the classic CoNLL benchmark datasets for 4 languages shows that document-level features significantly improve NER quality and that fine-tuning generally outperforms the feature-based approaches. We present recommendations for parameters as well as several new state-of-the-art numbers. Our approach is integrated into the Flair framework to facilitate reproduction of our experiments.
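Since the abstract states that the approach is integrated into the Flair framework, the following minimal sketch illustrates what the fine-tuning setup (1) with document-level features might look like in Flair. It assumes a recent Flair release (roughly 0.8 or later) in which TransformerWordEmbeddings exposes a use_context flag for document-level context and ModelTrainer provides a fine_tune method; the transformer choice, hyperparameters, and output path are illustrative rather than the authors' exact configuration.

```python
# Sketch (assumptions noted above): fine-tuning a transformer for NER in Flair
# with document-level context enabled, in the spirit of FLERT.
from flair.datasets import CONLL_03
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# The CoNLL-03 data must be available locally (it is not freely redistributable).
corpus = CONLL_03()
label_dict = corpus.make_label_dictionary(label_type="ner")

# Document-level features: use_context lets the transformer see a window of
# surrounding sentences from the same document when encoding each sentence.
embeddings = TransformerWordEmbeddings(
    model="xlm-roberta-large",   # illustrative choice of transformer
    layers="-1",                 # use only the final layer
    subtoken_pooling="first",    # represent each word by its first subtoken
    fine_tune=True,              # approach (1): fine-tune the transformer itself
    use_context=True,            # enable document-level features
)

# A simple linear prediction head on top: no RNN, no CRF, no reprojection.
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# Fine-tune with a small learning rate and batch size (values illustrative).
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/ner-flert-sketch",
    learning_rate=5.0e-6,
    mini_batch_size=4,
    max_epochs=20,
)
```

The feature-based alternative (2) described in the abstract would instead set fine_tune=False and feed the frozen embeddings into an LSTM-CRF tagger (use_rnn=True, use_crf=True), trained with Flair's standard training loop rather than fine_tune.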

