

Biomedical relation extraction with pre-trained language representations and minimal task-specific architecture

This paper presents our participation in the AGAC Track from the 2019 BioNLP Open Shared Tasks. We provide a solution for Task 3, which aims to extract "gene - function change - disease" triples, where "gene" and "disease" are mentions of particular genes and diseases respectively, and "function change" is one of four pre-defined relationship types. Our system extends BERT (Devlin et al., 2018), a state-of-the-art language model, which learns contextual language representations from a large unlabelled corpus and whose parameters can be fine-tuned to solve specific tasks with minimal additional architecture. We encode the pair of mentions and their textual context as two consecutive sequences in BERT, separated by a special symbol. We then use a single linear layer to classify their relationship into five classes (four pre-defined, as well as 'no relation'). Despite considerable class imbalance, our system significantly outperforms a random baseline while relying on an extremely simple setup with no specially engineered features.
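The setup described above can be sketched as follows. This is a minimal, stdlib-only illustration, not the authors' code: the exact mention formatting, the class names in `LABELS`, and the toy pooled vector are assumptions for demonstration. The real system would tokenize the paired input with BERT, take the pooled `[CLS]` representation, and fine-tune the linear head end-to-end.

```python
import math

# Five output classes: four pre-defined function-change relations plus
# 'no relation'. The four relation names here are illustrative placeholders.
LABELS = ["LOF", "GOF", "REG", "COM", "no_relation"]

def build_bert_input(gene: str, disease: str, context: str) -> str:
    """Encode the mention pair (sequence A) and their textual context
    (sequence B) as two consecutive BERT sequences, separated by the
    special [SEP] symbol, as the abstract describes."""
    return f"[CLS] {gene} {disease} [SEP] {context} [SEP]"

def classify(pooled, weights, bias):
    """Single linear layer over a pooled sentence vector, followed by
    softmax over the five classes. In the real system `pooled` would be
    BERT's [CLS] embedding and the parameters would be fine-tuned."""
    logits = [sum(w * x for w, x in zip(row, pooled)) + b
              for row, b in zip(weights, bias)]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Example with toy inputs and zero-initialized parameters (4-dim pooled vector).
text = build_bert_input("BRCA1", "breast cancer",
                        "Germline mutations in BRCA1 predispose to breast cancer.")
probs = classify([0.1, -0.2, 0.3, 0.0],
                 [[0.0] * 4 for _ in LABELS],
                 [0.0] * len(LABELS))
```

With untrained zero weights the softmax is uniform (0.2 per class); fine-tuning would shift probability mass toward the correct relation, and the class imbalance noted in the abstract would typically be handled at training time (e.g. via the loss or sampling).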

