Model:
flair/ner-english-ontonotes-large
This is the large 18-class NER model for English that ships with Flair.
F1-Score: 90.93 (Ontonotes)
Predicts 18 tags:
| tag | meaning | 
|---|---|
| CARDINAL | cardinal value | 
| DATE | date value | 
| EVENT | event name | 
| FAC | building name | 
| GPE | geo-political entity | 
| LANGUAGE | language name | 
| LAW | law name | 
| LOC | location name | 
| MONEY | money name | 
| NORP | affiliation | 
| ORDINAL | ordinal value | 
| ORG | organization name | 
| PERCENT | percent value | 
| PERSON | person name | 
| PRODUCT | product name | 
| QUANTITY | quantity value | 
| TIME | time value | 
| WORK_OF_ART | name of work of art | 
Based on document-level XLM-R embeddings and FLERT.
Requires: Flair (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")

# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```
This yields the following output:
```
Span [2,3]: "September 1st"   [− Labels: DATE (1.0)]
Span [4]: "George"   [− Labels: PERSON (1.0)]
Span [6,7]: "1 dollar"   [− Labels: MONEY (1.0)]
Span [10,11,12]: "Game of Thrones"   [− Labels: WORK_OF_ART (1.0)]
```
So, the entities "September 1st" (labeled as a date), "George" (labeled as a person), "1 dollar" (labeled as money) and "Game of Thrones" (labeled as a work of art) are found in the sentence "On September 1st George won 1 dollar while watching Game of Thrones".
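
Beyond printing the spans, the predicted tag and its confidence can also be read off programmatically. The snippet below is a minimal sketch assuming a recent Flair release in which spans expose `get_label(...)` returning a label with `.value` and `.score` attributes:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")

sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
tagger.predict(sentence)

# print the text, predicted tag and confidence of each span
for entity in sentence.get_spans('ner'):
    label = entity.get_label('ner')  # assumes the get_label API of recent Flair versions
    print(f"{entity.text}\t{label.value}\t{label.score:.2f}")
```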
The following Flair script was used to train this model:
```python
import torch

from flair.data import Corpus
from flair.datasets import ColumnCorpus

# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus: Corpus = ColumnCorpus(
    "resources/tasks/onto-ner",
    column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"},
    tag_to_bioes="ner",
)

# 2. what tag do we want to predict?
tag_type = 'ner'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings

embeddings = TransformerWordEmbeddings(
    model='xlm-roberta-large',
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=True,
)

# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type='ner',
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)

# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR

trainer.train('resources/taggers/ner-english-ontonotes-large',
              learning_rate=5.0e-6,
              mini_batch_size=4,
              mini_batch_chunk_size=1,
              max_epochs=20,
              scheduler=OneCycleLR,
              embeddings_storage_mode='none',
              weight_decay=0.,
              )
```
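
After training, the fine-tuned model is written to the output directory passed to `trainer.train(...)`. As a usage sketch (assuming Flair's default convention of saving a `final-model.pt` file in that directory), it can then be loaded and applied like this:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the locally trained model from the training output directory
# (path assumes the default `final-model.pt` file name used by Flair's trainer)
tagger = SequenceTagger.load('resources/taggers/ner-english-ontonotes-large/final-model.pt')

sentence = Sentence("Barack Obama visited Berlin in 2013.")
tagger.predict(sentence)

for entity in sentence.get_spans('ner'):
    print(entity)
```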
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
    title={FLERT: Document-Level Features for Named Entity Recognition},
    author={Stefan Schweter and Alan Akbik},
    year={2020},
    eprint={2011.06993},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
The Flair issue tracker is available here.