Named entity recognition (NER) is the task of tagging entities in text with their corresponding type. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens.
Example:
Mark | Watney | visited | Mars |
---|---|---|---|
B-PER | I-PER | O | B-LOC |
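To make the notation concrete, here is a minimal Python sketch (the `bio_to_spans` helper is illustrative, not part of any standard toolkit) that decodes a BIO tag sequence into typed entity spans:

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (type, start, end) spans, end exclusive."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype != tag[2:]):
            if etype is not None:          # close the entity in progress
                spans.append((etype, start, i))
            start, etype = i, tag[2:]      # open a new entity
        elif tag == "O" and etype is not None:
            spans.append((etype, start, i))
            start, etype = None, None
    if etype is not None:                  # entity running to the end
        spans.append((etype, start, len(tags)))
    return spans

tokens = ["Mark", "Watney", "visited", "Mars"]
spans = bio_to_spans(["B-PER", "I-PER", "O", "B-LOC"])
print([(t, " ".join(tokens[s:e])) for t, s, e in spans])
# [('PER', 'Mark Watney'), ('LOC', 'Mars')]
```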
The CoNLL 2003 NER task consists of newswire text from the Reuters RCV1 corpus, tagged with four entity types (PER, LOC, ORG, MISC). Models are evaluated on span-based F1 on the test set.
CoNLL++ is a cleaner version of the CoNLL 2003 NER task, in which about 5% of the test-set instances have been corrected for mislabelling. The training set is left untouched. Models are evaluated on span-based F1 on the test set. ♦ marks models that used both the train and development splits for training.
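Span-based F1 gives credit only when both the boundaries and the type of a predicted entity exactly match the gold annotation; partial overlaps count as errors. A minimal sketch of the idea (the official conlleval script remains the reference implementation):

```python
def span_f1(gold_spans, pred_spans):
    """Micro span F1: a prediction scores only on an exact
    (type, start, end) match with a gold span."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

gold = [("PER", 0, 2), ("LOC", 3, 4)]
pred = [("PER", 0, 2), ("LOC", 2, 4)]   # boundary off by one: no credit
print(span_f1(gold, pred))              # 0.5
```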
Links: CoNLL++ (including direct download links for data)
Model | F1 | Paper / Source | Code |
---|---|---|---|
CL-KL (Wang et al., 2021) | 94.81 | Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning | Official |
CrossWeigh + Flair (Wang et al., 2019)♦ | 94.28 | CrossWeigh: Training Named Entity Tagger from Imperfect Annotations | Official |
Flair embeddings (Akbik et al., 2018)♦ | 93.89 | Contextual String Embeddings for Sequence Labeling | Flair framework |
BiLSTM-CRF+ELMo (Peters et al., 2018) | 93.42 | Deep contextualized word representations | AllenNLP Project / AllenNLP GitHub |
Ma and Hovy (2016) | 91.87 | End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF | |
LSTM-CRF (Lample et al., 2016) | 91.47 | Neural Architectures for Named Entity Recognition | |
The WNUT 2017 Emerging Entities task operates over a wide range of English text and focuses on generalisation beyond memorisation in high-variance environments. Scores are given both over entity chunk instances and over unique entity surface forms, to normalise the biasing impact of frequently occurring entities.
Feature | Train | Dev | Test |
---|---|---|---|
Posts | 3,395 | 1,009 | 1,287 |
Tokens | 62,729 | 15,733 | 23,394 |
NE tokens | 3,160 | 1,250 | 1,589 |
The data is annotated for six classes: person, location, group, creative work, product, and corporation.
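The surface-form score can be read as deduplicating entities by their string before computing precision and recall, so that a frequent entity counts once rather than once per mention. A rough sketch of the idea (not the official WNUT scoring script, which should be used for reported numbers):

```python
def dedupe_by_surface(spans, tokens):
    """Collapse entity instances to unique (type, surface string) pairs,
    so a name appearing 50 times counts once instead of 50 times."""
    return {(etype, " ".join(tokens[s:e]).lower()) for etype, s, e in spans}

tokens = ["London", "calling", "to", "London", "town"]
spans = [("location", 0, 1), ("location", 3, 4)]
print(dedupe_by_surface(spans, tokens))  # {('location', 'london')}
```

The deduplicated gold and predicted sets can then be fed through the same precision/recall computation as the span-level score.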
Links: WNUT 2017 Emerging Entity task page (including direct download links for data and scoring script)
Model | F1 | F1 (surface form) | Paper / Source | Code |
---|---|---|---|---|
InferNER (Shahzad et al., 2021) | 50.52 | | InferNER: an attentive model leveraging the sentence-level information for Named Entity Recognition in Microblogs | |
CrossWeigh + Flair (Wang et al., 2019) | 50.03 | | CrossWeigh: Training Named Entity Tagger from Imperfect Annotations | Official |
Flair embeddings (Akbik et al., 2018) | 49.59 | | Pooled Contextualized Embeddings for Named Entity Recognition | Flair framework |
Aguilar et al. (2018) | 45.55 | | Modeling Noisiness to Recognize Named Entities using Multitask Neural Networks on Social Media | |
SpinningBytes | 40.78 | 39.33 | Transfer Learning and Sentence Level Features for Named Entity Recognition on Tweets | |
The OntoNotes corpus v5 is a richly annotated corpus with several layers of annotation, including named entities, coreference, part of speech, word sense, propositions, and syntactic parse trees. These annotations cover a large number of tokens, a broad cross-section of domains, and three languages (English, Arabic, and Chinese). The NER dataset (of interest here) includes 18 tags, consisting of 11 types (PERSON, ORGANIZATION, etc.) and 7 values (DATE, PERCENT, etc.), and contains 2 million tokens. The common data split used in NER is defined in Pradhan et al. (2013) and can be found here.
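For reference, the 18 labels break down as follows (listed here from commonly circulated OntoNotes 5.0 documentation; treat the grouping as illustrative and consult the release notes for the authoritative list):

```python
# 11 entity ("name") types in OntoNotes v5 NER.
ONTONOTES_TYPES = [
    "PERSON", "NORP", "FAC", "ORG", "GPE", "LOC",
    "PRODUCT", "EVENT", "WORK_OF_ART", "LAW", "LANGUAGE",
]
# 7 value types.
ONTONOTES_VALUES = [
    "DATE", "TIME", "PERCENT", "MONEY", "QUANTITY", "ORDINAL", "CARDINAL",
]
assert len(ONTONOTES_TYPES) + len(ONTONOTES_VALUES) == 18
```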
Few-NERD is a large-scale, fine-grained manually annotated named entity recognition dataset, which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities and 4,601,223 tokens. Three benchmark tasks are built:
- Few-NERD (SUP) is a standard NER task;
- Few-NERD (INTRA) is a few-shot NER task in which the train, dev, and test sets draw their entities from disjoint coarse-grained types;
- Few-NERD (INTER) is a few-shot NER task in which coarse-grained types may be shared across splits, while the fine-grained types remain disjoint.
Website: Few-NERD page
Download & code: https://github.com/thunlp/Few-NERD
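The few-shot tracks are evaluated in N-way K-shot episodes: each episode samples N entity types, a K-shot support set per type, and a query set. A simplified sketch of episode construction (greedy per-type sampling; Few-NERD's actual sampler, available in the repository above, enforces stricter per-type count constraints):

```python
import random

def sample_episode(sentences, n_way=5, k_shot=1, n_query=1):
    """Sample a simplified N-way K-shot episode.

    `sentences` is a list of (tokens, spans) pairs, where spans are
    (type, start, end) tuples. Assumes each sampled type occurs in at
    least k_shot + n_query sentences.
    """
    all_types = sorted({t for _, spans in sentences for t, _, _ in spans})
    types = random.sample(all_types, n_way)
    support, query = [], []
    for t in types:
        pool = [s for s in sentences if any(e[0] == t for e in s[1])]
        picked = random.sample(pool, k_shot + n_query)
        support.extend(picked[:k_shot])   # labelled examples the model sees
        query.extend(picked[k_shot:])     # examples it must tag
    return types, support, query
```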
Model | F1 | Paper / Source | Code |
---|---|---|---|
BERT-Tagger (Ding et al., 2021) | 68.88 | Few-NERD: A Few-shot Named Entity Recognition Dataset | Official |
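The supervised entries above are, at heart, token classifiers. For orientation, a minimal inference sketch using the Hugging Face pipeline API (the checkpoint is a publicly available CoNLL-style model used as a stand-in, not one of the systems from the tables):

```python
from transformers import pipeline

# dslim/bert-base-NER is a public CoNLL-2003-style checkpoint; any
# token-classification model can be substituted.
ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # merge B-/I- word pieces into spans
)
for ent in ner("Mark Watney visited Mars"):
    print(ent["entity_group"], ent["word"], round(ent["score"], 2))
```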