wiki:NerDataset

A Human-Annotated Dataset for Language Modeling and Named Entity Recognition in Medieval Documents

This is an open dataset of sentences from 19th- and 20th-century letterpress reprints of documents from the Hussite era.
The dataset contains a corpus for language modeling and human annotations for named entity recognition (NER).

You can download the dataset from the LINDAT/CLARIAH-CZ repository.

Contents

The dataset is structured as follows:

  • The archive language-modeling-corpus.zip (633.79 MB) contains 8 files with sentences for unsupervised training and validation of language models (a loading sketch is given below the list).
    We used the following three variables to produce the different files:
    1. The sentences are extracted from book OCR texts and may therefore span several pages.
      However, page boundaries contain pollutants such as running heads, footnotes, and page numbers.
      We either allow the sentences to cross page boundaries (all) or not (non-crossing).
    2. The sentences come from all book pages (all) or just those considered relevant by human annotators (only-relevant).
    3. We split the sentences roughly into 90% for training (training) and 10% for validation (validation).
  • The archive named-entity-recognition-annotations.zip (978.29 MB) contains 16 tuples of files named *.sentences.txt, *.ner_tags.txt, and in one case also *.docx.¹
    These files contain sentences and NER tags for supervised training, validation, and testing of language models; a loading sketch for the sentence/tag file pairs is also given below the list.
    Here are the five variables that we used to produce the different files:
    1. The sentences may originate from book OCR texts using information retrieval techniques (fuzzy-regex or manatee).
      The sentences may also originate from regests (regests) or both books and regests (fuzzy-regex+regests and fuzzy-regex+manatee).
    2. When sentences originate from book OCR texts, they may span several pages of a book.
      However, page boundaries contain pollutants such as running heads, footnotes, and page numbers.
      We either allow the sentences to cross page boundaries (all) or not (non-crossing).
    3. When sentences originate from book OCR texts, they may come from book pages of different relevance.
      We either use sentences from all book pages (all) or just those considered relevant by human annotators (only-relevant).
    4. When sentences and NER tags originate from book OCR texts using information retrieval techniques, many entities in the sentences may lack tags.
      Therefore, we also provide NER tags completed by language models (automatically_tagged) and human annotators (tagged).
    5. We split the sentences roughly into 80% for training (training), 10% for validation (validation), and 10% for testing (testing).
      For repeated testing, we subdivide the testing split (testing_001-400 and testing_401-500).

¹ The .docx files were authored by human annotators and contain extra details missing from the .sentences.txt and .ner_tags.txt files. The extra details include nested entities such as locations in person names (e.g. “Blažek z Kralup”) and people in location names (e.g. “Kostel sv. Martina”).
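
Loading the data

The sentence files in language-modeling-corpus.zip can be streamed directly from the archive. The following Python sketch shows one way to do this; the member name in it is hypothetical, so list the archive contents first to see the actual names of the 8 files.

    import zipfile

    ARCHIVE = "language-modeling-corpus.zip"

    with zipfile.ZipFile(ARCHIVE) as archive:
        # Print the actual names of the 8 sentence files.
        print(archive.namelist())

        # Hypothetical member name; replace it with one of the names printed above.
        member = "non-crossing-only-relevant-training.txt"
        with archive.open(member) as sentence_file:
            for raw_line in sentence_file:
                sentence = raw_line.decode("utf-8").rstrip("\n")
                ...  # feed the sentence to your tokenizer or language model trainer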
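
Each *.sentences.txt file in named-entity-recognition-annotations.zip is paired with a *.ner_tags.txt file. The following Python sketch shows one possible loader; it assumes that the two files are line-aligned (one sentence per line), that sentences are whitespace-tokenized, and that each token has exactly one tag on the corresponding line of the tag file. The file names in the example are hypothetical; use the actual names from the extracted archive.

    from pathlib import Path

    def load_ner_split(sentences_path, tags_path):
        """Return a list of (tokens, tags) pairs, one pair per sentence."""
        sentences = Path(sentences_path).read_text(encoding="utf-8").splitlines()
        tag_lines = Path(tags_path).read_text(encoding="utf-8").splitlines()
        assert len(sentences) == len(tag_lines), "The files are not line-aligned"

        examples = []
        for sentence, tag_line in zip(sentences, tag_lines):
            tokens, tags = sentence.split(), tag_line.split()
            assert len(tokens) == len(tags), f"Token/tag mismatch: {sentence!r}"
            examples.append((tokens, tags))
        return examples

    # Hypothetical file names; substitute the names of an actual sentence/tag pair.
    training = load_ner_split("fuzzy-regex.tagged.training.sentences.txt",
                              "fuzzy-regex.tagged.training.ner_tags.txt")
    print(len(training), "training sentences")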

Citing

If you use our dataset in your work, please cite the following article:

TODO

If you use LaTeX, you can use the following BibTeX entry:

TODO

Acknowledgements

This work was funded by TAČR Éta, project number TL03000365.