= A Human-Annotated Dataset for Language Modeling and Named Entity Recognition in Medieval Documents =

This is an open dataset of sentences from 19th- and 20th-century letterpress reprints of documents from the Hussite era. The dataset contains human annotations for named entity recognition (NER). You can [https://nlp.fi.muni.cz/projects/ahisto/ner-dataset.zip download the dataset] from the LINDAT/CLARIAH-CZ repository.

== Contents ==

The dataset is stored in the archive [https://nlp.fi.muni.cz/projects/ahisto/ner-dataset.zip ner-dataset.zip] (1.7 GB) with the following structure:

 * 8 files named `dataset_mlm_*.txt` that contain sentences for the unsupervised training and validation of language models; a reading sketch appears in the Examples section below.[[BR]]We used the following three variables to produce the different files:
   1. The sentences are extracted from book OCR texts and may therefore span several pages.[[BR]]However, page boundaries introduce noise such as running heads, footnotes, and page numbers.[[BR]]We either allow the sentences to cross page boundaries (`all`) or not (`non-crossing`).
   1. The sentences come from all book pages (`all`) or only from those considered relevant by human annotators (`only-relevant`).
   1. We split the sentences roughly into 90% for training (`training`) and 10% for validation (`validation`).
 * 16 tuples of files named `dataset_ner_*.sentences.txt` and `dataset_ner_*.ner_tags.txt`, in two cases accompanied by a `.docx` file; a pairing sketch appears in the Examples section below.[[BR]]These files contain sentences and NER tags for the supervised training, validation, and testing of language models.[[BR]]The `.docx` files are authored by human annotators and may contain extra details missing from the `.sentences.txt` and `.ner_tags.txt` files.[[BR]]Here are the five variables that we used to produce the different files:
   1. The sentences may be extracted from book OCR texts using information retrieval techniques (`fuzzy-regex` or `manatee`).[[BR]]The sentences may also originate from regests (`regests`) or from combinations of the above (`fuzzy-regex+regests` and `fuzzy-regex+manatee`).
   1. When sentences originate from book OCR texts, they may span several pages of a book.[[BR]]However, page boundaries introduce noise such as running heads, footnotes, and page numbers.[[BR]]We either allow the sentences to cross page boundaries (`all`) or not (`non-crossing`).
   1. When sentences originate from book OCR texts, they may come from book pages of different relevance.[[BR]]We either use sentences from all book pages (`all`) or only from those considered relevant by human annotators (`only-relevant`).
   1. When sentences and NER tags originate from book OCR texts retrieved with information retrieval techniques, many entities in the sentences may lack tags.[[BR]]Therefore, we also provide NER tags completed by language models (`automatically_tagged`) and by human annotators (`tagged`).
   1. We split the sentences roughly into 80% for training (`training`), 10% for validation (`validation`), and 10% for testing (`testing`).[[BR]]For repeated testing, we subdivide the testing split (`testing_001-400` and `testing_401-500`).

== Citing ==

If you use our dataset in your work, please cite the following article:

TODO

If you use LaTeX, you can use the following BibTeX entry:

{{{
TODO
}}}

== Acknowledgements ==

This work was funded by TAČR Éta, [https://starfos.tacr.cz/en/project/TL03000365 project number TL03000365].
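== Examples ==

The sketches below are not part of the dataset; they only illustrate how the files described in the Contents section might be read. This first sketch enumerates the eight `dataset_mlm_*.txt` files from the three variables above. The exact composition of the file names is an assumption, so consult the archive listing for the authoritative names.

{{{
# A minimal reading sketch in Python, not shipped with the dataset.
from itertools import product

CROSSING = ("all", "non-crossing")    # may sentences cross page boundaries?
RELEVANCE = ("all", "only-relevant")  # all book pages, or relevant pages only?
SPLIT = ("training", "validation")    # roughly 90% / 10%

# The 2 x 2 x 2 variable settings yield the eight expected files.  The
# name pattern is assumed; adjust it to match the extracted archive.
names = [f"dataset_mlm_{c}_{r}_{s}.txt"
         for c, r, s in product(CROSSING, RELEVANCE, SPLIT)]

def read_sentences(path):
    """Read one sentence per line (assumed layout) from an extracted file."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f if line.strip()]
}}}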
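The second sketch pairs a `.sentences.txt` file with its `.ner_tags.txt` counterpart. It assumes that each line of `.sentences.txt` holds one whitespace-tokenized sentence and that the corresponding line of `.ner_tags.txt` holds one tag per token; the file stem in the trailing comment is hypothetical.

{{{
# A minimal pairing sketch in Python under the alignment assumptions above.
def load_ner_file_pair(stem):
    """Yield (tokens, tags) pairs from one .sentences.txt/.ner_tags.txt tuple."""
    with open(f"{stem}.sentences.txt", encoding="utf-8") as sentences, \
         open(f"{stem}.ner_tags.txt", encoding="utf-8") as tags:
        for sentence_line, tag_line in zip(sentences, tags):
            tokens = sentence_line.split()
            labels = tag_line.split()
            # Every token should carry exactly one NER tag.
            assert len(tokens) == len(labels), "token/tag length mismatch"
            yield tokens, labels

# Hypothetical stem; substitute one of the 16 file tuples from the archive.
# for tokens, labels in load_ner_file_pair("dataset_ner_regests_tagged_training"):
#     print(tokens, labels)
}}}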