Changes between version 5 and version 6 of NerDataset


Timestamp:
28 Nov 2022, 16:35:00 (20 months ago)
Author:
xnovot32@fi.muni.cz
Comment:

--

Legend:

Unchanged
Added
Removed
Changed
  • NerDataset

    v5 v6
     5  5  
     6  6  == Contents ==
     7     -The dataset is stored in archive [https://nlp.fi.muni.cz/projects/ahisto/ner-dataset.zip ner-dataset.zip] (1.7 GB) with following structure:
        7  +The dataset is stored in archive [https://nlp.fi.muni.cz/projects/ahisto/ner-dataset.zip ner-dataset.zip] (1.7 GB) with following structure:
     8  8  
     9     - * 8 files named `dataset_mlm_*.txt` that contain sentences for unsupervised training of language models.[[BR]]We used the following three variables to produce the different files:
        9  + * 8 files named `dataset_mlm_*.txt` that contain sentences for unsupervised training of language models.[[BR]]We used the following three variables to produce the different files:
    10 10     1. The sentences are extracted from book OCR texts and may therefore span several pages.[[BR]]However, page boundaries contain pollutants such as running heads, footnotes, and page numbers.[[BR]]We either allow the sentences in the file to cross page boundaries (`all`) or not (`non-crossing`).
    11 11     1. The sentences come from all book pages (`all`) or just those considered relevant by expert annotators (`only-relevant`).
    12 12     1. We split the sentences roughly into 90% for training (`training`) and 10% for validation (`validation`).
    13     - * 16 tuples of files named `dataset_ner_*.sentences.txt`, `.ner_tags.txt`, and in two cases also `.docx`.[[BR]]These contain sentences and NER tags for supervised training, validation, and testing of language models.[[BR]]The `.docx` files are authored by human annotators and may contain extra details missing from files `.sentences.txt` and `.ner_tags.txt`.[[BR]]Here are the five variables that we used to produce the different files:
    14     -   1. The sentences may originate from book OCR texts using information retrieval techniques (`fuzzy-regex` or `manatee`).[[BR]]The sentences may also originate from regests (`regests`).[[BR]]Furthermore, the sentences may originate both from book OCR texts and regests (`fuzzy-regex+regests` and `fuzzy-regex+manatee`).
       13  + * 16 tuples of files named `dataset_ner_*.sentences.txt`, `.ner_tags.txt`, and in two cases also `.docx`.[[BR]]These contain sentences and NER tags for supervised training, validation, and testing of language models.[[BR]]The `.docx` files are authored by human annotators and may contain extra details missing from files `.sentences.txt` and `.ner_tags.txt`.[[BR]]Here are the five variables that we used to produce the different files:
       14  +   1. The sentences may originate from book OCR texts using information retrieval techniques (`fuzzy-regex` or `manatee`).[[BR]]The sentences may also originate from regests (`regests`) or both books and regests (`fuzzy-regex+regests` and `fuzzy-regex+manatee`).
    15 15     1. When sentences originate from book OCR texts, they may span several pages of a book.[[BR]]However, page boundaries contain pollutants such as running heads, footnotes, and page numbers.[[BR]]We either allow the sentences in the file to cross page boundaries (`all`) or not (`non-crossing`).
    16 16     1. When sentences originate from book OCR texts, they may come from book pages of different relevance.[[BR]]We either use sentences from all book pages (`all`) or just those considered relevant by expert annotators (`only-relevant`).
    17 17     1. When sentences and NER tags originate from book OCR texts using information retrieval techniques, many entities in the sentences may lack tags.[[BR]]Therefore, we also provide NER tags completed by language models (`automatically_tagged`) and human annotators (`tagged`).
    18     -   1. We split the sentences roughly into 80% for training (`training`), 10% for validation (`validation`), and 10% for testing (`testing`).[[BR]]For repeated testing, we subdivide the testing split (`testing_001-400` and `testing_401-500`).
       18  +   1. We split the sentences roughly into 80% for training (`training`), 10% for validation (`validation`), and 10% for testing (`testing`).[[BR]]For repeated testing, we subdivide the testing split (`testing_001-400` and `testing_401-500`).
    19 19  
    20 20  == Citing ==
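For illustration, here is a minimal Python sketch of how the archive's contents described above might be consumed. The exact filename pattern `dataset_mlm_{crossing}_{relevance}_{split}.txt` (and the ordering of the three variables in it) is an assumption, not documented in the page; the NER loader only assumes that `.sentences.txt` and `.ner_tags.txt` are line-aligned with one whitespace-separated tag per token.

```python
from itertools import product
from pathlib import Path

# The three binary variables described for the MLM files.
# NOTE: the filename pattern and variable ordering are assumptions.
CROSSING = ["all", "non-crossing"]    # may sentences cross page boundaries?
RELEVANCE = ["all", "only-relevant"]  # all pages, or expert-relevant only?
SPLIT = ["training", "validation"]    # roughly 90% / 10% split

def mlm_filenames():
    """Enumerate the 2 x 2 x 2 = 8 hypothetical MLM file names."""
    return [
        f"dataset_mlm_{c}_{r}_{s}.txt"
        for c, r, s in product(CROSSING, RELEVANCE, SPLIT)
    ]

def load_ner_pair(stem):
    """Load one aligned pair `<stem>.sentences.txt` / `<stem>.ner_tags.txt`.

    Assumes one sentence per line and one tag per whitespace-separated token.
    """
    sentences = Path(f"{stem}.sentences.txt").read_text("utf-8").splitlines()
    tag_lines = Path(f"{stem}.ner_tags.txt").read_text("utf-8").splitlines()
    assert len(sentences) == len(tag_lines), "files must be line-aligned"
    examples = []
    for sentence, tags in zip(sentences, tag_lines):
        tokens, labels = sentence.split(), tags.split()
        assert len(tokens) == len(labels), "expected one tag per token"
        examples.append(list(zip(tokens, labels)))
    return examples
```

The loader returns, for each sentence, a list of `(token, tag)` pairs, which is the shape most NER training loops (e.g. token-classification fine-tuning) expect.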