
Parsing of Czech: Between Rules and Stats

IA161 NLP in Practice Course, Course Guarantor: Aleš Horák

Prepared by: Miloš Jakubíček, Aleš Horák

State of the Art

References

  1. Fernández-González, D., & Gómez-Rodríguez, C. (2023). Dependency parsing with bottom-up hierarchical pointer networks. Information Fusion, 91, 494-503.
  2. Arps, D., Samih, Y., Kallmeyer, L., & Sajjad, H. (2022). Probing for constituency structure in neural language models. arXiv preprint arXiv:2204.06201.
  3. Qi, P., Dozat, T., Zhang, Y., & Manning, C. D. (2019). Universal dependency parsing from scratch. arXiv preprint arXiv:1901.10457.
  4. Baisa, V., & Kovář, V. (2014). Information extraction for Czech based on syntactic analysis. In Vetulani, Z., & Mariani, J. (Eds.), Human Language Technology Challenges for Computer Science and Linguistics (pp. 155–165). Springer International Publishing.

Practical Session

Note: If you are new to the command-line interface in a terminal window, you may find the tutorial on working in a terminal useful.

We will develop/adjust the grammar of the SET parser (for English or Czech).


  1. Download the SET parser with the evaluation dataset
    wget https://nlp.fi.muni.cz/trac/research/chrome/site/bigdata/ukol_ia161-parsing.zip
    
  2. Unzip the downloaded file
    unzip ukol_ia161-parsing.zip
    
  3. Go to the unzipped folder
    cd ukol_ia161-parsing
    
  4. [optional] Choose the language you want to work with. The default is English (en); it can be changed to Czech (cs) by editing the Makefile:
    nano Makefile
    
    If you want to work with Czech, change the first line to
    LANGUAGE=cs
    
  5. Test the prepared program that analyses 100 selected sentences
    make set_trees
    make compare
    
    The output should be
    ./compare_dep_trees.py data/trees/ud21_gum_dev data/trees/set_ud21_gum_dev
    UAS =  55.4 %
    
    You can see a detailed evaluation (sentence by sentence) with
    make compare SENTENCES=1
    
    You can view the differences for a single tree with
    make diff SENTENCE=academic_librarians-10
    
    The left window (ud21_gum_dev/academic_librarians-10) shows the expected ground truth; the right window (set_ud21_gum_dev/academic_librarians-10) displays the current parsing result (to be improved by you).
    Exit the diff by pressing q.
    You may inspect the tagged vertical text with
    make vert SENTENCE=academic_librarians-10
    
    You can view the two trees graphically with (python3-tk must be installed on the system)
    make view SENTENCE=academic_librarians-10
    
    For remote tree viewing (i.e. inspecting the trees on a different computer), you may run
    make html SENTENCE=academic_librarians-10
    
    and then point your browser to the html/index.html file.
    You can extract the text of the sentence easily with
    make text SENTENCE=academic_librarians-10
    
    An English translation of the Czech sentences can be obtained via
    make texttrans SENTENCE=academic_librarians-10
    
  6. Debugging the parsing process can be done using
    make debug SENTENCE=academic_librarians-10
    
    which will print the rules used to build the final tree. Adding DETAIL=1 will show all details of the parsing process, including the unused rules:
    make debug SENTENCE=academic_librarians-10 DETAIL=1
    
  7. Look at the files (you may use the mc file manager; exit it with Esc+0):
    • data/vert/pdt2_etest or ud21_gum_dev - 100 input sentences in vertical format.
      The tag format is the Prague Dependency Treebank positional tagset for Czech and the Penn Treebank tagset for English.
    • data/trees/pdt2_etest or ud21_gum_dev - 100 gold standard dependency trees from the Prague Dependency Treebank or the Universal Dependencies GUM corpus
    • data/trees/set_pdt2_etest or set_ud21_gum_dev - 100 trees output from SET by running make set_trees
    • grammar-cs.set or grammar-en.set - the grammar used in running SET
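
The UAS figure reported by make compare is the unlabeled attachment score: the percentage of tokens whose predicted head matches the gold-standard head. As a minimal illustration (a hypothetical re-implementation for this text, not the compare_dep_trees.py script shipped with the task), it can be computed like this:

```python
# Sketch of an unlabeled attachment score (UAS) computation.
# Hypothetical re-implementation for illustration only, not the
# compare_dep_trees.py script shipped with the task.

def uas(gold_heads, pred_heads):
    """Percentage of tokens whose predicted head equals the gold head.

    Both arguments are lists of per-sentence head-index lists, e.g.
    [[2, 0, 2]] means: token 1 depends on token 2, token 2 is the
    root (head 0), token 3 depends on token 2.
    """
    correct = total = 0
    for gold, pred in zip(gold_heads, pred_heads):
        correct += sum(g == p for g, p in zip(gold, pred))
        total += len(gold)
    return 100.0 * correct / total

# Example: 5 of 6 tokens are attached correctly.
gold = [[2, 0, 2], [0, 1, 2]]
pred = [[2, 0, 2], [0, 1, 1]]
print(f"UAS = {uas(gold, pred):5.1f} %")   # prints: UAS =  83.3 %
```

A labeled variant (LAS) would additionally require the dependency labels to match; make compare reports only the unlabeled score.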

Assignment

  1. Study the SET documentation. The tags used in the English grammar (grammar-en.set) follow the Penn Treebank tagset; the Czech grammar (grammar-cs.set) uses the Brno tagset.
  2. Develop a better grammar by repeating the cycle:
    nano grammar-en.set # or use your favourite editor
    make set_trees
    make compare
    
    and try to improve on the original UAS.
  3. Record your final UAS in a comment in grammar-cs.set or grammar-en.set:
    # This is the SET grammar for English used in IA161 course
    # 
    # ===========   resulting UAS =  66.9 %  ===================
    
  4. Upload your grammar-cs.set or grammar-en.set to the homework vault.
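
While tuning the grammar, it can help to inspect the data/vert files programmatically rather than only via make vert. A minimal vertical-format reader might look like the sketch below; it assumes one token per line with tab-separated columns (taken here to be word / lemma / tag) and sentences delimited by <s>…</s> markup lines, which may differ from the actual files:

```python
# Hypothetical sketch of a reader for the tagged vertical format.
# Assumptions (check against the real data/vert files): one token per
# line, tab-separated columns in the order word / lemma / tag, and
# sentences delimited by <s> ... </s> markup lines.

def read_vertical(lines):
    sentences, current = [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue                      # skip blank lines
        if line.startswith("<"):          # structural markup (<s>, </s>, ...)
            if line.startswith("</s") and current:
                sentences.append(current)
                current = []
            continue
        current.append(line.split("\t"))
    if current:                           # tolerate a missing final </s>
        sentences.append(current)
    return sentences

sample = """<s>
Time\tTime\tNN
flies\tfly\tVBZ
</s>""".splitlines()

for sentence in read_vertical(sample):
    print(" ".join(token[0] for token in sentence))   # prints: Time flies
```

Once the tokens are in memory, you can, for example, count how often a particular tag occurs in the sentences where your grammar attaches words incorrectly.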
Last modified on Nov 6, 2023, 11:00:46 PM
