
Language modelling

IA161 NLP in Practice Course, Course Guarantee: Aleš Horák

Prepared by: Pavel Rychlý

State of the Art

The goal of a language model is to assign a score to any possible input sentence. In the past, this was achieved mainly by n-gram (Markov) models, in use since the mid-20th century. More recently, deep learning has entered language modelling as well and has turned out to be substantially better than n-gram models.

The current state-of-the-art models are built on neural networks using the transformer architecture.
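
For illustration, a minimal bigram model can assign such sentence scores by multiplying conditional probabilities of adjacent words. The toy corpus and add-one smoothing below are only an example, not part of the assignment:

    # Minimal bigram language model: score(sentence) = product of P(w_i | w_{i-1})
    # The toy corpus and add-one (Laplace) smoothing are illustrative assumptions only.
    from collections import Counter

    corpus = "the robot works . the robot sleeps .".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    vocab_size = len(unigrams)

    def bigram_prob(prev, word):
        # add-one smoothing so unseen bigrams still get a non-zero probability
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    def score(sentence):
        words = sentence.split()
        p = 1.0
        for prev, word in zip(words, words[1:]):
            p *= bigram_prob(prev, word)
        return p

    print(score("the robot works ."))   # higher: word pairs seen in the corpus
    print(score("works robot the ."))   # lower: unseen word order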

References

  1. Vaswani, Ashish, et al. (2017). "Attention Is All You Need". arXiv:1706.03762
  2. Alammar, Jay (2018). The Illustrated Transformer [Blog post]. Retrieved from https://jalammar.github.io/illustrated-transformer/
  3. Alammar, Jay (2018). The Illustrated GPT-2 [Blog post]. Retrieved from https://jalammar.github.io/illustrated-gpt2/
  4. Brown, Tom, et al. (2020) "Language Models are Few-Shot Learners" arXiv:2005.14165
  5. Sennrich, Rico, et al. (2016). "Neural Machine Translation of Rare Words with Subword Units". In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016

Practical Session

Technical Requirements

The task will be carried out in a Python notebook run in a web browser in the Google Colaboratory environment.

If you run the code in a local environment, the requirements are Python 3.6+ and Jupyter Notebook.

Language models from scratch

In this workshop, we create language models for English and/or any other language from our own texts. The models use only small Python modules together with the PyTorch framework.

We generate random text using these models. The first model works only with characters; the later one uses subword tokenization with BPE.
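
One possible shape of the character-level model is sketched below. This is illustrative only; the notebook's actual classes, layer types, and hyper-parameters may differ. It uses an embedding layer, a recurrent layer, and a projection back to the character vocabulary:

    # Sketch of a character-level LM in PyTorch (names and sizes are illustrative,
    # not the notebook's actual implementation).
    import torch
    import torch.nn as nn

    class CharLM(nn.Module):
        def __init__(self, vocab_size, emb_size=64, hidden_size=128, num_layers=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_size)
            self.rnn = nn.LSTM(emb_size, hidden_size, num_layers, batch_first=True)
            self.head = nn.Linear(hidden_size, vocab_size)

        def forward(self, x, state=None):
            # x: (batch, seq_len) of character ids
            emb = self.embedding(x)
            out, state = self.rnn(emb, state)
            return self.head(out), state   # logits over the next character

    # Training minimizes cross-entropy between the predicted logits and the
    # next character of the text; generation then samples one character at a time.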

Access the Python notebook in the Google Colab environment. Do not forget to save your work if you want to see your changes later; leaving the browser will throw away all changes!

OR

download the notebook or a plain Python file from the shared notebook (File > Download) and run it in your local environment.

Training data

  1. R.U.R., a play by Karel Čapek (155 kB) https://gutenberg.org/files/59112/59112-0.txt
  2. Small text for fast setup: the book 1984 from Project Gutenberg Australia (590 kB) https://gutenberg.net.au/ebooks01/0100021.txt
  3. Shakespeare plays (1.1 MB) https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt
  4. Any other data, in any language (even programming languages); see the download example below
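
For example, the Shakespeare text can be downloaded directly from the notebook (a sketch; any of the URLs above can be substituted):

    # Download one of the suggested training texts and read it as a single string.
    import urllib.request

    url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt"
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")

    print(len(text), "characters,", len(set(text)), "unique characters")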

Tasks

Task 1

Generate text using a character-level neural LM.

Use several different hyper-parameters (embedding size, number of layers, number of epochs). Describe the quality of the generated text with regard to the selected parameters.
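
A possible sampling helper for generating text is sketched below. This is only an illustration: it assumes a trained model shaped like the CharLM sketch above and stoi/itos mappings between characters and ids; the notebook may already provide its own generation function.

    # Sample text from a trained character-level model, one character at a time.
    # model, stoi and itos are assumed to exist; temperature is a free choice.
    import torch

    def generate(model, stoi, itos, prompt="The ", length=200, temperature=1.0):
        model.eval()
        ids = torch.tensor([[stoi[c] for c in prompt]])
        state = None
        out = list(prompt)
        with torch.no_grad():
            for _ in range(length):
                logits, state = model(ids, state)
                probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
                next_id = torch.multinomial(probs, 1).item()
                out.append(itos[next_id])
                ids = torch.tensor([[next_id]])
        return "".join(out)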

Task 2

Implement a new Dataset class that uses subwords (via SentencePiece) instead of characters. Compare the generated text with the text generated by the character-level model with the same number of parameters.
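
A possible starting point is sketched below. It assumes the SentencePiece model is trained on the same raw text file; class and parameter names are illustrative, not the notebook's actual API:

    # Sketch of a subword-level Dataset for Task 2 (illustrative names; adapt to
    # the notebook's existing Dataset class).
    import sentencepiece as spm
    import torch
    from torch.utils.data import Dataset

    class SubwordDataset(Dataset):
        def __init__(self, text_file, vocab_size=2000, seq_len=64):
            # train a BPE model on the raw text and encode it into subword ids
            spm.SentencePieceTrainer.train(input=text_file, model_prefix="bpe",
                                           vocab_size=vocab_size, model_type="bpe")
            self.sp = spm.SentencePieceProcessor(model_file="bpe.model")
            with open(text_file, encoding="utf-8") as f:
                self.ids = self.sp.encode(f.read(), out_type=int)
            self.seq_len = seq_len

        def __len__(self):
            return len(self.ids) - self.seq_len

        def __getitem__(self, i):
            # input sequence and the same sequence shifted by one subword
            x = torch.tensor(self.ids[i:i + self.seq_len])
            y = torch.tensor(self.ids[i + 1:i + self.seq_len + 1])
            return x, y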

Upload

Upload your modified notebook or Python script with results to the homework vault (odevzdávárna).
