
Generative Language Models

IA161 NLP in Practice Course, Course Guarantor: Aleš Horák

Prepared by: Tomáš Foltýnek

State of the Art

Generating text is, in principle, the same as predicting the next word. Given a seed (typically the start of a text), the model predicts/generates a following word that fits the context. Current state-of-the-art models are based on the Generative Pre-trained Transformer (GPT) architecture, which uses a multi-head attention mechanism to capture contextual features. The models stack several attention blocks to perform higher-order cognitive tasks.
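The prediction step can be sketched without a real model: the network produces a logit for every vocabulary item, a softmax turns the logits into probabilities, and generation picks a continuation from that distribution. The toy vocabulary and logit values below are invented for illustration; a real GPT model works over tens of thousands of tokens.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max logit before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a model might emit after the seed "The cat sat on the".
vocab = ["mat", "dog", "moon", "chair"]
logits = [4.0, 1.5, 0.5, 2.5]

probs = softmax(logits)
next_word = vocab[probs.index(max(probs))]
print(next_word)  # greedy decoding picks the highest-probability token: "mat"
```

Greedy decoding always takes the argmax; sampling-based decoding instead draws from `probs`, which is where the temperature parameter (used later in the practical session) comes in.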

Language models generate text regardless of factual correctness, which means they may produce wrong, misleading or biased output. Some bias is deeply rooted in the training data, which are heavily unbalanced with respect to genre and domain, as well as the writers' gender, age and cultural background. In some applications, the bias may cause harmful outputs.


  1. Vaswani, A. et al. (2017): Attention Is All You Need. ArXiv preprint: https://arxiv.org/abs/1706.03762
  2. Radford, A. et al. (2018): Improving Language Understanding by Generative Pre-Training. OpenAI: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
  3. Navigli, R., Conia, S., & Ross, B. (2023): Biases in Large Language Models: Origins, Inventory, and Discussion. J. Data and Information Quality, 15(2). https://doi.org/10.1145/3597307

Practical Session

We will be working with a Google Colab notebook. First, we load the GPT2-Large model and experiment with generating text. To get a more objective view of the probabilities of the following tokens, we adjust the generation function to return the top k most probable words together with their probabilities. Then, we learn how to calculate the perplexity of a given text.
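The two quantities used in the session can be sketched with plain Python, independently of the notebook: a top-k view of a next-token distribution, and perplexity computed as the exponential of the average negative log-probability the model assigns to the actual tokens. The probability values below are made up for illustration.

```python
import math

def top_k(vocab, probs, k):
    # Return the k most probable (word, probability) pairs, best first.
    pairs = sorted(zip(vocab, probs), key=lambda p: p[1], reverse=True)
    return pairs[:k]

def perplexity(token_probs):
    # token_probs: probability the model assigned to each actual token in the text.
    # PPL = exp( -(1/N) * sum_i log p_i )
    n = len(token_probs)
    avg_nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_nll)

vocab = ["mat", "dog", "moon", "chair"]
probs = [0.55, 0.10, 0.05, 0.30]
print(top_k(vocab, probs, 2))  # two most probable continuations

# A model assigning 0.5 to every token yields a perplexity of exactly 2.
print(perplexity([0.5, 0.5, 0.5]))  # 2.0
```

Lower perplexity means the model found the text more predictable; in the notebook the per-token probabilities come from GPT2-Large instead of being hard-coded.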

Task 1: Exploring perplexity

Generate various text samples using different temperatures. Observe the relationship between the temperature (a parameter of the generator) and the perplexity of the resulting text.
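The mechanism behind this relationship can be sketched directly: temperature divides the logits before the softmax, so a high temperature flattens the distribution (more surprising, higher-perplexity text) while a low temperature sharpens it. The logit values are again invented for illustration; entropy is used here as a simple measure of how flat the distribution is.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before the (stable) softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy in nats; higher means a flatter distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

logits = [4.0, 1.5, 0.5, 2.5]
low_t = softmax_with_temperature(logits, 0.5)   # sharpened: close to greedy
high_t = softmax_with_temperature(logits, 2.0)  # flattened: more random sampling

print(entropy(low_t) < entropy(high_t))  # True: higher temperature -> flatter
```

Sampling from the flatter distribution more often picks low-probability tokens, which is why text generated at higher temperatures tends to have higher perplexity.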

Task 2: Exploring bias

We will experiment with several prompts/seeds that are likely to produce biased output.

Your task will be to design more seeds and either generate text or obtain predictions of subsequent words. Then, annotate the predictions (positive/negative/neutral) and answer the following questions:

  • Towards which groups do the GPT-2 model outputs exhibit positive bias?
  • Towards which groups do the GPT-2 model outputs exhibit negative bias?
  • Was there anything you expected to be biased, but the experiments showed fairness in the model outputs?
  • On the contrary, was there anything you expected to be fair, but the model showed bias?