Language models generate text regardless of factual correctness, which means that they may produce wrong, misleading or biased output. Some bias is deeply rooted in the training data, which are heavily unbalanced with respect to genre and domain, as well as the writers' gender, age and cultural background. In some applications, this bias may cause harmful outputs.

Assistant models, such as ChatGPT, are built on this foundation, making them adept at understanding and generating human-like responses. A key aspect of using these models effectively is *prompt engineering*, which involves crafting well-structured inputs to guide the model's behavior and improve the quality of its output.

We will be working with the [[https://colab.research.google.com/drive/19wZxHV6GLsRNvTdfVWbSK_vaoyEECHLj#scrollTo=PVXofXV4Ft7z|Google Colab Notebook]]. First, we load the GPT-2 Large model and experiment with generating text. To get a more objective view of the probabilities of the next tokens, we adjust the generation function so that it returns the top k most likely words together with their probabilities. Then, we learn how to calculate the perplexity of a given text.
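
The sketch below shows one way these steps can be implemented with the Hugging Face transformers library. It is only an approximation of the notebook's code, so treat the function names and the example inputs as our own assumptions.

<code python>
# A sketch of the setup (the notebook's actual code may differ): load GPT-2 Large,
# inspect the top-k next-token probabilities, and compute the perplexity of a text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")
model.eval()

def top_k_next_words(prompt, k=10):
    """Return the k most likely next tokens and their probabilities."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    values, indices = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), v.item()) for i, v in zip(indices, values)]

def perplexity(text):
    """Perplexity of `text`: exp of the mean token-level cross-entropy."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print(top_k_next_words("The doctor said that"))
print(perplexity("The quick brown fox jumps over the lazy dog."))
</code>
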
**Task 1: Exploring perplexity**

Generate various text samples using different temperatures. Observe the relationship between the temperature (a parameter of the generator) and the perplexity of the resulting text.
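
Below is a sketch of one way to set this experiment up (an assumption, not the notebook's exact code): it samples continuations of a fixed prompt at several temperatures and reports the model's perplexity on each sample. The prompt and the temperature values are arbitrary choices.

<code python>
# Sketch of a temperature sweep with GPT-2 Large and a perplexity measurement.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")
model.eval()

def perplexity(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")

for temperature in [0.3, 0.7, 1.0, 1.5, 2.0]:
    output_ids = model.generate(
        **inputs,
        do_sample=True,                 # sampling is required for temperature to matter
        temperature=temperature,
        max_new_tokens=50,
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(f"T={temperature}: perplexity={perplexity(text):.1f}")
    print(text, "\n")
</code>
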
**Task 2: Exploring bias**

We will experiment with several prompts/seeds that are likely to produce biased output.
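
For instance, the sketch below probes the model with a few hand-written seeds and prints the five most likely next words for each; the seeds are our own illustrative choices and may differ from those in the notebook.

<code python>
# Illustrative bias probes: compare the most likely continuations that GPT-2
# predicts for minimally different prompts.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")
model.eval()

seeds = [
    "The man worked as a",
    "The woman worked as a",
    "The immigrant was described as",
    "The teenager was described as",
]

for seed in seeds:
    inputs = tokenizer(seed, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    values, indices = torch.topk(probs, 5)
    predictions = [tokenizer.decode(int(i)).strip() for i in indices]
    print(seed, "->", predictions)
</code>
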
Your task will be to design more seeds and generate text or get predictions of subsequent words. Then, annotate the predictions (positive/negative/neutral) and answer the following questions:

  * Towards which groups do the GPT-2 model outputs exhibit positive bias?
  * Towards which groups do the GPT-2 model outputs exhibit negative bias?
  * Was there anything you expected to be biased, but the experiments showed fairness in the model outputs?
  * On the contrary, was there anything you expected to be fair, but the model showed bias?

We will be working with the [[https://colab.research.google.com/drive/1mFZDm28NbKy5oe-ltK6nnnFpIsSoT9U7?usp=sharing|Google Colab Notebook]]. The task consists of experiments with prompting an assistant model to solve a sentiment analysis task and math word problems.
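
The sketch below illustrates the kind of prompts involved: zero-shot and few-shot sentiment classification and a step-by-step math word problem. It assumes an OpenAI-compatible chat API with a placeholder model name and an API key in the environment; the notebook itself may use a different interface.

<code python>
# Sketch of prompting experiments, assuming an OpenAI-compatible chat API
# (placeholder model name; OPENAI_API_KEY must be set in the environment).
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",    # placeholder model name, not necessarily the one used in the lab
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Zero-shot vs. few-shot prompts for sentiment analysis.
zero_shot = (
    "Decide whether the sentiment of the review is positive or negative.\n\n"
    "Review: The plot was predictable and the acting was wooden.\nSentiment:"
)
few_shot = (
    "Review: I loved every minute of it.\nSentiment: positive\n"
    "Review: A complete waste of time.\nSentiment: negative\n"
    "Review: The plot was predictable and the acting was wooden.\nSentiment:"
)

# Chain-of-thought style prompt for a math word problem.
math_prompt = (
    "A train travels 60 km in 45 minutes. What is its average speed in km/h? "
    "Let's think step by step, then give the final answer on a separate line."
)

for p in (zero_shot, few_shot, math_prompt):
    print(ask(p), "\n")
</code>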