= Topic identification, topic modeling =

[[https://is.muni.cz/auth/predmet/fi/ia161|IA161]] [[en/NlpInPracticeCourse|NLP in Practice Course]], Course Guarantee: Aleš Horák

Prepared by: Zuzana Nevěřilová, Adam Rambousek, Jirka Materna

== State of the Art ==

Topic modeling is a statistical approach for discovering abstract topics hidden in text documents. A document usually consists of multiple topics with different weights, and each topic can be described by the words typical of it. The most frequently used topic modeling methods are Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA).

=== References ===

 1. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
 1. Curiskis, S. A., Drake, B., Osborn, T. R., and Kennedy, P. J. (2020). An evaluation of document clustering and topic modelling in two online social networks: Twitter and Reddit. Information Processing & Management, 57(2):102034.
 1. Röder, M., Both, A., and Hinneburg, A. (2015). Exploring the space of topic coherence measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 399–408. https://dl.acm.org/doi/abs/10.1145/2684822.2685324

== Practical Session ==

In this session, we will use [[http://radimrehurek.com/gensim/|Gensim]] to model latent topics of Wikipedia documents. We will focus on the Latent Semantic Analysis and Latent Dirichlet Allocation models.

 1. Create a text file named `ia161-UCO-07.txt`, where UCO is your university ID.
 1. Train LSA and LDA models of the corpus with various numbers of topics using Gensim. Use [[https://colab.research.google.com/drive/19eaNzohQrukz-gepu6bKuBr7AXUIA1Ct?usp=sharing|Google Colab]].
 1. Follow the instructions in the Colab notebook.
 1. Save your results into the text file and submit it to the homework vault (Odevzdávárna).