= Topic identification, topic modelling =

[[https://is.muni.cz/auth/predmet/fi/ia161|IA161]] [[en/NlpInPracticeCourse|NLP in Practice Course]], Course Guarantee: Aleš Horák

Prepared by: Adam Rambousek, Jirka Materna

== State of the Art ==

Topic modeling is a statistical approach for discovering abstract topics hidden in text documents. A document usually consists of multiple topics with different weights, and each topic can be described by the typical words belonging to it. The most frequently used topic modeling methods are Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA).

=== References ===

 1. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
 1. Curiskis, S. A., Drake, B., Osborn, T. R., and Kennedy, P. J. (2020). An evaluation of document clustering and topic modelling in two online social networks: Twitter and Reddit. Information Processing & Management, 57(2):102034.
 1. Yee W. Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet Processes. Journal of the American Statistical Association, 101:1566–1581, 2006.
 1. Castellanos, A., Juan Cigarrán, and Ana García-Serrano. Formal concept analysis for topic detection: a clustering quality experimental analysis. Information Systems, 66:24–42, 2017.
 1. Xie, Pengtao, and Eric P. Xing. Integrating document clustering and topic modeling. arXiv preprint arXiv:1309.6874, 2013.
 1. Röder, M., Both, A., and Hinneburg, A. (2015). Exploring the space of topic coherence measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 399–408.

== Practical Session ==

In this session we will use [[http://radimrehurek.com/gensim/|Gensim]] to model latent topics of Wikipedia documents. We will focus on the Latent Semantic Analysis and Latent Dirichlet Allocation models.

 1. Gensim is already installed on epimetheus1.fi.muni.cz, which also offers faster model processing.
 1. Download and extract the corpus of Czech Wikipedia documents: [[htdocs:bigdata/wiki.tar.bz2|wiki corpus]].
 1. Train LSA and LDA models of the corpus for various numbers of topics using Gensim. You can use this template: [raw-attachment:models.py models.py] (a rough sketch of such a script is also shown below this list).
 1. Check the topic coherence for various parameter settings.
 1. For both LSA and LDA, select the best model (by inspecting the topics or by coherence score).
 1. For each model, select the 2 most significant topics that make sense to you and compare them with their coherence scores. Give each topic a name, save the names into a text file, and upload it into the homework vault (odevzdávárna). You can save the files in your home directory on the NLP computers and they will be accessible on the server.
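For orientation, here is a minimal sketch of training LSA and LDA models and measuring their coherence with Gensim. It is not the provided models.py template: the corpus layout (a directory of plain-text files, one document per file), the `wiki` path, the whitespace tokenization, and the topic counts are assumptions made for illustration only, so adapt them to the actual format of the extracted corpus.

{{{#!python
# -*- coding: utf-8 -*-
# Sketch: train LSA/LDA with Gensim and compare c_v topic coherence.
# Assumption (not from the course template): the extracted corpus is a
# directory of plain-text files, one document per file.

import glob
from gensim import corpora, models
from gensim.models.coherencemodel import CoherenceModel

CORPUS_DIR = "wiki"  # hypothetical path to the extracted corpus

# Read and tokenize the documents (naive whitespace tokenization).
texts = []
for path in glob.glob(CORPUS_DIR + "/*"):
    with open(path, encoding="utf-8") as f:
        texts.append(f.read().lower().split())

# Build the dictionary and the bag-of-words corpus.
dictionary = corpora.Dictionary(texts)
dictionary.filter_extremes(no_below=5, no_above=0.5)  # drop rare/ubiquitous words
corpus = [dictionary.doc2bow(tokens) for tokens in texts]
tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]

# Train LSA and LDA for several topic counts and report coherence.
for num_topics in (10, 20, 50, 100):
    lsa = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=num_topics)
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=num_topics, passes=2)

    for name, model in (("LSA", lsa), ("LDA", lda)):
        coherence = CoherenceModel(model=model, texts=texts,
                                   dictionary=dictionary,
                                   coherence="c_v").get_coherence()
        print(name, num_topics, "topics, c_v coherence:", coherence)
        # Print a few topics so they can be inspected and named by hand.
        for topic in model.print_topics(num_topics=5, num_words=10):
            print(topic)
}}}

The `c_v` measure used here is one of the coherence measures analysed by Röder et al. (2015); Gensim's CoherenceModel also supports `u_mass`, `c_uci`, and `c_npmi` if you want to compare several measures.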