= What do we work on in the NLP Lab? =

Try some of our language tools:
 * [http://prirucka.ujc.cas.cz/ The Online Language Handbook]
 * [http://nlp.fi.muni.cz/cz_accent/ CZ accent][[BR]]''for adding diacritics''
 * [http://nlp.fi.muni.cz/~xpopelk/xplain/ X-Plain][[BR]]''the Activity game with a computer''
 * [http://nlp.fi.muni.cz/projekty/wwwajka/ Ajka][[BR]]''morphological analyzer''
 * [http://nlp.fi.muni.cz/projekty/wwwsynt/query.cgi Synt] and [http://nlp.fi.muni.cz/projekty/set/ SET][[BR]]''syntactic analyzers''

The [https://nlp.fi.muni.cz/en/nlplab Natural Language Processing Centre] focuses on obtaining practical results in the field of information technology and linguistics. Project results are regularly published at conferences; the NLP Centre also cooperates with similarly oriented institutes in the Czech Republic and abroad, and offers students the opportunity to take part in exchanges with partner universities abroad.

More detailed information follows below, grouped into chapters by topic:

|| [[en/MainTopics#corp Corpora]] ||
|| [[en/MainTopics#dict Dictionaries]] ||
|| [[en/MainTopics#morph Morphology]] ||
|| [[en/MainTopics#syntan Syntactic Analysis]] ||
|| [[en/MainTopics#semant Semantics]] ||

== Corpora == #corp

[[Image(/trac/research/raw-attachment/wiki/en/MainTopics/corpora.png)]]

A corpus is a collection of text data in electronic form. As a significant source of linguistic data, corpora make it possible to investigate many frequency-related phenomena in language, and nowadays they are an indispensable tool in NLP. In addition to corpora of general texts, corpora for specific purposes are also produced, such as annotated, domain-specific, spoken, or error corpora.

Corpora are used for the investigation and development of natural language grammars. They are further helpful when developing a grammar checker, choosing entries for a dictionary, or as a data source for automatic text categorization based on machine learning. Parallel corpora consist of the same texts in several languages; they are used especially in word sense disambiguation and machine translation.

Nowadays the main source of corpus texts is the World Wide Web. To obtain quality data on a larger scale, pre-processing tools for filtering out undesired content need to be used: notably the '''jusText''' tool for removing boilerplate, the '''onion''' tool for removing duplicate text parts, and the '''chared''' utility for detecting text encoding. The popular '''gensim''' framework for extracting semantic topics from documents is also very useful. The NLP Centre has produced a complete set of tools for creating and managing corpora, the '''Corpus Architect''', which can store and manage corpora containing over 100 billion word tokens.
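For a concrete taste of this cleaning step, here is a minimal sketch using the Python interface of '''jusText''' to strip boilerplate from one downloaded page; the URL and the English stoplist are chosen just for the example.

{{{#!python
import requests
import justext

# Download one web page; any URL would do for the illustration.
response = requests.get("https://nlp.fi.muni.cz/en/nlplab")

# Classify each paragraph as boilerplate (menus, footers, ads) or real text.
paragraphs = justext.justext(response.content, justext.get_stoplist("English"))

# Keep only the paragraphs that carry actual content.
for paragraph in paragraphs:
    if not paragraph.is_boilerplate:
        print(paragraph.text)
}}}

In a real web-corpus pipeline the same classification is applied to every crawled page, and the surviving text is deduplicated with '''onion''' before it enters a corpus.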
[[Image(/trac/research/raw-attachment/wiki/en/MainTopics/metatrans.png)]]

''Related projects:''
 * [http://nlp.fi.muni.cz/projekty/bonito/ Bonito]
 * [http://ske.fi.muni.cz/ Corpus Architect]
 * [http://www.sketchengine.co.uk/ Word Sketch Engine]
 * [http://nlp.fi.muni.cz/projekty/cpa/ CPA]
 * [http://nlp.fi.muni.cz/projekty/justext/ jusText]
 * [http://code.google.com/p/onion/ onion]
 * [http://code.google.com/p/chared/ chared]
 * [http://radimrehurek.com/gensim/index.html Gensim]

([https://nlp.fi.muni.cz/en/main_topics#guidepost back to the list of topics])

== Dictionaries == #dict

[[Image(/trac/research/raw-attachment/wiki/en/MainTopics/debII_slovniky.png, align=right)]]

Dictionaries have always been a fundamental part of every linguist's basic equipment. Handling paper dictionaries, however, is rather inconvenient. One of the first projects of the NLP Centre was therefore to digitize classic dictionaries of Czech and to develop a set of advanced tools for processing lexicographic data, a so-called lexicographer's workbench. This term refers to a system that gives each expert user easy access to various linguistic resources and provides an application interface for searching and editing the data.

One of our dictionary-related projects is the development of the '''DEB platform''', which offers all the features mentioned above thanks to its client-server architecture. One of its client applications is the '''DEBDict''' dictionary viewer, which contains, apart from digitized dictionaries, several encyclopedias as well as an onomastic and a phraseological dictionary. DEB applications are developed in the XUL language and are available as extensions for the Firefox web browser.

''Related projects:''
 * [http://nlp.fi.muni.cz/projekty/deb2/ DEB II]
 * [http://nlp.fi.muni.cz/projekty/deb2/debdict/ DEBDict]
 * [http://nlp.fi.muni.cz/projekty/deb2/#debvisdic DEBVisDic]
 * [http://nlp.fi.muni.cz/publications/slovko2005_ydana_hales/slovko2005_ydana_hales.pdf Verbalex]
 * [http://metatrans.fi.muni.cz/ MetaTrans]
 * [http://nlp.fi.muni.cz/projekty/cpa/ CPA]

([https://nlp.fi.muni.cz/en/main_topics#guidepost back to the list of topics])

== Morphology == #morph

[[Image(/trac/research/raw-attachment/wiki/en/MainTopics/majka_nlpportal.png, align=right)]]

Morphological analysis gives a basic insight into natural language by studying how to distinguish and generate the grammatical forms of words that arise through inflection (i.e. declension and conjugation). It works with a set of tags describing the grammatical categories of the word form concerned, most notably its base form (lemma) and paradigm. Automatic analysis of word forms in free text can be used, for instance, in grammar checker development, and can aid corpus tagging or semi-automatic dictionary compilation.

The NLP Centre has produced a general morphological analyzer for Czech, '''ajka''', which covers a vocabulary of over 6 million word forms. It has further served as the base for a similar analyzer for Slovak, for the '''fispell''' grammar checker, for the '''czaccent''' converter of ASCII text to text with diacritics, and for an interactive interface to the Jabber instant messaging protocol.
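To illustrate the shape of the task, the following toy sketch maps word forms to (lemma, tag) analyses. The two-entry lexicon is invented for this example, the attributive tags only imitate the style of ajka's tagset, and a real analyzer derives word forms from paradigms instead of listing them all.

{{{#!python
# Toy stand-in for a morphological analyzer: each known word form
# maps to one or more (lemma, tag) analyses.  A word form can be
# ambiguous, so the lookup returns a list.
ANALYSES = {
    "ženy": [("žena", "k1gFnSc2"),       # noun, feminine, genitive singular
             ("žena", "k1gFnPc1")],      # noun, feminine, nominative plural
    "nese": [("nést", "k5eAaImIp3nS")],  # verb, present indicative, 3rd sg.
}

def analyze(word_form):
    """Return every (lemma, tag) analysis known for the word form."""
    return ANALYSES.get(word_form, [])

for form in ("ženy", "nese"):
    for lemma, tag in analyze(form):
        print(f"{form} -> lemma={lemma}, tag={tag}")
}}}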
''Related projects:''
 * [http://nlp.fi.muni.cz/projekty/ajka/ Ajka]
 * [http://nlp.fi.muni.cz/ma/free.html Fajka (the analyzer with free data)]
 * [http://nlp.fi.muni.cz/cz_accent/ CZ accent]

([https://nlp.fi.muni.cz/en/main_topics#guidepost back to the list of topics])

== Syntactic Analysis == #syntan

[[Image(/trac/research/raw-attachment/wiki/en/MainTopics/synt_tree.png, align=right)]]

The goal of syntactic analysis is to determine whether the input text string is a sentence of the given (natural) language. If it is, the result of the analysis contains a description of the syntactic structure of the sentence, for example in the form of a derivation tree. Such formalizations aim at making computers "understand" the grammar of natural languages. Syntactic analysis can be utilized, for instance, when developing a punctuation corrector or a dialogue system with a natural language interface, or as a building block in a machine translation system.

Czech exhibits rich inflection and free word order, and it therefore belongs among the languages that are very hard to analyze, requiring more grammar rules than most other languages. The NLP Centre is developing several syntactic analyzers. The '''synt''' analyzer is based on a handcrafted Czech meta-grammar enhanced with semantic actions and contextual constraints. '''SET''' is a popular lightweight syntactic analyzer based on a set of patterns. Both '''synt''' and '''SET''' parse Czech sentences with an accuracy close to 90%. For educational purposes we also offer a simple syntactic analyzer, '''Zuzana'''.

''Related projects:''
 * [http://nlp.fi.muni.cz/projekty/wwwsynt/ Synt]
 * [http://nlp.fi.muni.cz/projekty/set/ SET]
 * [http://nlp.fi.muni.cz/projekty/zuzana/ Zuzana]

([https://nlp.fi.muni.cz/en/main_topics#guidepost back to the list of topics])

== Semantics == #semant

[[Image(/trac/research/raw-attachment/wiki/en/MainTopics/dict2_small.png, align=left)]]

Semantic and pragmatic analysis make up the most complex phase of language processing, as they build on the results of all the disciplines mentioned above. The ultimate touchstone at this level is machine translation, which has not yet been implemented for Czech with satisfactory results. One of the long-term projects of the NLP Centre is the use of '''Transparent Intensional Logic (TIL)''' as a semantic representation of knowledge and subsequently as a transfer language in automatic machine translation. At the current stage it is realistic to process knowledge in a simpler form: considerably less complex tasks have been addressed, such as machine translation for a restricted domain (e.g. official documents and weather reports) or semi-automatic machine translation between closely related languages. The resources exploited in these applications are corpora, semantic nets, and electronic dictionaries.

In the field of meaning and knowledge representation, we should mention the notable contribution of NLP Centre members to the '''EuroWordNet''' and '''Balkanet''' projects, which aimed at building a multilingual '''WordNet'''-like semantic net.
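To give a taste of what a WordNet-like semantic net provides, here is a minimal sketch that queries the Princeton WordNet shipped with Python's NLTK; the word ''car'' and the relations shown are picked just for the example.

{{{#!python
from nltk.corpus import wordnet as wn  # first run: nltk.download("wordnet")

# Every sense of "car" is a synset, one node of the semantic net.
for synset in wn.synsets("car"):
    print(synset.name(), "-", synset.definition())

# Relations such as hypernymy and hyponymy link synsets into a hierarchy.
car = wn.synset("car.n.01")
print("hypernyms:", [s.name() for s in car.hypernyms()])
print("hyponyms: ", [s.name() for s in car.hyponyms()][:5])
}}}

Multilingual wordnets of the EuroWordNet and Balkanet kind link such synsets across languages through a shared inter-lingual index, which is what makes them usable in translation-oriented tasks.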
''Related projects:''
 * [http://nlp.fi.muni.cz/projekty/deb2/#debvisdic DEBVisDic]
 * [http://www.fi.muni.cz/~hales/disert/ Logical Analysis of Czech Sentences in TIL]
 * [http://nlp.fi.muni.cz/projekty/vizualni_lexikon/ Visual Browser]
 * [http://radimrehurek.com/gensim/index.html Gensim]

''Animated demonstration of the Visual Browser:''

[[Image(/trac/research/raw-attachment/wiki/en/MainTopics/vl_anim.gif)]]

== Further information == #Further_information

 * [http://nlp.fi.muni.cz/projekty/ List of selected NLPlab projects]
 * [https://nlp.fi.muni.cz/nlpis/baliky.php?lang=en&type=free Currently offered thesis topics]