Changes between Initial Version and Version 1 of en/NlpInPracticeCourse/2021/NamedEntityRecognition


Timestamp: Aug 30, 2022, 10:39:57 AM
Author: Ales Horak
Comment: copied from private/NlpInPracticeCourse/NamedEntityRecognition

= Named Entity Recognition =

[[https://is.muni.cz/auth/predmet/fi/ia161|IA161]] [[en/NlpInPracticeCourse|NLP in Practice Course]], Course Guarantee: Aleš Horák

Prepared by: Zuzana Nevěřilová

== State of the Art ==

NER aims to ''recognize'' and ''classify'' names of people, locations, organizations, products, artworks, and sometimes dates, money amounts, measurements (numbers with units), law or patent numbers, etc. Known issues are the ambiguity of words (e.g. ''May'' can be a month, a verb, or a name), the ambiguity of classes (e.g. ''HMS Queen Elizabeth'' can be a ship, not a person), and the inherent incompleteness of lists of NEs.
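The class ambiguity described above can be illustrated with a minimal gazetteer lookup. This is a toy sketch with invented word lists (`GAZETTEER` and `candidate_classes` are hypothetical names), not a real NER component:

```python
# Toy gazetteer: a single surface form may belong to several NE classes,
# so a plain list lookup cannot decide without context.
GAZETTEER = {
    "May": {"PERSON", "DATE"},              # a name or a month
    "Queen Elizabeth": {"PERSON", "SHIP"},  # HMS Queen Elizabeth is a ship
    "Amazon": {"ORGANIZATION", "LOCATION"},
}

def candidate_classes(phrase):
    """Return every NE class the gazetteer allows for a phrase."""
    return GAZETTEER.get(phrase, set())
```

Deciding which candidate applies in a given sentence ("May I ask" vs. "in May") is exactly what a trained sequence classifier, such as the CRF and BERT models in the references below, has to learn.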

Named entity recognition (NER) is used mainly in information extraction (IE), but it can also significantly improve other NLP tasks such as syntactic parsing.

=== Example from IE ===

|| In 2003, Hannibal Lecter (as portrayed by Hopkins) was chosen by the American Film Institute as the number one movie villain. ||

Hannibal Lecter <-> Hopkins

=== Example concerning syntactic parsing ===

|| Wish You Were Here is the ninth studio album by the English progressive rock group Pink Floyd. ||

vs.

|| Wish_You_Were_Here is the ninth studio album by the English progressive rock group Pink Floyd. ||
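The second variant can be produced by a simple preprocessing step that joins each recognized entity into one token before parsing. A minimal sketch, assuming the entity spans (token start/end offsets) come from a prior NER step; `merge_entities` is a hypothetical helper, not part of any particular toolkit:

```python
def merge_entities(tokens, spans):
    """Join each entity span (start, end) into one underscore-connected
    token, so a syntactic parser treats the name as a single unit."""
    starts = {start: end for start, end in spans}
    out, i = [], 0
    while i < len(tokens):
        if i in starts:
            end = starts[i]
            out.append("_".join(tokens[i:end]))
            i = end
        else:
            out.append(tokens[i])
            i += 1
    return out
```

For example, `merge_entities("Wish You Were Here is the ninth studio album".split(), [(0, 4)])` yields `["Wish_You_Were_Here", "is", "the", "ninth", "studio", "album"]`.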

=== References ===

 1. Charles Sutton and Andrew !McCallum: An Introduction to Conditional Random Fields. Foundations and Trends in Machine Learning 4 (4). 2012. [[http://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf]]
 1. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 2019. [[https://arxiv.org/abs/1810.04805]]
 1. Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu: Automated Concatenation of Embeddings for Structured Prediction. Proceedings of ACL-IJCNLP 2021. [[https://arxiv.org/abs/2010.05006]]

== Practical Session ==

=== Czech Named Entity Recognition ===

In this workshop, we train a new NER application for the Czech language. We work with free resources and software tools: the Czech Named Entity Corpus (CNEC) and !FastText pre-trained word embeddings. We build a neural network to solve the problem.

 1. Create `<YOUR_FILE>`, a text file named `ia161-UCO-04.txt`, where ''UCO'' is your university ID.
 1. Open the Google Colab notebook at [[https://colab.research.google.com/drive/1mnz-P30CLxrxQ0yyqpcLwVJgi7e59shi?usp=sharing]].
 1. Follow the instructions in the notebook. There are three obligatory tasks. Write your answers to `<YOUR_FILE>`.
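Training data for such a network is commonly encoded with BIO labels (B- marks the first token of an entity, I- the following tokens, O everything outside). A minimal sketch of this encoding, assuming span annotations as (start, end, label) triples rather than the actual CNEC markup:

```python
def to_bio(tokens, spans):
    """Convert entity spans [(start, end, label), ...] to per-token
    BIO labels; tokens outside every span get 'O'."""
    labels = ["O"] * len(tokens)
    for start, end, label in spans:
        labels[start] = "B-" + label
        for i in range(start + 1, end):
            labels[i] = "I-" + label
    return list(zip(tokens, labels))
```

For example, `to_bio(["Pink", "Floyd", "released", "it"], [(0, 2, "ORG")])` yields `[("Pink", "B-ORG"), ("Floyd", "I-ORG"), ("released", "O"), ("it", "O")]` — the token/label pairs that the network is trained to predict.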