Topic | [1] | [2] | [3] | [4] | [5] | [6]
---|---|---|---|---|---|---
Introduction | (1. Components of a NL Grammar) | 1. Introduction | 1. Introduction | — | 1. Introduction | 1. Classical Approaches to NLP
Evaluation | 10.9 | 5.7, 9.8, 23.7 & 24.4.2 | 5.3 Hypothesis Testing; 8.1 Evaluation Measures | 8. Evaluation in IR | 4.4 Evaluating Classifiers; 4.5 Building Datasets | —
Tokens, Words and Language Models | — | todo | todo | todo | 6.1 N-Gram Language Models; 6.2 Smoothing and Discounting | 2.1-2.3 Text Preprocessing
PoS Tagging | 8.4 PoS Tagging & Lemmatization | 5. PoS Tagging; 6. HMM & MaxEnt Models | 3.1 PoS & Morphology; 10. PoS Tagging; 9. Markov Models | — | 7. Sequence Labeling | 10. PoS Tagging
HMM Decoding and Learning | todo | todo | todo | todo | 7.3 The Viterbi Algorithm; 7.4 Hidden Markov Models; 8.1 Part-of-Speech Tagging; 5.2 Applications of Expectation-Maximization | todo
Text Classification | — | — | 14. Clustering; 16. Text Categorization | 13. Text Classification & Naive Bayes; 14. Vector Space Classification; 15. SVM & ML on Documents; 16. Flat Clustering; 17. Hierarchical Clustering; 18. Matrix Decompositions & LSI | 2. Linear Text Classification | —
Vector Space Semantics (and Information Retrieval) | 10.5 to 10.8 | 23.1 Information Retrieval | 15. Topics in IR | 6. Scoring, term weighting, ...; 7. Computing scores...; 8. Evaluation in IR | 14.1 The Distributional Hypothesis; 14.2 Design Decisions for Word Representations | 19. Information Retrieval
Lexical Semantics | 10.11 Lexical Semantics in LE; 10.14 Lex. Sem. Resources | 19. Lexical Semantics; 20. Computational Lex. Sem. | 3.3 Semantics & Pragmatics | — | todo | Semantic Analysis: 5.1, 5.3
Neural Network Approaches to NLP (incl. Deep Learning) | — | todo | todo | todo | 3. Nonlinear Classification; 6.3 Recurrent Neural Network Language Models; 14.5 Neural Word Embeddings | todo
Generation | — | todo | todo | todo | 19. Text Generation | todo
Modern NLP | — | todo | todo | todo | todo | todo
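
The *HMM Decoding and Learning* row is the most algorithmic entry in the table, so a small worked example may help alongside the readings. Below is a minimal sketch of Viterbi decoding for a toy HMM PoS tagger; the three-tag tag set, the transition/emission probabilities, and the sample sentence are invented for illustration and are not taken from any of the referenced chapters.

```python
import numpy as np

# Toy HMM PoS tagger: hypothetical tag set, vocabulary and probabilities.
tags = ["DET", "NOUN", "VERB"]            # hidden states
vocab = {"the": 0, "dog": 1, "barks": 2}  # observation indices

# P(tag_t | tag_{t-1}) and P(word | tag); each row sums to 1 (made-up numbers).
trans = np.array([[0.1, 0.8, 0.1],    # from DET
                  [0.1, 0.2, 0.7],    # from NOUN
                  [0.4, 0.5, 0.1]])   # from VERB
emit  = np.array([[0.9, 0.05, 0.05],  # DET emits: the, dog, barks
                  [0.1, 0.8,  0.1 ],  # NOUN
                  [0.1, 0.1,  0.8 ]]) # VERB
start = np.array([0.6, 0.3, 0.1])     # initial tag distribution


def viterbi(words):
    """Return the most probable tag sequence via dynamic programming."""
    obs = [vocab[w] for w in words]
    n, k = len(obs), len(tags)
    delta = np.zeros((n, k))            # best log-prob of a path ending in tag j at step t
    back = np.zeros((n, k), dtype=int)  # argmax backpointers

    delta[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, n):
        # scores[i, j] = delta[t-1, i] + log P(j | i) + log P(obs_t | j)
        scores = delta[t - 1][:, None] + np.log(trans) + np.log(emit[:, obs[t]])
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)

    # Follow backpointers from the best final state.
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [tags[j] for j in reversed(path)]


print(viterbi(["the", "dog", "barks"]))  # -> ['DET', 'NOUN', 'VERB']
```

This is the standard max-product recurrence (e.g. Eisenstein §7.3) run in log space to avoid underflow; supervised learning of `trans` and `emit` would just be relative-frequency counts over a tagged corpus, and the unsupervised case (Baum-Welch/EM) is what §5.2 of that book covers.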
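
Likewise, for the *Vector Space Semantics (and Information Retrieval)* row, here is a toy sketch of the vector-space model those chapters develop: TF-IDF term weighting plus cosine scoring of documents against a query. The three-document corpus and the query are made up for the example.

```python
import math
from collections import Counter

# Hypothetical three-document corpus; whitespace tokenization for brevity.
docs = ["the cat sat on the mat",
        "the dog barks at the cat",
        "dogs and cats are pets"]
tokenized = [d.split() for d in docs]
N = len(tokenized)

# idf(t) = log(N / df(t)), where df(t) = number of documents containing t.
df = Counter(t for doc in tokenized for t in set(doc))
idf = {t: math.log(N / df[t]) for t in df}

def tfidf(tokens):
    """Sparse TF-IDF vector as a dict; unseen terms get weight 0."""
    tf = Counter(tokens)
    return {t: tf[t] * idf.get(t, 0.0) for t in tf}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values())) or 1.0
    return dot / (norm(u) * norm(v))

# Rank the corpus against a query by cosine similarity.
query = tfidf("cat on the mat".split())
for text, doc in zip(docs, map(tfidf, tokenized)):
    print(f"{cosine(query, doc):.3f}  {text}")
```

The first document scores highest, as expected: it shares the rare terms "on" and "mat" with the query, while the common term "the" is down-weighted by its low IDF.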