CSE 562/662 - Natural Language Processing

Instructors: Brian Roark and Kristy Hollingshead

Class time: Mon/Wed 4:00 - 5:30 PM    Jan. 4 - Mar. 17, 2010

Class location: Wilson Clark Center - Room 403

Videoconferenced to OHSU's Marquam Hill Campus:
Biomedical Information Communication Center (BICC), Room 131b
(except on Jan.20 and Feb.24, when we will broadcast to the Biomedical Research Building (BRB), Room 403)

Office hours: Wed 11:00 AM - 1:00 PM, Central 129 (Kristy), or by appointment

Required textbook: None

Optional (suggested) textbook: Roark and Sproat, Computational Approaches to Morphology and Syntax

Also useful supplementary textbook: Jurafsky and Martin, Speech and Language Processing



The goal of this course is to give a broad but detailed introduction to the key algorithms and modeling techniques used for Natural Language Processing (NLP) today. With a few exceptions, NLP involves taking a sequence of words as input (e.g. a sentence) and returning some annotation(s) for that string. Well-known examples of this include part-of-speech tagging and syntactic parsing. Many other common tasks, e.g. shallow parsing or named-entity recognition, can be easily recast as tagging tasks; hence certain basic techniques can be widely applied within NLP. Applications such as automatic speech recognition, machine translation, information extraction, and question answering all make use of NLP techniques. By the end of this course, you should understand how to approach common natural language problems arising in these and other applications.
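To make the tagging framing concrete: the Viterbi algorithm covered in the first weeks finds the most probable tag sequence under a hidden Markov model. The sketch below is illustrative only; the tiny transition and emission probability tables are made-up toy numbers (a real tagger estimates them from annotated data).

```python
# Minimal Viterbi decoder for an HMM part-of-speech tagger.
# The transition/emission tables below are invented toy numbers,
# purely for illustration.

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Return the most probable tag sequence for `words`."""
    # best[i][t] = probability of the best tag sequence for words[:i+1] ending in tag t
    best = [{t: start_p[t] * emit_p[t].get(words[0], 1e-8) for t in tags}]
    back = [{}]
    for i in range(1, len(words)):
        best.append({})
        back.append({})
        for t in tags:
            prob, prev = max(
                (best[i - 1][s] * trans_p[s][t] * emit_p[t].get(words[i], 1e-8), s)
                for s in tags
            )
            best[i][t] = prob
            back[i][t] = prev
    # Trace back the best path from the final position.
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))

tags = ["DT", "NN", "VB"]
start_p = {"DT": 0.6, "NN": 0.3, "VB": 0.1}
trans_p = {"DT": {"DT": 0.05, "NN": 0.9, "VB": 0.05},
           "NN": {"DT": 0.1, "NN": 0.3, "VB": 0.6},
           "VB": {"DT": 0.5, "NN": 0.4, "VB": 0.1}}
emit_p = {"DT": {"the": 0.9},
          "NN": {"dog": 0.4, "walks": 0.1},
          "VB": {"walks": 0.5}}

print(viterbi(["the", "dog", "walks"], tags, start_p, trans_p, emit_p))
# → ['DT', 'NN', 'VB']
```

The same dynamic program, with max replaced by sum, yields the Forward algorithm used in Forward-Backward.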


There is no official programming language for this course, but a fair amount of programming will be required to complete the assignments, so facility with some programming language is assumed.


70% of your grade will be based on the homework assignments, 15% on the final project, 10% on the final exam, and 5% on in-class participation.

What we'll cover and an approximate schedule

Roughly speaking, half of the course will be devoted to finite-state methods, and half to context-free methods (or beyond). Algorithms for annotating linguistic structure will always be presented with statistical variants, which provide the basis for disambiguation.

Finite-state/tagging models

1/4    Introduction to NLP; overview of class structure; homework and
       term project options; tutorial options
1/6    Dynamic programming for finite-state models: Viterbi search
       Reading: Roark & Sproat, sections 6.3 & 6.3.1. Assignment: HW1 (FAQ1)
1/11   More dynamic programming: Forward-Backward search; other
       finite-state tagging tasks
       Reading: Roark & Sproat, sections 6.3.3 & 6.4
1/13   Language modeling
       Reading: CG98
1/18   No class -- MLK Jr. Day
1/20   Dependency parsing
       Assignment: HW2
1/25   Ambiguity on input and output; n-best lists and lattices;
       pipeline systems & intro to our text processing "pipeline"
1/27   Alignment in machine translation
2/1    Graph-based methods in NLP
       Reading: OER05; Mil05

Context-free/parsing models

2/3    Context-free grammars (CFGs)
2/8    Context-free parsing: CYK
       Assignment: HW3 (FAQ3)
2/10   More context-free parsing
2/15   No class -- Presidents' Day
2/17   Bi-text parsing
       Reading: Chi05; CKtut06
2/22   Advanced context-free parsing: grammar transformations
       Assignment: HW4

Beyond context-free; semantics and meaning

2/24   Context-sensitive grammars: Unification, TAG, CCG
3/1    Machine learning in NLP (for sequence processing)
       Reading: Col02; DFL07; SP03; LMP01; SMtut06
3/3    HW4 student presentations; Semantics: WSD, NP coreference,
       semantic role labeling
       Assignment: HW5 (FAQ5)
3/8    Information Retrieval (IR) and Question Answering (QA)
       Reading: CH08; RH02
3/10   Co-reference resolution, Information Extraction (IE), and
       automatic summarization
3/15   Class summary; surveying the state of the art: existing systems
       and system competitions, likely future research directions
3/17   Final Exam; Final Projects due
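For the context-free half of the course, CYK recognition (covered 2/8) is the canonical dynamic program over spans of the input. The sketch below uses a toy grammar in Chomsky normal form, invented here purely for illustration.

```python
# Minimal CYK recognizer for a CFG in Chomsky normal form.
# The toy grammar below is invented for illustration only.
from itertools import product

def cyk(words, lexical, binary, start="S"):
    """Return True iff `words` is derivable from `start`."""
    n = len(words)
    # chart[i][j] = set of nonterminals spanning words[i:j+1]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][i] = {A for A, terminal in lexical if terminal == w}
    for span in range(2, n + 1):              # width of the span
        for i in range(n - span + 1):         # left edge
            j = i + span - 1                  # right edge
            for k in range(i, j):             # split point
                for B, C in product(chart[i][k], chart[k + 1][j]):
                    for A, rhs in binary:
                        if rhs == (B, C):
                            chart[i][j].add(A)
    return start in chart[0][n - 1]

lexical = [("DT", "the"), ("NN", "dog"), ("VP", "barks")]
binary = [("S", ("NP", "VP")), ("NP", ("DT", "NN"))]

print(cyk(["the", "dog", "barks"], lexical, binary))  # → True
print(cyk(["dog", "the", "barks"], lexical, binary))  # → False
```

Replacing the sets of nonterminals with (nonterminal, probability) tables turns this recognizer into the probabilistic CYK parser used for disambiguation.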

CG98     Stanley Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical report TR-10-98, Harvard University, August 1998.
Chi05     David Chiang. A hierarchical phrase-based model for statistical machine translation. Proceedings of the Annual Meeting of the ACL, pp. 263-270, 2005.
CKtut06     David Chiang and Kevin Knight. An introduction to synchronous grammars: part of a tutorial given at ACL 2006.
CH08     K. Bretonnel Cohen and Lawrence Hunter. Getting started in text mining. PLoS Computational Biology, 4(1), 2008.
Col02     Michael Collins. Discriminative training methods for Hidden Markov Models: theory and experiments with perceptron algorithms. Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1-8, 2002.
DFL07     Dina Demner-Fushman and Jimmy Lin. Answering Clinical Questions with Knowledge-Based and Statistical Techniques. Computational Linguistics, 33(1):63-103, 2007.
GKM05     Trond Grenager, Dan Klein and Chris Manning. Unsupervised Learning of Field Segmentation Models for Information Extraction. Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pp. 371-378, 2005.
HR07     Kristy Hollingshead and Brian Roark. Pipeline Iteration. Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 952-959, 2007.
LMP01     John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional Random Fields: probabilistic models for segmenting and labeling sequence data. Proceedings of the International Conference on Machine Learning (ICML), 2001.
Mil05     Rada Mihalcea. Unsupervised Large-Vocabulary Word Sense Disambiguation with Graph-based Algorithms for Sequence Data Labeling. Proceedings of HLT-EMNLP, 2005.
OER05     Jahna Otterbacher, Gunes Erkan and Dragomir R. Radev. Using Random Walks for Question-focused Sentence Retrieval. Proceedings of HLT-EMNLP, 2005.
RH02     Deepak Ravichandran and Eduard Hovy. Learning Surface Text Patterns for a Question Answering System. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 41-47, 2002.
SP03     Fei Sha and Fernando Pereira. Shallow parsing with conditional random fields. Proceedings of the HLT-NAACL Annual Meeting, pp. 134-141, 2003.
Norm01     Richard Sproat, Alan Black, Stanley Chen, Shankar Kumar, Mari Ostendorf, and Christopher Richards. Normalization of non-standard words. Computer Speech and Language, 15(3):287-333, 2001.
SMtut06     Charles Sutton and Andrew McCallum. An introduction to Conditional Random Fields for relational learning. Book chapter in Introduction to Statistical Relational Learning. Edited by Lise Getoor and Ben Taskar. MIT Press, 2006.