Location
CS2311
Event Description

Speaker: Dr. Michael Collins (http://www.cs.columbia.edu/~mcollins/)

Title: Provable Machine Learning Methods for Natural Language Processing

Abstract: Over the last couple of decades, there has been remarkable progress in natural language processing (NLP), largely due to the advent of statistical and machine learning methods. In this talk I will describe recent work on "provable" machine learning methods for several NLP problems: that is, algorithms that come with fairly strong guarantees of efficiency or correctness.

The first topic I'll cover is the use of Lagrangian relaxation, and dual decomposition, for inference in NLP. Lagrangian relaxation is a method for combinatorial optimization going back to seminal work by Held and Karp (1970) on the traveling salesman problem. I'll describe applications to various inference problems in NLP, including parsing and machine translation.

The second topic I'll cover is spectral learning for latent-variable models in NLP. Latent-variable models have widespread application in NLP, speech, vision, and other fields. The EM algorithm is a hugely successful parameter estimation method for latent-variable models, but has relatively weak guarantees. I'll describe recent work on spectral learning algorithms that have strong statistical guarantees, and are in practice much more efficient than EM.

The first part of the talk covers joint work with Yin-Wen Chang, Tommi Jaakkola, Terry Koo, Sasha Rush, and David Sontag. The second part of the talk includes joint work with Shay Cohen, Karl Stratos, Dean Foster, and Lyle Ungar.
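
For readers who want a feel for the first topic, below is a minimal Python sketch of the dual decomposition idea on a toy agreement problem between a Viterbi sequence model and an independent per-position classifier. This is not the parsing or translation algorithms from the talk; all scores and names (emis, trans, unary, u) are made up for illustration.

    # Toy dual decomposition: make a Viterbi sequence model ("model A") and an
    # independent per-position classifier ("model B") agree on one tag sequence.
    # All scores below are random stand-ins for real model scores.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pos, n_tags = 6, 4                           # sentence length, tag set size
    emis = rng.normal(size=(n_pos, n_tags))        # model A: per-position scores
    trans = rng.normal(size=(n_tags, n_tags))      # model A: transition scores
    unary = rng.normal(size=(n_pos, n_tags))       # model B: per-position scores

    def viterbi(emis, trans):
        """Exact argmax tag sequence for the sum of emission + transition scores."""
        n, k = emis.shape
        delta = np.zeros((n, k))
        back = np.zeros((n, k), dtype=int)
        delta[0] = emis[0]
        for t in range(1, n):
            scores = delta[t - 1][:, None] + trans + emis[t][None, :]
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0)
        tags = [int(delta[-1].argmax())]
        for t in range(n - 1, 0, -1):
            tags.append(int(back[t, tags[-1]]))
        return np.array(tags[::-1])

    u = np.zeros((n_pos, n_tags))                  # Lagrange multipliers
    for it in range(200):
        y = viterbi(emis + u, trans)               # sub-problem A: max f(y) + u.y
        z = (unary - u).argmax(axis=1)             # sub-problem B: max g(z) - u.z
        if np.array_equal(y, z):                   # agreement certifies optimality
            print(f"converged after {it} iterations: {y}")
            break
        step = 1.0 / (1 + it)                      # diminishing subgradient step
        u -= step * (np.eye(n_tags)[y] - np.eye(n_tags)[z])

Each iteration solves the two sub-problems independently (each is easy on its own) and nudges the multipliers u toward agreement. If the sub-problems agree, the shared solution is provably optimal for the combined problem; if they never agree, there is a duality gap, which in practice is handled by tightening the relaxation or falling back on approximate decoding.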
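
Likewise for the second topic, here is a toy sketch of the moment-plus-SVD idea that underlies spectral learning, using a synthetic HMM; the full algorithms in the literature (e.g., spectral learning of HMMs or latent-variable PCFGs) are considerably more involved.

    # Toy illustration of the moment + SVD idea behind spectral learning: for an
    # HMM with k hidden states, the bigram matrix P21[i, j] = Pr(x_{t+1}=i, x_t=j)
    # has rank at most k, so an SVD of an estimate of P21 recovers a k-dimensional
    # subspace without any EM iterations. The HMM parameters here are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    k, n_obs = 3, 8                                # hidden states, observed symbols
    T = rng.dirichlet(np.ones(k), size=k)          # transitions: T[h, h'] = Pr(h'|h)
    O = rng.dirichlet(np.ones(n_obs), size=k)      # emissions:   O[h, x] = Pr(x|h)
    pi = rng.dirichlet(np.ones(k))                 # initial state distribution

    # Exact bigram moment matrix; a spectral algorithm would estimate this
    # directly from bigram counts in a corpus.
    P21 = O.T @ T.T @ np.diag(pi) @ O

    svals = np.linalg.svd(P21, compute_uv=False)
    print(np.round(svals, 6))                      # exactly k nonzero values

The point of the sketch: the bigram moment matrix factors through the hidden states, so a single SVD exposes a k-dimensional subspace directly from observable statistics. Full spectral algorithms build on this to estimate the model itself, with statistical guarantees and no EM-style iteration.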

Bio: Before joining Columbia University in 2011 as a full professor, Michael was an assistant and then associate professor at MIT (2003–2010) and a researcher at AT&T (1999–2002); he received his PhD from the University of Pennsylvania in 1998.

His research interests lie in Natural Language Processing and Machine Learning. In addition to traditional NLP tasks such as parsing and machine translation, he has also worked on AI problems such as “learning the mapping between natural language and logical forms” and “case-factor diagrams for structured probabilistic modeling”. He is the Collins, as in the “Collins parser”, which was based on his PhD thesis. He has developed a number of influential algorithms for structured learning in NLP and AI, some of which led to best paper awards at EMNLP 2002, EMNLP 2004, UAI 2004, UAI 2005, CoNLL 2008, and EMNLP 2010.