CS395T, Fall 2011
BUR 212, Mon & Wed 2:00 - 3:30 pm

Instructors: Adam Klivans and Pradeep Ravikumar
TA: Yi-Chao Chen
Office Hours
Yi-Chao Chen (TA): TA Station 5, PAI 5.33, Thursdays 1:00 - 4:00 pm
Pradeep Ravikumar: ACES 2.434, Fridays 3:30 - 5:00 pm
Adam Klivans: TBD
Overview
A central problem in machine learning is to develop algorithms with provable guarantees on both their running time and the number of "training" observations they require. Computational Learning Theory has traditionally focused on the first issue (the computational complexity of learning algorithms), while Statistical Learning Theory has focused on the second (their statistical efficiency). In this course we will cover both aspects and try to understand how learning is constrained when both computation and data are limited.
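As a concrete illustration of the statistical side (a classical textbook bound, not taken from the course materials): in the realizable PAC model with a finite hypothesis class $H$, any hypothesis consistent with $m$ i.i.d. training examples has true error at most $\epsilon$ with probability at least $1 - \delta$, provided

$$ m \;\ge\; \frac{1}{\epsilon} \left( \ln |H| + \ln \frac{1}{\delta} \right). $$

The $\ln |H|$ term captures the data requirement (statistical efficiency); whether a consistent hypothesis can be found in polynomial time is the separate computational question.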
Grading
Four problem sets (3/4 of the final grade) and a final paper presentation (1/4 of the final grade).
(Optional) Textbooks/Papers

Statistical Learning Theory:
- All of Statistics. Larry Wasserman.
- The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Trevor Hastie, Robert Tibshirani, Jerome Friedman.
- A Probabilistic Theory of Pattern Recognition. Luc Devroye, Laszlo Györfi, Gabor Lugosi.

Computational Learning Theory:
- An Introduction to Computational Learning Theory. Michael Kearns, Umesh Vazirani.

Background on Probability and Statistics:
- Introduction to Probability and Statistics. Dimitri P. Bertsekas and John N. Tsitsiklis.
- Statistical Inference. George Casella, Roger L. Berger.

Graphical Models:
- Graphical Models, Exponential Families, and Variational Inference. M. J. Wainwright and M. I. Jordan. Foundations and Trends in Machine Learning, Vol. 1, Nos. 1-2, pp. 1-305, December 2008.
- Probabilistic Graphical Models: Principles and Techniques. D. Koller and N. Friedman.

High-dimensional Statistics:
- Paper: Sharp thresholds for noisy and high-dimensional recovery of sparsity using $\ell_1$-constrained quadratic programming (Lasso). M. J. Wainwright. IEEE Transactions on Information Theory, 55:2183-2202, May 2009.
- Paper: A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. S. Negahban, P. Ravikumar, M. J. Wainwright and B. Yu, 2010.
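For orientation (a standard definition, not from the course page): the Lasso referenced in the high-dimensional statistics papers above is the $\ell_1$-regularized least-squares estimator

$$ \hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n} \| y - X\beta \|_2^2 + \lambda_n \| \beta \|_1, $$

whose $\ell_1$ penalty encourages sparse solutions; the Wainwright paper characterizes when this program exactly recovers the support of a sparse $\beta$.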
Homeworks
Schedule