CS260: Machine Learning Theory
Monday/Wednesday, 2-3:50pm, Boelter 5422
Instructor: Jenn Wortman Vaughan

Recent Announcements

Course Overview

This course will provide a broad overview of the theoretical foundations underlying common machine learning algorithms. We will cover the following topics, though not necessarily in this order:

Who should take this course? This course is primarily intended for students with some background and interest in machine learning who would like to know more about the theoretical foundations, and students with a background and interest in theoretical computer science who would like to know more about machine learning. Although this course has no official prerequisites, students who enroll must have a basic knowledge of probability (roughly at the level of the first half of 112) and must be comfortable reading and writing formal mathematical proofs.

Who should not take this course? This course is not intended for students who are primarily interested in applications of machine learning. There is no programming required, and we won't spend much time talking about implementation. If you would prefer a more applied view of machine learning, consider taking 276A.

This course does count for credit for the AI major and minor field requirements for computer science Ph.D. students.

Staff Contact Info

Instructor: Jenn Wortman Vaughan
Office Hours: Friday, 2-3pm, 4532H Boelter Hall
Contact: jenn at cs

Grader: Jacob Mathew
Contact: jacobgmathew at gmail

Breakdown of Grades

Grades will be based on the following components:

If you believe that a mistake was made in the grading of a homework assignment, you may submit a request for a regrade. This request must be submitted in writing and must include a clear explanation of why you believe you should have received more points. All regrade requests must be received within one week of the graded assignment being returned.

Schedule & Lecture Notes

This schedule is tentative and likely to shift as we get further into the material. Lecture notes will be posted here as the quarter progresses.

Part 1: Classification and the PAC Model

Part 2: Online Learning in Adversarial Settings

Part 3: Some Practical Successes of Learning Theory

Part 4: Wrapping Things Up

Final projects will be due within a few days of the last day of class.

Additional Reading

In addition to the lecture notes that will be provided, students may find the following textbooks useful for exploring some of the topics that we cover in more depth. In particular, they provide excellent coverage of the PAC learning model and the material we will cover during the first segment of the course:

Several copies of the Kearns and Vazirani book will be on hold at the Science and Engineering Library. Please take advantage of these.

You might also find it helpful to look through lecture notes and slides from similar courses that have been offered at other universities such as Avrim Blum's course at CMU or Rob Schapire's course at Princeton. Links to specific notes from other courses that are especially relevant to particular lectures are included in the schedule above.

Piazza Message Board

As an experiment, we will try using a Piazza message board for class discussions this quarter. UCLA students can sign up by following the instructions here. Using Piazza is recommended, but not required.

Good uses of Piazza include:

The course Academic Honesty Policy must be followed on the message boards at all times. Do not post or request homework solutions! And please be polite.

Homework Assignments

Homework assignments will be made available here:

All homework solutions must be typed. You are strongly encouraged (though not required) to use LaTeX for your assignments. LaTeX makes it simple to typeset mathematical equations, and is extremely useful for grad students to know. Most of the academic papers you read were written with LaTeX, and probably most of the textbooks too.

If you are new to LaTeX, many good tutorials can be found online. A good reference for many common symbols is available here, or try Detexify, a cool machine learning tool that automatically finds the symbols you need. If you are a Mac user and would like to install LaTeX on your own machine, I highly recommend the bundled version of TeXShop and MacTeX available here. Windows users can try TeXnicCenter or MiKTeX. LaTeX is also installed on department-run Linux machines.
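To give a sense of how little is needed to get started, here is a minimal sketch of a LaTeX file that typesets a mathematical expression (the particular bound shown, a standard sample complexity bound for a finite hypothesis class, is just an illustrative example):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}  % standard packages for math symbols

\begin{document}

% Inline math goes between dollar signs:
A consistent learner with a finite hypothesis class $H$ needs roughly
$m \geq \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$
examples to achieve error at most $\epsilon$ with probability $1-\delta$.

% Displayed equations get their own environment:
\begin{equation}
  m \geq \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
\end{equation}

\end{document}
```

Compiling this with pdflatex (or through TeXShop) produces a PDF with the equation properly typeset, which is all you need for a homework write-up.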

Academic Honesty Policy

Collaboration on the homework assignments is encouraged! Discussing the problems with others can help you learn. Students are free to discuss the homework problems with anyone in the class under the following conditions:

  1. Each student must write down his or her solutions independently, and must understand the solutions he or she writes down. Talking over solutions is fine, but reading or copying another student's answers is not acceptable!
  2. Each student must write a list of all of his or her collaborators at the top of each assignment. This list should include anyone with whom the assignment was discussed.

These policies are described in more detail in the Academic Honesty Policy that must be signed by every student in the class. The Dean of Students also has a guide to Academic Integrity.

Final Projects

A full description of the final project is given in these final project guidelines. The guidelines provide details on the types of projects you can pursue, the grading criteria for each project type, and all key dates and milestones. Be sure to read the guidelines carefully and make a note of the deadlines!

Below are some suggestions for topics that could be explored in more detail for the final project, along with a couple of examples of relevant papers for each topic. All of these would make good literature synthesis topics, though you should be prepared to find additional relevant papers on your own! (Google Scholar is a good place to start.) This list might also inspire ideas for research projects. Of course, you are welcome to choose a topic that is not on the list.

Active Learning:

The Multi-Armed Bandit Problem:

Domain Adaptation and/or Multi-Source Learning:

Privacy-Preserving Machine Learning:

Relationships Between Markets and Learning:

Learning from a Crowd:

Learning Bounds for Reinforcement Learning:

Online Convex Optimization:

Machine Learning for Finance:

PAC-Bayes Bounds:

If you would like to suggest your own topic, try flipping through some recent publications from COLT (2011, 2010, 2009) to get a sense of the latest research in learning theory. You might also find a bit of theoretical work and some interesting applications that inspire you in the proceedings of NIPS (2011, 2010), ICML (2011, 2010), or UAI (2011, 2010).