Instructors: Meisam Razaviyayn, Vatsal Sharan
Basic Information
- Lecture time: Wednesday 4:00 pm to 7:20 pm
- Lecture place: WPH 207
- TAs: TBD
- CP & Grader: TBD
- Office Hours: Vatsal: Wednesdays 2pm-3pm (from 9/3 to 10/1) in the common space next to the elevators on the 4th floor of the GCS computer science building, or by appointment.
- Communication: We will primarily use Slack for communication.
- Gradescope: We will use Gradescope for assignment and final project submission.
Course Description and Objectives
Optimization techniques lie at the heart of how machine learning models are trained and developed. In this course, we will explore modern considerations such as privacy, robustness, and fairness, particularly from the standpoint of optimization. We will discuss both recent research on formalizing these societal requirements and algorithmic approaches for achieving them. Optimization-based tools such as differentially private optimization, minimax optimization, and constrained optimization are particularly useful for these problems and will be explored in this context.
Recommended Preparation
Machine learning knowledge (at the level of CSCI 567, CSCI 467, or ISE 529) using Python; basic optimization knowledge; basic probability and linear algebra concepts; and the mathematical maturity to read research papers.
Syllabus and Materials
The following is a tentative schedule. We will post lecture notes and assignments here. Additional related reading for each lecture will be posted on Ed Discussion after the lecture.
| Lecture | Topics | Lecture notes | Homework |
|---|---|---|---|
| 1, 08/27 | Course introduction, ML basics, adversarial examples, finding adversarial examples, adversarial training | Lecture slides | |
| 2, 09/03 | Certified robustness, randomized smoothing, data poisoning<br>Paper presentations:<br>(1) Recent Advances in Algorithmic High-Dimensional Robust Statistics (also see Robustness Meets Algorithms)<br>(2) Jailbreaking Black Box Large Language Models in Twenty Queries | Lecture slides | |
| 3, 09/10 | Undetectable backdoors, tradeoffs in adversarial robustness<br>Paper presentations:<br>(1) Deliberative Alignment: Reasoning Enables Safer Language Models (briefly cover Adversarial Reasoning at Jailbreaking Time)<br>(2) Do ImageNet Classifiers Generalize to ImageNet?<br>(3) Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization (briefly cover Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation) | Lecture slides | |
| 4, 09/17 | Robust and non-robust features, distributional robustness, introduction to algorithmic fairness<br>Paper presentations:<br>(1) Discrimination in the Age of Algorithms<br>(2) First-Person Fairness in Chatbots | Lecture slides | HW1 |
| 5, 09/24 | Fairness notions in classification, individual fairness, group fairness, case study of fairness notions<br>Paper presentations:<br>(1) Performative Prediction<br>(2) The Value of Prediction in Identifying the Worst-Off | Lecture slides | |
| 6, 10/01 | Inherent tradeoffs between notions, individual fairness via uncertainty quantification, multicalibration<br>Paper presentations:<br>(1) Delayed Impact of Fair Machine Learning<br>(2) Avoiding Discrimination through Causal Reasoning<br>(3) Why Language Models Hallucinate (also see Calibrated Language Models Must Hallucinate) | Lecture slides | |
| 7, 10/08 | Review of iteration complexity analysis: smooth convex, strongly convex, and nonconvex | | Project proposal due |
| 8, 10/15 | Privacy and membership inference attacks | | |
| 9, 10/22 | Differential privacy and its basic properties | | HW1 due |
| 10, 10/29 | DP mechanisms and properties of DP | | |
| 11, 11/05 | DP optimization: output perturbation, objective perturbation, and exponential mechanism | | |
| 12, 11/12 | DP optimization: DP-SGD and its variants | | |
| 13, 11/19 | Project presentations | | |
| 14, 12/03 | AI Safety, Alignment | | |
Requirements and Grading
- Two homeworks, worth 15% of the grade. Homeworks must be written in LaTeX and submitted via Gradescope.
- Mini-homeworks, worth 15% of the grade. You should read the presented papers before class so that you can contribute to and get the most out of the presentation and discussion. Part of the course grade is based on this via mini-homeworks. For every lecture day, you can fill out this form before 10 am on that day; more instructions are given on the form. We will drop your lowest mini-homework score when computing the final grade.
- Each student will be required to present a paper in class, which will be worth 15% of the grade.
- The research component consists of a project proposal (5%), a project presentation (15%), and a final project report (25%). An overview of the requirements is given below; detailed instructions will be discussed later. The project will be done in groups of two students.
- The goal of the project is to give you experience in research on topics in trustworthy ML. You are free to pursue a purely theoretical project, a purely empirical project, or some combination of these. You can discuss project ideas with the instructors.
- The project proposal is meant to finalize your project topic and will be a short 1-page report.
- The project presentations will be held in class on 11/19. Your project need not be complete by this stage, but you should have made reasonable progress.
- The final project report must be written in LaTeX and should be 8-9 pages long, excluding references. Part of the report should discuss the related research landscape, and the rest should cover your original work. Please use the LaTeX template based on the NeurIPS format. Write the report so that most students in the class can understand it.
- You are free to use LLMs/generative AI tools to help with your research, but you must disclose how LLMs were used in your project report (this disclosure does not count toward the page limit). Students bear full responsibility for the contents of the report, including LLM-generated content that could be construed as plagiarism or scientific misconduct (e.g., fabrication of facts).
- 5% of the grade will be based on scribing a lecture.
- 5% of the grade will be based on course attendance and participation.