Zico Kolter
Professor, Affiliated Faculty

Website: CMU Scholars Page
Office: 8002 Gates and Hillman Centers
Email: zkolter@cs.cmu.edu
Phone: (412) 268-4733
Department: Machine Learning Department; Computer Science Department (Affiliated)

Research Interests: Theory; Artificial Intelligence

Advisees: Victor Akinwande, Zhengyang Geng, Eungyeup Kim, Yash Savani, Joshua Williams, Andy Zou, Yixuan Xu

CSD Courses Taught: 15-780 (Spring 2024)

Research Statement
My group's work focuses broadly on machine learning. Much of our work aims to make deep learning algorithms more robust and safer, and to understand how data shapes how models function. To these ends, we have developed methods for assessing and improving the robustness of deep learning systems, for embedding complex "modules" (such as optimization solvers) within the loop of deep architectures, and for building new types of deep network architectures.

Publications
Preprint: Accelerating Diffusion Models in Offline RL via Reward-Aware Consistency Trajectory Distillation (2025). Duan X, He Y, Tajwar F, Salakhutdinov R, Kolter JZ, Schneider J.
Preprint: Adversarial Attacks on Robotic Vision Language Action Models (2025). Jones EK, Robey A, Zou A, Ravichandran Z, Pappas GJ, Hassani H, Fredrikson M, Kolter JZ.
Preprint: Antidistillation Sampling (2025). Savani Y, Trockman A, Feng Z, Schwarzschild A, Robey A, Finzi M, Kolter JZ.
Preprint: Compute-Optimal LLMs Provably Generalize Better With Scale (2025). Finzi M, Kapoor S, Granziol D, Gu A, De Sa C, Kolter JZ, Wilson AG.
Preprint: Existing Large Language Model Unlearning Evaluations Are Inconclusive (2025). Feng Z, Xu YE, Robey A, Kirk R, Davies X, Gal Y, Schwarzschild A, Kolter JZ.