Alkis Kalavasis 📚 Google Scholar
News
  • 🗓 July 2025: Organizing the Reliable ML with Unreliable Data workshop at NeurIPS 2025 with Andrew Ilyas, Anay Mehrotra, and Manolis Zampetakis
  • 🗓 June 2025: Our paper on causal inference received the Best Paper Award at COLT 2025
  • 🗓 April 2025: Our paper on diffusion models and distribution learning received the Best Short Paper Award at the ICLR 2025 DeLTa workshop
  • 🗓 April 2025: Completed teaching my Spring 2025 course on Stability in Machine Learning. The lecture notes can be found here

I am an FDS Postdoctoral Fellow at Yale University. Before that, I was a PhD student in the Computer Science Department of the National Technical University of Athens (NTUA), working with Dimitris Fotakis and Christos Tzamos. I completed my undergraduate studies in the School of Electrical and Computer Engineering at NTUA.

I work on statistical and computational learning theory. My research focuses on the design of algorithms with rigorous guarantees for Machine Learning problems. I am particularly interested in:

  • Learning from imperfect data: designing efficient algorithms, robust to imperfect data, for problems arising in Machine Learning and Econometrics, with applications to Causal Inference.
  • Generalization and stability: understanding the generalization properties of algorithms and their stability under changes in the training data (replicability, privacy, memorization, learning curves).
  • Generative modeling: proving rigorous guarantees for generative models and designing practical methods for diffusion and language models.

I am on the 2025/26 job market.

Feel free to contact me at: alkis.kalavasis[at]yale.edu

Recent Publications

Pre-prints

Teaching

Stability in Machine Learning: Generalization, Privacy & Replicability

Instructor: Alkis Kalavasis

This course is about the generalization and stability of Machine Learning (ML) systems. There are various ways to define what it means for a learning algorithm to be stable. The most standard one is inspired by sensitivity analysis, which aims to determine how much variation of the input can influence the output of a system. This abstract viewpoint allows one to introduce various notions of stability, such as uniform stability, differential privacy, and replicability. In this course, we investigate these notions of stability, their implications for learning theory, and their surprising connections.
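For orientation, here are the standard textbook forms of these three notions, stated as a reference sketch rather than the precise versions used in the course (consult the lecture notes for those). Throughout, S and S' denote neighboring datasets differing in a single example.

% Uniform stability: an algorithm A is beta-uniformly stable if, for every
% test point z, its loss changes by at most beta when one example changes.
\[
  \sup_{z}\; \bigl|\,\ell(A(S), z) - \ell(A(S'), z)\,\bigr| \;\le\; \beta .
\]
% Differential privacy: a randomized A is (epsilon, delta)-DP if, for every
% event E in the output space,
\[
  \Pr[A(S) \in E] \;\le\; e^{\varepsilon}\,\Pr[A(S') \in E] + \delta .
\]
% Replicability (Impagliazzo-Lei-Pitassi-Sorrell): A is rho-replicable if,
% over two i.i.d. samples and shared internal randomness r,
\[
  \Pr_{S, S' \sim \mathcal{D}^n,\; r}\bigl[A(S; r) = A(S'; r)\bigr] \;\ge\; 1 - \rho .
\]

Note how all three definitions fit the sensitivity-analysis template: each bounds, in a different metric, how much the output of A can move when the input dataset is perturbed.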

Lecture Notes (PDF)

Recent Talks

Service

  • Reviewing: FOCS (2025, 2024), STOC (2025, 2024), COLT (2025), NeurIPS (2024, 2023, 2022, 2021), ICML (2023), AISTATS (2022, 2021), ICLR (2022), ITCS (2024)