I am an FDS Postdoctoral Fellow at Yale University. Before that, I was a PhD student in the Computer Science Department of the National Technical University of Athens (NTUA), working with Dimitris Fotakis and Christos Tzamos. I completed my undergraduate studies in the School of Electrical and Computer Engineering of NTUA.
I work on statistical and computational learning theory. My research focuses on the design of algorithms with rigorous guarantees for Machine Learning problems. I am particularly interested in:
I am on the 2025/26 job market.
Feel free to contact me at: alkis.kalavasis[at]yale.edu
Instructor: Alkis Kalavasis
This course is about the generalization and stability of Machine Learning (ML) systems. There are various ways to define what it means for a learning algorithm to be stable. The most standard one is inspired by sensitivity analysis, which aims to determine how much variation of the input can influence the output of a system. This abstraction allows one to introduce various notions of stability, such as uniform stability, differential privacy, and replicability. In this course, we investigate these notions of stability, their implications for learning theory, and their surprising connections.
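For concreteness, here is a sketch of the three notions in their standard forms from the literature; the notation below is the usual one and is not necessarily that of the lecture notes.

% Uniform stability (Bousquet-Elisseeff): for all neighboring samples S, S'
% (differing in a single example) and every test point z,
\[
  \sup_{z} \, \bigl| \ell(A(S), z) - \ell(A(S'), z) \bigr| \le \beta .
\]

% Differential privacy (Dwork et al.): for all neighboring samples S, S'
% and all events E over the algorithm's output,
\[
  \Pr[A(S) \in E] \le e^{\varepsilon} \, \Pr[A(S') \in E] + \delta .
\]

% Replicability (Impagliazzo et al.): with shared internal randomness r,
% two independent samples S, S' from the same distribution yield the
% same output with high probability,
\[
  \Pr_{S, S', r} \bigl[ A(S; r) = A(S'; r) \bigr] \ge 1 - \rho .
\]

All three constrain how much the output of $A$ can change under a perturbation of its input, which is the sensitivity-analysis view described above.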
Lecture Notes (PDF)
with Andrew Ilyas, Anay Mehrotra, and Manolis Zampetakis
FOCS (2025, 2024), STOC (2025, 2024), COLT (2025), NeurIPS (2024, 2023, 2022, 2021), ICML (2023), AISTATS (2022, 2021), ICLR (2022), ITCS (2024)