Alkis Kalavasis


Hi! I am an FDS Postdoctoral Fellow at Yale University. Before that, I was a PhD student in the Computer Science Department of the National Technical University of Athens (NTUA), working with Dimitris Fotakis and Christos Tzamos. I completed my undergraduate studies in the School of Electrical and Computer Engineering of the NTUA, where I was advised by Dimitris Fotakis.

Research: I work on statistical and computational learning theory. My research focuses on the design of algorithms with rigorous guarantees for machine learning problems. In particular, I am interested in algorithms that are robust to data corruptions (adversarial manipulations, censoring, and systematic errors) and algorithms that are stable under changes to the training data (replicability and differential privacy).

Contact me at: alkis.kalavasis [at] yale.edu

My amazing collaborators (in roughly chronological order): Dimitris Fotakis, Christos Tzamos, Konstantinos Stavropoulos, Vasilis Kontonis, Manolis Zampetakis, Jason Milionis, Stratis Ioannidis, Eleni Psaroudaki, Grigoris Velegkas, Amin Karbasi, Hossein Esfandiari, Andreas Krause, Vahab Mirrokni, Constantine Caramanis, Shay Moran, Idan Attias, Steve Hanneke, Andreas Galanis, Anthimos Vardis Kandiros, Ioannis Anagnostides, Tuomas Sandholm, Felix Zhou, Kasper Green Larsen, Ilias Zadik, Anay Mehrotra, Argyris Oikonomou, Katerina Sotiraki.

Publications

Preprints
  1. Characterization of Language Generation with Breadth
    with Anay Mehrotra and Grigoris Velegkas
  2. Transfer Learning Beyond Bounded Density Ratios
    with Ilias Zadik and Manolis Zampetakis
Conference Publications

    2025

  1. On the Limits of Language Generation: Trade-Offs Between Hallucination and Mode Collapse
    with Anay Mehrotra and Grigoris Velegkas
    STOC 2025
  2. Computational Lower Bounds for No-Regret Learning in Normal-Form Games
    with Ioannis Anagnostides and Tuomas Sandholm
    STOC 2025
    [PDF1] Barriers to Welfare Maximization with No-Regret Learning
    [PDF2] Computational Lower Bounds for Regret Minimization in Normal-Form Games
    2024

  1. Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models
    with Amin Karbasi, Argyris Oikonomou, Katerina Sotiraki, Grigoris Velegkas and Manolis Zampetakis
    NeurIPS 2024
  2. On the Computational Landscape of Replicable Learning
    with Amin Karbasi, Grigoris Velegkas and Felix Zhou
    NeurIPS 2024
  3. On Sampling from Ising Models with Spectral Constraints
    with Andreas Galanis and Anthimos Vardis Kandiros
    RANDOM 2024
  4. Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening
    with Anay Mehrotra and Manolis Zampetakis
    COLT 2024
  5. Universal Rates for Real-Valued Regression: Separations between Cut-Off and Absolute Loss
    with Idan Attias, Steve Hanneke, Amin Karbasi and Grigoris Velegkas
    COLT 2024
  6. Replicable Learning of Large-Margin Halfspaces
    with Amin Karbasi, Kasper Green Larsen, Grigoris Velegkas and Felix Zhou
    ICML 2024 Selected as Spotlight
  7. On the Complexity of Computing Sparse Equilibria and Lower Bounds for No-Regret Learning in Games
    with Ioannis Anagnostides, Tuomas Sandholm and Manolis Zampetakis
    ITCS 2024
  8. Learning Hard-Constrained Models with One Sample
    with Andreas Galanis and Anthimos Vardis Kandiros
    SODA 2024

    2023

  1. Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods
    with Constantine Caramanis, Dimitris Fotakis, Vasilis Kontonis and Christos Tzamos
    NeurIPS 2023 Selected as Oral
    [Code]
  2. Optimal Learners for Realizable Regression: PAC Learning and Online Learning
    with Idan Attias, Steve Hanneke, Amin Karbasi and Grigoris Velegkas
    NeurIPS 2023 Selected as Oral
  3. Statistical Indistinguishability of Learning Algorithms
    with Amin Karbasi, Shay Moran and Grigoris Velegkas
    ICML 2023
  4. Replicable Bandits
    with Hossein Esfandiari, Amin Karbasi, Andreas Krause, Vahab Mirrokni and Grigoris Velegkas
    ICLR 2023

    2022

  1. Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes
    with Grigoris Velegkas and Amin Karbasi
    NeurIPS 2022
  2. Learning and Covering Sums of Independent Random Variables with Unbounded Support
    with Konstantinos Stavropoulos and Manolis Zampetakis
    NeurIPS 2022 Selected as Oral
  3. Perfect Sampling from Pairwise Comparisons
    with Dimitris Fotakis and Christos Tzamos
    NeurIPS 2022
  4. Linear Label Ranking with Bounded Noise
    with Dimitris Fotakis, Vasilis Kontonis and Christos Tzamos
    NeurIPS 2022 Selected as Oral
  5. Label Ranking through Nonparametric Regression
    with Dimitris Fotakis and Eleni Psaroudaki
    ICML 2022 Selected for Long Presentation
  6. Differentially Private Regression with Unbounded Covariates
    with Jason Milionis, Dimitris Fotakis and Stratis Ioannidis
    AISTATS 2022

    2021

  1. Efficient Algorithms for Learning from Coarse Labels
    with Dimitris Fotakis, Vasilis Kontonis and Christos Tzamos
    COLT 2021
  2. Aggregating Incomplete and Noisy Rankings
    with Dimitris Fotakis and Konstantinos Stavropoulos
    AISTATS 2021

    2020

  1. Efficient Parameter Estimation of Truncated Boolean Product Distributions
    with Dimitris Fotakis and Christos Tzamos
    COLT 2020
    Algorithmica 2022

Teaching

Stability in Machine Learning: Generalization, Privacy & Replicability

  • Lecture 1 (VC Theory and Uniform Convergence) (Jan 14, 2025) [PDF]
  • Lecture 2 (Generalization Bounds via Algorithmic Stability) (Jan 21, 2025) [PDF]
  • Lecture 3 (Stability of SGD and Randomization Tests) (Jan 28, 2025) [PDF]
  • Lecture 4 (Uniform Convergence Failures and Domain Adaptation) (Feb 4, 2025) [PDF]
  • Lecture 5 (Online and Private PAC Learning) (Feb 11, 2025) [PDF]