About me

I am a permanent researcher at IRIF, affiliated with CNRS and Université Paris Cité.

I received my PhD from the MIT Mathematics Department in 2017, followed by a postdoc at Boston University.

I work on multiple aspects of convex and non-convex optimization. My work so far has combined techniques from continuous optimization and convex geometry to obtain improved algorithms for classical discrete problems.

Presently, I am extremely interested in the science of deep learning, and I seek to use principled mathematical tools to understand the power and limitations of modern ML models.

Interested in working with me? Please apply to our PhD program. Also consider the Parisian Master of Research in Computer Science, an elite research program after which students typically transition to a PhD.

Publications

Interior Point Methods with a Gradient Oracle
Adrian Vladu
ACM SIGACT Symposium on Theory of Computing (STOC 2023)

Quantized Distributed Training of Large Models with Convergence Guarantees
Ilia Markov, Adrian Vladu, Qi Guo, Dan Alistarh
International Conference on Machine Learning (ICML 2023)

CrAM: A Compression-Aware Minimizer
Alexandra Peste, Adrian Vladu, Eldar Kurtic, Christoph H. Lampert, Dan Alistarh
International Conference on Learning Representations (ICLR 2023)

Discrepancy Minimization via Regularization
Lucas Pesenti, Adrian Vladu
ACM-SIAM Symposium on Discrete Algorithms (SODA 2023)

Faster Sparse Minimum Cost Flow by Electrical Flow Localization
Kyriakos Axiotis, Aleksander Mądry, Adrian Vladu
Symposium on Foundations of Computer Science (FOCS 2021)

AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks
Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh
Conference on Neural Information Processing Systems (NeurIPS 2021)

Decomposable Submodular Function Minimization via Maximum Flow
Kyriakos Axiotis, Adam Karczmarz, Anish Mukherjee, Piotr Sankowski, Adrian Vladu
International Conference on Machine Learning (ICML 2021)

Projection-Free Bandit Optimization with Privacy Guarantees
Alina Ene, Huy L. Nguyễn, Adrian Vladu
AAAI Conference on Artificial Intelligence (AAAI 2021)

Adaptive Gradient Methods for Constrained Convex Optimization
Alina Ene, Huy L. Nguyễn, Adrian Vladu
AAAI Conference on Artificial Intelligence (AAAI 2021)

Circulation Control for Faster Minimum Cost Flow in Unit-Capacity Graphs
Kyriakos Axiotis, Aleksander Mądry, Adrian Vladu
Symposium on Foundations of Computer Science (FOCS 2020)

Improved Convergence for ℓ∞ and ℓ1 Regression via Iteratively Reweighted Least Squares
Alina Ene, Adrian Vladu
International Conference on Machine Learning (ICML 2019)
[code]

Submodular Maximization with Matroid and Packing Constraints in Parallel
Alina Ene, Huy L. Nguyễn, Adrian Vladu
ACM SIGACT Symposium on Theory of Computing (STOC 2019)

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
International Conference on Learning Representations (ICLR 2018)
Oral presentation at the Principled Approaches to Deep Learning workshop, ICML 2017

Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods
Michael B. Cohen, Aleksander Mądry, Dimitris Tsipras, Adrian Vladu
Symposium on Foundations of Computer Science (FOCS 2017)

Multidimensional Binary Search for Contextual Decision-Making
Ilan Lobel, Renato Paes Leme, Adrian Vladu
ACM Conference on Economics and Computation (EC 2017)
Invited to the special issue
Appears in Operations Research

Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, Adrian Vladu
ACM SIGACT Symposium on Theory of Computing (STOC 2017)
Invited to the special issue
Appears in Highlights of Algorithms 2018

Negative-Weight Shortest Paths and Unit Capacity Minimum Cost Flow in Õ(m^{10/7} log W) Time
Michael B. Cohen, Aleksander Mądry, Piotr Sankowski, Adrian Vladu
ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)
Appears in Highlights of Algorithms 2017

Tight Bounds for Approximate Carathéodory and Beyond
Vahab S. Mirrokni, Renato Paes Leme, Adrian Vladu, Sam Chiu-wai Wong
International Conference on Machine Learning (ICML 2017)
Oral presentation at the INFORMS Optimization Society Conference 2016

Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More
Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Aaron Sidford, Adrian Vladu
Symposium on Foundations of Computer Science (FOCS 2016)

Improved Parallel Algorithms for Spanners and Hopsets
Gary L. Miller, Richard Peng, Adrian Vladu, Shen Chen Xu
ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2015)

How to Elect a Leader Faster than a Tournament
Dan Alistarh, Rati Gelashvili, Adrian Vladu
ACM Symposium on Principles of Distributed Computing (PODC 2015)

Online Ranking for Tournament Graphs
Claire Mathieu, Adrian Vladu
International Workshop on Approximation and Online Algorithms (WAOA 2010)

Other works

A Parallel Double Greedy Algorithm for Submodular Maximization
Alina Ene, Huy L. Nguyễn, Adrian Vladu

Phenotypic profiling reveals that Candida albicans opaque cells represent a metabolically specialized cell state compared to default white cells
(authors ordered by contribution) Iuliana Ene, Matthew Lohse, Adrian Vladu, Joachim Morschhäuser, Alexander Johnson, Richard Bennett
mBio (2016)