ARYAN MOKHTARI
I am an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin (UT Austin). I am also a core member of the Machine Learning Laboratory and a member of WNCG (Wireless Networking & Communications Group).
Before joining UT Austin, I was a Postdoctoral Associate in the Laboratory for Information and Decision Systems (LIDS) at MIT. Prior to that, I was a Research Fellow at the Simons Institute for the Theory of Computing at UC Berkeley for the program on Bridging Continuous and Discrete Optimization. I obtained my Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania. For a slightly more formal third-person bio, please check the Bio tab.
My current research focuses on the theory and applications of convex and non-convex optimization in large-scale machine learning and data science problems.
Here are links to my Google Scholar profile, Twitter account, and CV.
If you are interested in working with me, please apply to the ECE graduate program and mention my name in your application. If you are already at UT Austin, please send me an email and we can arrange a time to meet.
ECE Machine Learning Seminars: This academic year I am organizing the ML seminar series in the ECE department, sponsored by UT Austin Foundations of Data Science (an NSF TRIPODS Institute). You can find more information about our ML seminars here.
Selected Recent News
November 2020: Check out our survey on “Stochastic Quasi-Newton Methods,” published in the Proceedings of the IEEE.
September 2020: The following papers have been accepted to NeurIPS 2020:
– “Task-Robust Model-Agnostic Meta-Learning”
– “Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking”
– “Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach”
– “Submodular Meta-Learning”
September 2020: Our paper “Stochastic Conditional Gradient++: (Non-)Convex Minimization and Continuous Submodular Maximization” has been accepted for publication in the SIAM Journal on Optimization (SIOPT).
September 2020: Our paper “Convergence Rate of O(1/k) for Optimistic Gradient and Extra-Gradient Methods in Smooth Convex-Concave Saddle Point Problems” has been accepted for publication in the SIAM Journal on Optimization (SIOPT).
July 2020: Received the NSF award “CIF: Small: Computationally Efficient Second-Order Optimization Algorithms for Large-Scale Learning.”
June 2020: Our paper “Quantized Decentralized Stochastic Learning over Directed Graphs” has been accepted to ICML 2020.
May 2020: Our paper “Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization” has been accepted for publication in the Journal of Machine Learning Research (JMLR).
February 2020: Invited talk on “Communication-Efficient Federated Learning with Periodic Averaging and Quantization” at ITA 2020.
January 2020: The following papers have been accepted to AISTATS 2020:
– “On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms”
– “FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization”
– “A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach”
– “One Sample Stochastic Frank-Wolfe”
– “Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free”
– “Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy”
– “DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate”