I am an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) at the University of Texas at Austin (UT Austin). I am also a core member of the Machine Learning Laboratory (MLL) and a member of the Wireless Networking & Communications Group (WNCG).
My current research focuses on the theory and applications of convex and non-convex optimization in large-scale machine learning and data science problems.
If you are interested in working with me: Please apply to the ECE graduate program and mention my name in your application. If you are already at UT Austin, please send me an email and we can arrange a time to meet.
ECE Machine Learning Seminars: This academic year I am organizing the ML seminar series in the ECE department, sponsored by UT Austin Foundations of Data Science (an NSF TRIPODS Institute). You can find more information about our ML seminars here.
Selected Recent News
April 2021: Check out my talk on “Towards Communication-Efficient Personalized Federated Learning via Representation Learning and Meta-Learning” in the NSF Workshop on Communication Efficient Distributed Optimization. Slides are available here.
March 2021: Check out my talk on “Exploiting Fast Local Convergence of Second-Order Methods Globally: Adaptive Sample Size Methods” in the Beyond First-Order Methods in Machine Learning Mini-symposium at the SIAM Conference on Computational Science and Engineering (CSE21). Slides are available here.
January 2021: Our paper “Federated Learning with Compression: Unified Analysis and Sharp Guarantees” is accepted to AISTATS 2021.
November 2020: Check out our survey on “Stochastic Quasi-Newton Methods” published in the Proceedings of the IEEE.
September 2020: The following papers of ours are accepted to NeurIPS 2020:
– Task-Robust Model-Agnostic Meta-Learning
– Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking
– Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach
– Submodular Meta-Learning
September 2020: Our paper “Stochastic Conditional Gradient++: (Non-)Convex Minimization and Continuous Submodular Maximization” is accepted for publication in the SIAM Journal on Optimization (SIOPT).
September 2020: Our paper “Convergence Rate of O(1/k) for Optimistic Gradient and Extra-Gradient Methods in Smooth Convex-Concave Saddle Point Problems” is accepted for publication in the SIAM Journal on Optimization (SIOPT).
(For the full list of news, please check the News tab.)