I am an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin (UT Austin). I am also a member of UT-MINDS (Machine INtelligence & Decision Systems) and WNCG (Wireless Networking & Communications Group).
Before joining UT Austin, I was a Postdoctoral Associate in the Laboratory for Information and Decision Systems (LIDS) at MIT. Prior to that, I was a Research Fellow at the Simons Institute for the Theory of Computing at UC Berkeley for the program on Bridging Continuous and Discrete Optimization. I obtained my Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania. For a slightly more formal third-person bio, please check the Bio tab.
My current research focuses on the theory and applications of convex and non-convex optimization in large-scale machine learning and data science problems.
If you are interested in working with me: please apply to the ECE graduate program and mention my name in your application. If you are already at UT Austin, please send me an email and we can arrange a time to meet.
ECE Machine Learning Seminars: This academic year I am organizing the ML seminar series at the ECE department, sponsored by UT Austin Foundations of Data Science (an NSF TRIPODS Institute). You can find more information about our ML seminars here.
Selected Recent News
- July 2020: NSF Award “CIF: Small: Computationally Efficient Second-Order Optimization Algorithms for Large-Scale Learning”
- June 2020: Our paper “Quantized Push-sum for Gossip and Decentralized Optimization over Directed Graphs” was accepted to ICML 2020.
- May 2020: Our paper “Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization” was accepted for publication in the Journal of Machine Learning Research (JMLR).
- February 2020: Invited talk on “Communication-Efficient Federated Learning with Periodic Averaging and Quantization” at ITA 2020.
- January 2020: The following papers of ours were accepted to AISTATS 2020:
– “On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms”
– “FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization”
– “A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach”
– “One Sample Stochastic Frank-Wolfe”
– “Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free”
– “Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy”
– “DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate”
- December 2019: Invited talk on “Understanding the Role of Optimism in Minimax Optimization” in the “Bridging Game Theory and Deep Learning” Workshop at NeurIPS 2019. [Slides]
- September 2019: The following papers of ours were accepted to NeurIPS 2019:
– “Robust and Communication-Efficient Collaborative Learning”
– “Stochastic Continuous Greedy++: When Upper and Lower Bounds Match”
(For the full list of news, please check the News tab.)