I am an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin (UT Austin). I am also a member of UT-MINDS (Machine INtelligence & Decision Systems) and WNCG (Wireless Networking & Communications Group).
Before joining UT Austin, I was a Postdoctoral Associate in the Laboratory for Information and Decision Systems (LIDS) at MIT. Prior to that, I was a Research Fellow at the Simons Institute for the Theory of Computing at UC Berkeley for the program on Bridging Continuous and Discrete Optimization. I obtained my Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania. For a slightly more formal third-person bio, please check the Bio tab.
My current research focuses on the theory and applications of convex and non-convex optimization for large-scale machine learning and data science problems.
If you are interested in working with me: please apply to the ECE graduate program and mention my name in your application. If you are already at UT Austin, please send me an email and we can arrange a time to meet.
ECE Machine Learning Seminars: This academic year I am organizing the ML seminar series at the ECE department, sponsored by UT Austin Foundations of Data Science (an NSF TRIPODS Institute). You can find more information about our ML seminars here.
- February 2020: New paper out: “Distribution-Agnostic Model-Agnostic Meta-Learning”
- February 2020: New paper out: “Personalized Federated Learning: A Meta-Learning Approach”
- February 2020: New paper out: “Provably Convergent Policy Gradient Methods for Model-Agnostic Meta-Reinforcement Learning”
- February 2020: Delivered an invited talk on “Communication-Efficient Federated Learning with Periodic Averaging and Quantization” at ITA 2020.
- January 2020: Seven papers accepted to AISTATS 2020:
– “On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms”
– “FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization”
– “A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach”
– “One Sample Stochastic Frank-Wolfe”
– “Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free”
– “Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy”
– “DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate”
- December 2019: Delivered an invited talk on “Understanding the Role of Optimism in Minimax Optimization” in the “Bridging Game Theory and Deep Learning” Workshop at NeurIPS 2019. [Slides]
- October 2019: Attending the INFORMS Annual Meeting to:
– Chair the session on “Min-Max Optimization”
– Deliver an invited talk in the session on “Large Scale and Distributed Optimization”
- October 2019: Our paper “A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization” has been accepted for publication in IEEE Transactions on Signal Processing.
- September 2019: The following papers have been accepted to NeurIPS 2019:
– “Robust and Communication-Efficient Collaborative Learning”
– “Stochastic Continuous Greedy++: When Upper and Lower Bounds Match”
(For the full list of news, please check the News tab.)