May 2022: Check out my talk on “The Connections between MAML and Representation Learning” at ITA 2022.
May 2022: Our following papers are accepted to ICML 2022:
May 2022: Our following paper is accepted to COLT 2022:
January 2022: Our following paper is accepted to AISTATS 2022 for Oral Presentation:
Sept. 2021: Our following papers are accepted to NeurIPS 2021:
Sept. 2021: Honored to be appointed as a Texas Instruments/Kilby Fellow.
Aug. 2021: I’m grateful to the NSF for their generous support of my work. [Award Link]
July 2021: Thrilled to be part of the team for the NSF AI Institute for Future Edge Networks and Distributed Intelligence (AI-EDGE). [News release]
July 2021: Check out my talk on “Optimistic Methods for Minimax Optimization” in the SIAM Conference on Optimization (OP21).
May 2021: Honored to receive the ARO Early Career Program (ECP) Award (previously known as ARO YIP). [News release]
May 2021: Our paper “Exploiting Shared Representations for Personalized Federated Learning” is accepted to ICML 2021.
April 2021: Check out my talk on “Towards Communication-Efficient Personalized Federated Learning via Representation Learning and Meta-Learning” in the NSF Workshop on Communication Efficient Distributed Optimization. [Video]
March 2021: Check out my talk on “Exploiting Fast Local Convergence of Second-Order Methods Globally: Adaptive Sample Size Methods” in the Beyond First-Order Methods in Machine Learning Mini-symposium at the SIAM Conference on Computational Science and Engineering (CSE21). Slides are available here.
February 2021: New paper out: “Exploiting Shared Representations for Personalized Federated Learning”.
February 2021: New paper out: “Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks”.
January 2021: Our paper “Federated Learning with Compression: Unified Analysis and Sharp Guarantees” is accepted to AISTATS 2021.
December 2020: New paper out “Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity”
November 2020: Check out our survey on “Stochastic Quasi-Newton Methods” published in the Proceedings of the IEEE.
November 2020: Invited talk at 2020 INFORMS Annual Meeting on “Convergence Theory of Gradient-based Model-Agnostic Meta-Learning”
November 2020: I organized a session on “Federated Learning” at 2020 INFORMS Annual Meeting.
October 2020: New paper out “Why Does MAML Outperform ERM? An Optimization Perspective”
October 2020: Our paper “High-Dimensional Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation” is accepted for publication in the IEEE Transactions on Signal Processing (TSP).
September 2020: Our following papers are accepted to NeurIPS 2020
– Task-Robust Model-Agnostic Meta-Learning
– Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking
– Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach
– Submodular Meta-Learning
September 2020: Our paper “Stochastic Conditional Gradient++: (Non-)Convex Minimization and Continuous Submodular Maximization” is accepted for publication in the SIAM Journal on Optimization (SIOPT).
September 2020: Our paper “Convergence Rate of O(1/k) for Optimistic Gradient and Extra-Gradient Methods in Smooth Convex-Concave Saddle Point Problems” is accepted for publication in the SIAM Journal on Optimization (SIOPT).
July 2020: Received an NSF award: “CIF: Small: Computationally Efficient Second-Order Optimization Algorithms for Large-Scale Learning”.
July 2020: New paper out: “Submodular Meta-Learning”
July 2020: New paper out: “Federated Learning with Compression: Unified Analysis and Sharp Guarantees”
June 2020: New paper out: “Safe Learning under Uncertain Objectives and Constraints”
June 2020: Our paper “Quantized Decentralized Stochastic Learning over Directed Graphs” is accepted to ICML 2020.
May 2020: Our paper “Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization” is accepted for publication in the Journal of Machine Learning Research (JMLR).
March 2020: New paper out: “Non-asymptotic Superlinear Convergence of Standard Quasi-Newton Methods”
February 2020: New paper out: “Distribution-Agnostic Model-Agnostic Meta-Learning”
February 2020: New paper out: “Personalized Federated Learning: A Meta-Learning Approach”
February 2020: New paper out: “Provably Convergent Policy Gradient Methods for Model-Agnostic Meta-Reinforcement Learning”
February 2020: Delivered an invited talk on “Communication-Efficient Federated Learning with Periodic Averaging and Quantization” at ITA 2020.
January 2020: Seven papers accepted to AISTATS 2020
– “On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms”
– “FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization”
– “A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach”
– “One Sample Stochastic Frank-Wolfe”
– “Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free”
– “Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy”
– “DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate”
December 2019: Delivered an invited talk in the “Bridging Game Theory and Deep Learning” Workshop at NeurIPS 2019 on “Understanding the Role of Optimism in Minimax Optimization”. [Slides]
December 2019: Attending NeurIPS 2019 to present our following papers:
– “Robust and Communication-Efficient Collaborative Learning”
– “Stochastic Continuous Greedy++: When Upper and Lower Bounds Match”
October 2019: Attending the INFORMS Annual Meeting to:
– Chair the session on “Min-Max Optimization”
– Deliver an invited talk in the session on “Large Scale and Distributed Optimization”
October 2019: New paper out “One Sample Stochastic Frank-Wolfe”
October 2019: New paper out “FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization”
October 2019: Our paper “A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization” is accepted for publication in Trans. on Signal Processing.
September 2019: Our following papers are accepted to NeurIPS 2019
– “Robust and Communication-Efficient Collaborative Learning”
– “Stochastic Continuous Greedy++: When Upper and Lower Bounds Match”
August 2019: New paper out “On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms”
August 2019: Officially started at UT Austin as an assistant professor.
July 2019: New paper out “Robust and Communication-Efficient Collaborative Learning”
July 2019: Our paper “An Exact Quantized Decentralized Gradient Descent Algorithm” is accepted for publication in Trans. on Signal Processing.
May 2019: New paper out “Proximal Point Approximations Achieving a Convergence Rate of O(1/k) for Smooth Convex-Concave Saddle Point Problems: Optimistic Gradient and Extra-gradient Methods”
February 2019: Delivered the talk “Achieving Acceleration via Direct Discretization of Heavy-Ball Ordinary Differential Equation” at the ITA 2019.
February 2019: New paper out “Stochastic Conditional Gradient++”
February 2019: New paper out “Quantized Frank-Wolfe: Communication-Efficient Distributed Optimization”
January 2019: New paper out “A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach”.
December 22, 2018: Our paper “Efficient Nonconvex Empirical Risk Minimization via Adaptive Sample Size Methods” is accepted to AISTATS 2019.
November 7, 2018: Our paper “A Newton-based Method for Nonconvex Optimization with Fast Evasion of Saddle Points,” is accepted for publication in SIAM Journal on Optimization.
November 6, 2018: Delivered the talk “Escaping saddle points in constrained optimization” at the 2018 INFORMS Annual Meeting.
October 29, 2018: I’m co-chairing two sessions on “Large-scale Optimization” and “Optimization for Machine Learning” at the 2018 INFORMS Annual Meeting. Please stop by the sessions if you are at the conference.
September 5, 2018: I’m honored to receive the Joseph and Rosaline Wolf Award for Best Doctoral Dissertation granted by the Department of Electrical and Systems Engineering of the University of Pennsylvania.
September 4, 2018: Our following papers are accepted for spotlight presentation at NIPS 2018:
- Direct Runge-Kutta Discretization Achieves Acceleration
- Escaping Saddle Points in Constrained Optimization
September 4, 2018: New paper out “A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization”.
August 14, 2018: Delivered the talk “Achieving Acceleration via Direct Discretization of Heavy-Ball Ordinary Differential Equation” at the DIMACS/TRIPODS workshop on Optimization and Machine Learning.
July 13, 2018: Our following papers are accepted to CDC 2018:
- Quantized Decentralized Consensus Optimization
- A Newton Method for Faster Navigation in Cluttered Environments
July 2, 2018: New paper out “Quantized Decentralized Consensus Optimization”.
June 4, 2018: My Ph.D. dissertation has been selected as the Penn nominee for the 2018 CGS/ProQuest Distinguished Dissertation Award in Mathematics, Physical Sciences, and Engineering.
May 11, 2018: Our following papers are accepted to ICML 2018:
- Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
- Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
May 2, 2018: New paper out “Direct Runge-Kutta Discretization Achieves Acceleration”.
April 30, 2018: I will serve on the Technical Program Committee (TPC) of the symposium on “Distributed Learning and Optimization over Networks” for GlobalSIP 2018. Please consider submitting your work to this symposium.
April 25, 2018: New paper out “Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization”.
March 23-24, 2018: I’m chairing two sessions on “Submodular Optimization” and “Nonconvex Optimization” at the INFORMS Optimization Society Conference. Please stop by the sessions if you are around. You can find the program here.
February 11, 2018: New paper out “Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings”.
February 1, 2018: Our paper “IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate,” is accepted for publication in SIAM Journal on Optimization.
January 29, 2018: Our paper “Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate” is accepted for publication in SIAM Journal on Optimization.
January 29, 2018: Our paper “Parallel Stochastic Successive Non-convex Approximation Method for Large-scale Dictionary Learning” is accepted to ICASSP 2018.
January 1, 2018: Officially started as a Postdoctoral Associate in the Laboratory for Information and Decision Systems (LIDS) at MIT.
December 22, 2017: Our paper “Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap” is accepted to AISTATS 2018.
December 22, 2017: Our paper “Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method” is accepted to AISTATS 2018.
December 8, 2017: Presented our paper “Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap” at NIPS Workshop on Discrete Structures in Machine Learning (DISCML).
December 4, 2017: Presented our paper “First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization” at NIPS 2017.
November 22, 2017: Our paper entitled “Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap” is accepted for presentation at the Discrete Structures in Machine Learning (DISCML) Workshop at NIPS 2017.
November 6, 2017: New paper out “Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap”
October 29, 2017: Alejandro and I have organized a session on “Distributed Methods for Large-Scale Optimization” with four fantastic speakers for Asilomar 2017. Please drop by if you are attending the conference. For more information please check the program.
October 24, 2017: Presented our work entitled “Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate” at the 2017 INFORMS annual meeting. The slides are available here.
September 13, 2017: Hamed and I are organizing a session on “Submodular Maximization” for the 2018 INFORMS Optimization Society Conference.
September 12, 2017: Alejandro, Santiago, and I are organizing a session on “Algorithms for Nonconvex Optimization” for the 2018 INFORMS Optimization Society Conference.
September 4, 2017: Our paper “First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization” is accepted for presentation at the 2017 Conference on Neural Information Processing Systems (NIPS).
September 2, 2017: New paper out “First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization”.
August 22, 2017: Presented our work on “Incremental Quasi-Newton Methods with Local Superlinear Convergence Rate” at the DIMACS Workshop on Distributed Optimization, Information Processing, and Learning.
August 16, 2017: Started at the Simons Institute for the Theory of Computing as a Research Fellow for the program on “Bridging Continuous and Discrete Optimization”.
July 27, 2017: New paper out “A Second Order Method for Nonconvex Optimization”.
July 24, 2017: Successfully defended my Ph.D. thesis entitled “Efficient Methods for Large-Scale Empirical Risk Minimization”.
June 29, 2017: Our poster on “Incremental Quasi-Newton Methods with Local Superlinear Convergence Rate” is accepted for presentation at the DIMACS Workshop on Distributed Optimization, Information Processing, and Learning.
May 24, 2017: Check out our new paper “Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method”.
April 25, 2017: Our paper “Decentralized Quasi-Newton Methods” has been among the top 50 most frequently accessed documents in IEEE Transactions on Signal Processing for the month of March 2017.
March 28, 2017: Our paper “Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization” is accepted for publication in the IEEE Transactions on Automatic Control.
March 20, 2017: I will serve on the Technical Program Committee (TPC) of the Symposium on “Distributed Optimization and Resource Management over Networks” for the 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP 2017).
March 8, 2017: Presented our recent work on “An Incremental Quasi-Newton Method with a Local Superlinear Convergence Rate” at ICASSP 2017. The slides are available here. You can also find the posters for my other ICASSP papers below.
- A Double Incremental Aggregated Gradient Method with Linear Convergence Rate for Large-Scale Optimization
- A Diagonal-Augmented Quasi-Newton Method with Application to Factorization Machines
- Large-Scale Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation
February 26, 2017: Our paper “Stochastic Averaging for Constrained Optimization with Application to Online Resource Allocation” is accepted for publication in the IEEE Transactions on Signal Processing.
February 15, 2017: Presented our recent work on “Incremental Quasi-Newton Methods with Local Superlinear Convergence Rate” at the ITA 2017 Graduation Day. Please find the slides here.
February 7, 2017: Alejandro presented our joint work on “High Order Methods for Empirical Risk Minimization” at IPAM Workshop on Emerging Wireless Networks organized by UCLA. The slides are available here.
February 2, 2017: New paper out “IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate”.
January 31, 2017: I received a Research Fellowship from the Simons Institute for the Theory of Computing at UC Berkeley for the program on “Bridging Continuous and Discrete Optimization” (Fall 2017).
January 28, 2017: Our paper “Decentralized Quasi-Newton Methods” is accepted for publication in the IEEE Transactions on Signal Processing.
January 27, 2017: Tech Talk at Google Research, Mountain View, CA. Title: “High Order Methods for Empirical Risk Minimization”.
January 24, 2017: Alejandro and I will organize a session on “Distributed Optimization and Learning” for the 2017 Asilomar Conference on Signals, Systems, and Computers.
January 4, 2017: Nominated by the ESE Department of UPenn to give an oral presentation at the ITA 2017 Graduation Day.
December 25, 2016: Our paper “Network Newton Distributed Optimization Methods” has been among the top 50 most frequently accessed documents in IEEE Transactions on Signal Processing for the month of November 2016.
December 14, 2016: Presented “Online Optimization in Dynamic Environments: Improved Regret Rates for Strongly Convex Problems” at the 55th IEEE Conference on Decision and Control. My other CDC papers “A Decentralized Quasi-Newton Method for Dual Formulations of Consensus Optimization” and “A Decentralized Second-Order Method for Dynamic Optimization” were presented by Mark and Wei, respectively.
December 12, 2016: The following papers are accepted for publication in Proc. of ICASSP 2017.
- An Incremental Quasi-Newton Method with a Local Superlinear Convergence Rate
- A Double Incremental Aggregated Gradient Method with Linear Convergence Rate for Large-Scale Optimization
- Large-Scale Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation
- A Diagonal-Augmented Quasi-Newton Method with Application to Factorization Machines
December 7-9, 2016: I’m in Washington DC, attending the 2016 Global Conference on Signal and Information Processing (GlobalSIP). Below you can find the list of my papers at GlobalSIP 2016.
- A Data-driven Approach to Stochastic Network Optimization
- Decentralized Constrained Consensus Optimization with Primal-Dual Splitting Projection
- An Asynchronous Quasi-Newton Method for Consensus Optimization
December 5, 2016: The GAPSA Research Travel Grant Committee awarded me financial support for my expenses at the 2016 Asilomar Conference on Signals, Systems, and Computers.
November 21, 2016: Ph.D. Proposal, Title: “Efficient Methods for Large-Scale Optimization”. The slides are available here.
November 16, 2016: Presented “DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers” at the INFORMS Annual Meeting in Nashville.
November 7, 2016: Presented “ESOM: Exact Second-Order Method for Consensus Optimization” at the 50th Asilomar Conference on Signals, Systems, and Computers. Alec presented our other paper “Doubly Stochastic Algorithms for Large-Scale Optimization”.
November 1, 2016: New paper out “Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate”.
October 7, 2016: New paper out “Stochastic Averaging for Constrained Optimization with Application to Online Resource Allocation”.