
Aryan Mokhtari

 

  • Assistant Professor, ECE, UT Austin (Jack Kilby/Texas Instruments Fellow)
  • Visiting Faculty Researcher, Google Research
  • Core Member, NSF AI Inst. for Foundations of Machine Learning (IFML)
  • Core Member, NSF AI Inst. for Future Edge Networks & Distributed Intelligence (AI-EDGE)
  • Google Scholar / CV 

 

 


Research Interests:

  • Optimization Theory and Algorithms:
    • Convex and Nonconvex Optimization, Minimax Optimization, Bilevel Optimization, Parameter-free and Adaptive Optimization
  • Theoretical Foundations of ML and AI:
    • Representation Learning, Multi-task Learning, In-Context Learning, Machine Unlearning, Efficient Fine-Tuning

News

    • May ’25: Paper at COLT 2025:
      • Provable Complexity Improvement of AdaGrad over SGD: Upper and Lower Bounds in Stochastic Non-Convex Optimization [pdf]
    • May ’25: Paper at ICML 2025:
      • Learning Mixtures of Experts with EM [pdf]
    • Feb. ’25: Promoted to tenured Associate Professor at UT Austin ECE, effective August 2025.
    • Jan. ’25: Paper at STOC 2025:
      • Improved Complexity for Smooth Nonconvex Optimization: A Two-Level Online Learning Approach with Quasi-Newton Methods [pdf]
    • Jan. ’25: Paper at ICLR 2025:
      • On the Crucial Role of Initialization for Matrix Factorization [pdf]
    • Jan. ’25: Joined Google Research as a Visiting Faculty Researcher.
    • Dec. ’24: Plenary talk at the Optimization for ML Workshop at NeurIPS. Slides are available here.
    • Sep. ’24: Papers at NeurIPS 2024:
      • Non-asymptotic Global Convergence Analysis of BFGS with the Armijo-Wolfe Line Search [pdf] (Spotlight)
      • In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness [pdf] (Spotlight)
      • Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization [pdf]
      • An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem [pdf]
      • Stochastic Newton Proximal Extragradient Method [pdf]
    • July ’24: Qiujiang (joining Goldman Sachs), Isidoros (joining Meta), Liam (joining Snap Research), and Devyani (joining Goldman Sachs) have graduated! Wishing them all the best as they embark on their new journeys!
    • Apr. ’24: Google Research Scholar Award. (Announcement)
    • Jan. ’24: NSF CAREER Award. (Announcement)

Support

My research has been supported by an NSF CAREER Award CCF-2338846, NSF Grants CCF-2007668 and ECCS-2127697, an ARO Early Career Program (ECP) Award, a Google Research Scholar Award, the NSF AI Institute for Foundations of Machine Learning (IFML), the NSF AI Institute for Future Edge Networks and Distributed Intelligence (AI-EDGE), and the Machine Learning Laboratory at UT Austin.

     
