
Sujay Sanghavi


  • Bettie Margaret Smith Professor of Engineering, ECE
  • Director, NSF TRIPODS Institute at UT Austin
  • Principal Research Scientist and Amazon Scholar, Amazon
  • Associate Director, Amazon Science Hub at UT Austin
  • Scientific Board, Center for Generative AI
  • Core Member, NSF AI Institute for the Foundations of Machine Learning
  • Member, Wireless Networking and Communications Group


Email: sanghavi@mail.utexas.edu, Office: EER 6.824


My research interests are in machine learning, with a current focus on understanding and improving the architecture and training of large-scale models for language and representation learning. I am also broadly interested in the theoretical foundations of machine learning.

Over the past decade, I have also spent time in industry: I have been a Visiting Scientist at Google Research, and a senior quant and founding member of an algorithmic trading team at the hedge fund Engineers Gate. Since 2019, I have worked in Amazon’s Search org, first as a Principal Research Scientist and now as an Amazon Scholar.


News

  • (May 2025) ICML 2025 Papers:
    • Upweighting Easy Samples in Fine-Tuning Mitigates Forgetting
    • Learning Mixtures of Experts with EM: A Mirror Descent Perspective
    • Retraining with Predicted Hard Labels Provably Increases Model Accuracy
    • Geometric Median Matching for Robust k-Subset Selection from Noisy Data
  • (May 2025) Congratulations to Rudrajit Das and Anish Acharya for graduating with their PhDs! Rudrajit is headed to Google Research and Anish to AWS (Amazon).
  • (May 2025) Congratulations to Atula Tejaswi and Vijay Lingam for completing their MS degrees! Atula is continuing at UT for a PhD, while Vijay heads to Amazon Q.
  • (Mar 2025) Invited talk in Workshop on Theoretical Perspectives on LLMs [slides]
  • (Jan 2025) ICLR 2025 Papers:
    • Enhancing Language Model Agents using Diversity of Thoughts [project page]
    • Infilling Score: A Pretraining Data Detection Algorithm for Large Language Models
