I am a Research Lead at Meta TBD Labs. Before that, I was a research lead at OpenAI and a core contributor to o1, o3, GPT-4.1, GPT-4.5, and GPT-5. I did my PhD at Stanford University, advised by Percy Liang and Tengyu Ma. Here is my CV.

Model Releases

o3 (livestream and blog post)

o1 (core contributor to the RL algorithms)

GPT-4.1 (research co-lead)

GPT-4.5 (core contributor to pretraining evals)

Selected Publications

(See all publications at this link)

Learning to Reason with LLMs. OpenAI 2024. (core contributor to the RL algorithms)

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang. International Conference on Learning Representations (ICLR Oral) 2022. 1.6% oral acceptance rate. [Slides]

Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation. Kendrick Shen*, Robbie Jones*, Ananya Kumar*, Sang Michael Xie*, Jeff Z. HaoChen, Tengyu Ma, Percy Liang. International Conference on Machine Learning (ICML Long Talk) 2022. 2.1% long talk acceptance rate. [Slides]

Understanding Self-Training for Gradual Domain Adaptation. Ananya Kumar, Tengyu Ma, Percy Liang. International Conference on Machine Learning (ICML) 2020.

Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. Neural Information Processing Systems (NeurIPS Spotlight) 2019. 3.0% spotlight / oral acceptance rate.

Students Advised

I have been lucky to co-advise a number of talented undergraduate and master's students at Stanford, who have written some very insightful papers:

I have also mentored or proposed research directions for a number of fantastic PhD students who have taught me a lot: