Zhengqi Li

I am a research scientist at Google DeepMind. Previously, I was a research scientist at Google Research and Adobe Research. I earned my Ph.D. in Computer Science from Cornell University, where I was advised by Prof. Noah Snavely. My work has been recognized with several honors, including the 2020 Google Ph.D. Fellowship, the 2020 Adobe Research Fellowship, the 2021 Baidu Global Top 100 Rising Stars in AI, Best Paper Honorable Mention Awards at CVPR 2019, CVPR 2023, and CVPR 2025, the ICCV 2023 Best Student Paper Award, and the CVPR 2024 Best Paper Award.

  • Resume/CV
  • News

  • Three papers were accepted to CVPR, 2025. MegaSaM was selected for a Best Paper Honorable Mention.
  • I will serve as an Area Chair for CVPR 2025 and ICCV 2025.
  • One paper was accepted to SIGGRAPH, 2024.
  • Generative Image Dynamics was accepted to CVPR, 2024 and received the Best Paper Award.
  • OmniMotion was accepted to ICCV, 2023 and received the Best Student Paper Award.
  • Three papers were accepted to CVPR, 2023. DynIBaR was selected for a Best Paper Honorable Mention.
  • I will serve as an Area Chair for CVPR 2024.
  • Two papers were accepted to ECCV, 2022. One of them was selected for an oral presentation.
  • One paper was accepted to SIGGRAPH, 2022.
  • Three papers were accepted to CVPR, 2022. Two of them were selected for oral presentations.
  • Our paper on space-time view synthesis of dynamic scenes was accepted to CVPR, 2021.
  • I received the 2020 Google PhD Fellowship. Thanks, Google!
  • Our paper on Crowdsampling the Plenoptic Function was accepted to ECCV, 2020 and was selected for an oral presentation.
  • Our extended paper on learning the depth of moving people was accepted to TPAMI.
  • I received the 2020 Adobe Research Fellowship (10 selected worldwide). Thanks, Adobe!
  • Our paper on geometry-aware camera orientation estimation was accepted to ICCV, 2019.
  • Our paper on learning the depth of dynamic scenes with moving people received a CVPR 2019 Best Paper Honorable Mention.
  • Our paper on learning intrinsic image decomposition from CGIntrinsics was accepted to ECCV, 2018.
  • Our paper on unsupervised learning of intrinsic image decomposition was accepted to CVPR, 2018 and was selected for a spotlight oral.
  • Our paper on learning single-view depth prediction from Internet photos was accepted to CVPR, 2018.
  • Publications/Technical Reports

    Media Coverage

  • Two Minute Papers: Google’s New AI: Fly INTO Photos… But Deeper!
  • Two Minute Papers: This AI Learned To Stop Time!
  • Cornell Chronicle: Tool transforms world landmark photos into 4D experiences
  • Google AI Blog: Moving Camera, Moving People: A Deep Learning Approach to Depth Prediction
  • MIT Technology Review: If you did the Mannequin Challenge, you are now advancing robotics research
  • Two Minute Papers: This AI Learns About Movement By Watching Frozen People
  • VentureBeat: Google Leverages on YouTube Videos of Mannequin Challenge to Improve Depth Prediction
  • Seamless: Google announces a deep-learning method that predicts depth from a single monocular camera even when both the people in the scene and the camera are moving
  • 量子位: New Google depth-prediction research: a 3D depth map can be synthesized from a single viewpoint even when both the camera and the people are moving
  • Robinly: Authors of standout CVPR 2019 papers explain their work: vision-language navigation, depth prediction from motion video, and 6D pose estimation
  • 雷锋网: Estimating the depth of moving objects from a single moving camera: Google takes on a new challenge
  • 雷锋网: Learning intrinsic image decomposition from unlabeled Internet time-lapse photography