Yiming Dou
Humans perceive the world through multiple senses, from which we form abstract concepts to understand it. These concepts give rise to logical reasoning, which in turn enables remarkable achievements. Inspired by this, my dream is to design human-like multisensory intelligent systems, a goal I divide into four specific problems:
(* indicates equal contribution)
Cross-Sensor Touch Generation
Samanta Rodriguez*, Yiming Dou*, Miquel Oller, Andrew Owens, Nima Fazeli CoRL 2025 (Oral) paper · project page We learn to translate touch signals captured from one touch sensor to another, which allows us to transfer object manipulation policies between sensors.
Hearing Hands: Generating Sounds from Physical Interactions in 3D Scenes
Yiming Dou, Wonseok Oh, Yuqing Luo, Antonio Loquercio, Andrew Owens CVPR 2025 paper · project page · code We make 3D scene reconstruction interactive by predicting the sounds of human hands physically interacting with the scene.
Tactile-Augmented Radiance Fields
Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens CVPR 2024 paper · project page · code We present a visuo-tactile 3D scene representation that can estimate the visual and tactile signals for a given 3D position within the scene.
The ObjectFolder Benchmark: Multisensory Learning with Neural and Real Objects
Ruohan Gao*, Yiming Dou*, Hao Li*, Tanmay Agarwal, Jeannette Bohg, Yunzhu Li, Li Fei-Fei, Jiajun Wu CVPR 2023 paper · project page · code · interactive demo · video We introduce a benchmark suite for multisensory object-centric learning with sight, sound, and touch, along with a dataset of multisensory measurements for real-world objects.
While I work on building multisensory systems, I also enjoy being a multisensory embodied agent outside of work: