Saberlve - Overview


Hi, I'm Shuxun Wang


Incoming M.Eng. @ Zhejiang University (Computer Science)
Research focus: Embodied Intelligence - Vision-Language-Action (VLA)


Affiliation: ZJU CS (M.Eng., incoming) · HIT (B.Eng.)
Focus: Embodied AI · VLM / VLA
Intent: Perception -> Reasoning -> Action · safe, data-efficient autonomy

About me

I'm a research-oriented engineer and incoming graduate student at Zhejiang University (ZJU) CS. My work and interests center on building generalist robot learners that can perceive, reason, and act from multimodal inputs - bridging vision, language, and action for real-world autonomy.

  • Interests: Embodied AI, Vision-Language(-Action) Models, Interactive Reasoning, Policy Learning, Robot Manipulation
  • Methods: Multimodal representation learning, RL/IL/BC, diffusion policies, model-based planning, long-horizon task decomposition
  • Goals: Align foundation models with physical agency; make robots reliable, data-efficient, and safe in open-world settings

I'm currently pivoting into VLA and Embodied AI; previously, I worked on NLP and large language models.

Research directions I care about

  • Embodied Intelligence: grounding language in action and sensing for robust real-world autonomy
  • VLA Systems: unified perception-reasoning-action stacks for long-horizon tasks
  • Robot Learning: data-efficient policies via imitation, reinforcement, and diffusion
  • Interactive Reasoning: tool-augmented agents and multi-step planning

Education

  • Zhejiang University, College of Computer Science and Technology - M.Eng. (incoming)
  • Harbin Institute of Technology - B.Eng.

Previously: NLP & LLM (brief)

  • Large language models: instruction tuning and alignment (e.g., SFT/LoRA; familiarity with RLHF/DPO paradigms), safety and robustness evaluation
  • Prompting and reasoning: task-specific prompting, tool-augmented workflows, multi-step reasoning evaluation
  • Data and training: dataset curation/cleaning, tokenization pipelines, experiment tracking; experience with distributed training frameworks
  • Inference optimization: lightweight fine-tuning, quantization/distillation considerations for deployment
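To make the LoRA line above concrete, here is a minimal sketch of the low-rank adaptation idea (names and shapes are illustrative, not from any particular codebase): a frozen weight `W` is augmented with a trainable low-rank update `B @ A`, scaled by `alpha / r`, and `B` starts at zero so the adapted layer initially matches the pretrained one.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Frozen linear layer plus a low-rank (LoRA-style) update.

    x: (batch, d_in) inputs
    W: (d_out, d_in) frozen pretrained weight
    A: (r, d_in)  trainable down-projection (small random init)
    B: (d_out, r) trainable up-projection (zero init, so the
       adapted model starts identical to the pretrained one)
    """
    scale = alpha / r
    return x @ W.T + scale * (x @ A.T @ B.T)

# Toy check: with B initialized to zeros, the adapter branch
# contributes nothing and outputs equal the frozen layer's.
rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
x = rng.standard_normal((2, d_in))
assert np.allclose(lora_forward(x, W, A, B, r=r), x @ W.T)
```

Only `A` and `B` (r x d_in + d_out x r parameters) would be trained, which is why LoRA-style tuning is so much cheaper than full fine-tuning.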

Get in touch

I'm open to research collaborations and discussion. If you're working on Embodied AI, VLA, or robotics platforms, I'd love to connect.

Contact: Gmail · GitHub


Thanks for visiting - if anything here resonates, feel free to star relevant repos or drop me a message. Let's build reliable embodied agents together.