Biography
Dr. Xianyuan Zhan is a research associate professor at the Institute for AI Industry Research (AIR), Tsinghua University. He received a dual Master’s degree in Computer Science and Transportation Engineering, and a PhD degree in Transportation Engineering, from Purdue University. Before joining AIR, Dr. Zhan was a researcher at Microsoft Research Asia (MSRA) and a data scientist at JD Technology, where he led the research and development of AI-driven industrial system optimization products. He has published more than 80 papers in key journals and conferences in the fields of Transportation Engineering and Computer Science, and serves as a reviewer for many top transportation and computer science journals and conferences. He is currently a committee member of the China Computer Federation-Artificial Intelligence & Pattern Recognition (CCF-AI) Committee.
- Group Website: https://air-dream.netlify.app/
- Group Code Repository: https://github.com/THU-AIR-DREAM
Research Interests
- Real-world reinforcement learning / imitation learning
- Embodied AI
- Autonomous driving
- Complex industrial system optimization
We are hiring!!!
Our team is looking for student interns/postdocs at AIR! If you are interested in the research directions of real-world RL/IL, embodied AI, autonomous driving, or AI alignment/AI safety, please feel free to send me an email at zhanxianyuan@air.tsinghua.edu.cn!
Recent News and Activities
- Jan. 2026: Our four recent papers “X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model”, “Dichotomous Diffusion Policy Optimization”, “Sample Efficient Offline RL via T-Symmetry Enforced Latent State-Stitching”, and “Discrete Diffusion for Reflective Vision-Language-Action Models in Autonomous Driving” have been accepted at ICLR 2026!
- Oct. 2025: Our X-VLA has won First Place in the AGIBOT World Challenge (Manipulation track) @ IROS 2025!
- Oct. 2025: We have released “X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model”! A lightweight yet scalable cross-embodiment VLA model that achieves SOTA performance across all mainstream embodied AI benchmarks! Project page available at https://thu-air-dream.github.io/X-VLA/.
- Sep. 2025: Our three recent papers “Flow Matching-Based Autonomous Driving Planning with Advanced Interactive Behavior Modeling”, “Towards Robust Zero-Shot Reinforcement Learning”, and “Uni-RL: Unifying Online and Offline RL via Implicit Value Regularization” have been accepted at NeurIPS 2025!
- May 2025: Our recent paper “Efficient Robotic Policy Learning via Latent Space Backward Planning” has been accepted at ICML 2025!
- Feb. 2025: Our recent paper “Universal Actions for Enhanced Embodied Foundation Models” has been accepted at CVPR 2025! It enables learning universal actions that transfer across robotic embodiments with different physical characteristics and control interfaces! Project page available at https://2toinf.github.io/UniAct/.
- Jan. 2025: Our two recent papers “Robo-MUTUAL: Robotic Multimodal Task Specification via Unimodal Learning” and “H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps” have been accepted at ICRA 2025!
- Jan. 2025: Our three recent papers “Data Center Cooling System Optimization Using Offline Reinforcement Learning”, “Diffusion-Based Planning for Autonomous Driving with Flexible Guidance”, and “Skill Expansion and Composition in Parameter Space” have been accepted at ICLR 2025!
