Dr. Xianyuan Zhan is a research assistant professor at the Institute for AI Industry Research (AIR), Tsinghua University. He received a dual Master's degree in Computer Science and Transportation Engineering, and a PhD degree in Transportation Engineering, from Purdue University. Before joining AIR, Dr. Zhan was a data scientist at JD Technology and a researcher at Microsoft Research Asia (MSRA). At JD Technology, he led the research and development of AI-driven industrial system optimization products. He has published more than 50 papers in leading journals and conferences in the fields of Transportation Engineering and Computer Science, and serves as a reviewer for many top journals and conferences in both fields. He is currently a committee member of the China Computer Federation Artificial Intelligence & Pattern Recognition (CCF-AI) Committee.
- Offline deep reinforcement learning
- Offline imitation learning
- Complex system optimization
- Urban computing
- Big data analytics in transportation
We are hiring!!!
Our team is looking for student interns and postdocs at AIR! If you are interested in offline reinforcement learning, offline imitation learning, or decision-making for autonomous driving, please feel free to send me an e-mail at firstname.lastname@example.org!
Recent News and Activities
- Jan. 2023: Our three recent papers "Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization", "When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning", and "Mind the Gap: Offline Policy Optimization for Imperfect Rewards" have been accepted to ICLR 2023!
- Jan. 2023: Our paper "An Efficient Multi-Agent Optimization Approach for Coordinated Massive MIMO Beamforming" on 5G Massive MIMO optimization has been accepted to IEEE ICC 2023.
- Sep. 2022: Our two recent papers "A Policy-Guided Imitation Approach for Offline Reinforcement Learning" and "When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning" have been accepted to NeurIPS 2022!
- Sep. 2022: Our paper "Discriminator-Guided Model-Based Offline Imitation Learning" has been accepted to CoRL 2022.
- Jul. 2022: Our paper "Adversarial Contrastive Learning via Asymmetric InfoNCE" has been accepted to ECCV 2022.
- May 2022: Our paper "Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations" has been accepted to ICML 2022.
- Apr. 2022: Our paper "Model-Based Offline Planning with Trajectory Pruning" has been accepted to IJCAI 2022.
- Jan. 2022: Our recent paper "CSCAD: Correlation Structure-based Collective Anomaly Detection in Complex System" has been accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE).