Biography
Dr. Xianyuan Zhan is a research assistant professor at the Institute for AI Industry Research (AIR), Tsinghua University. He received a dual Master's degree in Computer Science and Transportation Engineering, and a PhD in Transportation Engineering, from Purdue University. Before joining AIR, Dr. Zhan was a data scientist at JD Technology and a researcher at Microsoft Research Asia (MSRA). At JD Technology, he led the research and development of AI-driven industrial system optimization products. He has published more than 40 papers in leading journals and conferences in the fields of Transportation Engineering and Computer Science, and serves as a reviewer for many top journals and conferences in both fields. He is currently a committee member of the China Computer Federation Artificial Intelligence & Pattern Recognition (CCF-AI) Committee.
Research Interests
- Offline deep reinforcement learning
- Complex system optimization
- Urban computing
- Big data analytics in transportation
- Complex networks
We are hiring!
Our team at AIR is looking for student interns, engineers, and research assistants at all levels! If you are interested in offline reinforcement learning or urban computing/data-driven transportation modeling, please feel free to send me an e-mail at zhanxianyuan@air.tsinghua.edu.cn!
Recent News and Activities
- May 2022: Our paper “Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations” has been accepted at ICML 2022.
- Apr. 2022: Our paper “Model-Based Offline Planning with Trajectory Pruning” has been accepted at IJCAI-ECAI 2022.
- Jan. 2022: Our recent paper “CSCAD: Correlation Structure-based Collective Anomaly Detection in Complex System” has been accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE).
- Dec. 2021: Our two papers “DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning” and “Constraints Penalized Q-Learning for Safe Offline Reinforcement Learning” have been accepted at AAAI 2022.
- Oct. 2021: Our two recent papers “Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations” and “Model-Based Offline Planning with Trajectory Pruning” have been accepted at the NeurIPS Deep RL Workshop and the NeurIPS Offline RL Workshop.
- Aug. 2021: My interview on “TalkRL: The Reinforcement Learning Podcast” with Robin Chauhan, covering our work on DeepThermal, is now available online. An Apple Podcasts link is also available here.
- May 2021: Our paper “Network-Wide Traffic States Imputation Using Self-interested Coalitional Learning” has been accepted at KDD 2021.