PhysiAgent: An Embodied Agent Framework in Physical World
Published in ICML 2025 Workshop on New in ML, 2025
Recommended citation: Wang, Z., Li, J., Zheng, J., Zhang, W., Liu, D., Zheng, Y., Niu, H., Yu, J., Zhan, X. PhysiAgent: An Embodied Agent Framework in Physical World. ICML 2025 Workshop on New in ML.
Abstract
Vision-Language-Action (VLA) models have achieved notable success but often struggle with limited generalization. To address this, integrating generalized Vision-Language Models (VLMs) as assistants to VLAs has emerged as a popular solution. However, current approaches often combine these models in rigid, sequential structures: using VLMs primarily for high-level scene understanding and task planning, and VLAs merely as executors of lower-level actions, leading to ineffective collaboration and poor grounding. In this paper, we propose an embodied agent framework, PhysiAgent, tailored to operate effectively in physical environments. By incorporating monitor, memory, and self-reflection mechanisms, along with lightweight off-the-shelf toolboxes, PhysiAgent offers an autonomous scaffolding framework that prompts VLMs to organize the different components based on real-time proficiency feedback from VLAs, maximally exploiting VLAs’ capabilities. Experimental results demonstrate significant improvements in task-solving performance on complex real-world robotic tasks, showcasing effective self-regulation of VLMs, coherent tool collaboration, and adaptive evolution of the framework during execution. PhysiAgent makes a practical and pioneering effort to integrate VLMs and VLAs, effectively grounding embodied agent frameworks in real-world settings.
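To make the abstract's architecture concrete, the sketch below shows one way such an organizer loop could look in code. It is not the paper's implementation: every interface here (`physiagent_loop`, `vlm.decide`, `monitor.score`, `toolbox.run`, the `Decision` schema) is a hypothetical assumption for illustration. Only the division of roles comes from the abstract: a VLM organizer routing between a VLA executor and monitor, memory, self-reflection, and toolbox components, driven by real-time proficiency feedback.

```python
from dataclasses import dataclass, field


@dataclass
class Decision:
    """What the VLM organizer chose to do this step (hypothetical schema)."""
    kind: str             # "act" | "tool" | "reflect" | "done"
    subgoal: str = ""     # sub-task for the VLA when kind == "act"
    tool_name: str = ""   # toolbox entry when kind == "tool"


@dataclass
class Memory:
    """Rolling log of actions, tool results, and reflection notes."""
    entries: list = field(default_factory=list)

    def add(self, note: str) -> None:
        self.entries.append(note)


def physiagent_loop(vlm, vla, monitor, toolbox, task: str, max_steps: int = 50) -> bool:
    """One possible control loop: per step, the VLM decides whether the VLA
    keeps acting, an off-the-shelf tool runs, or a self-reflection pass is needed."""
    memory = Memory()
    for step in range(max_steps):
        obs = vla.observe()                        # current observation (camera/state)
        proficiency = monitor.score(vla, obs)      # real-time proficiency feedback on the VLA
        decision = vlm.decide(task, obs, proficiency, memory.entries)

        if decision.kind == "act":
            vla.act(obs, decision.subgoal)         # low-level execution by the VLA
            memory.add(f"step {step}: executed '{decision.subgoal}'")
        elif decision.kind == "tool":
            result = toolbox.run(decision.tool_name, obs)  # e.g. a lightweight detector
            memory.add(f"step {step}: {decision.tool_name} -> {result}")
        elif decision.kind == "reflect":
            note = vlm.reflect(task, memory.entries)       # self-reflection on recent history
            memory.add(f"step {step}: reflection: {note}")
        elif decision.kind == "done":
            return True                            # VLM judges the task complete
    return False                                   # step budget exhausted
```

The key design point the abstract emphasizes, reflected above, is that the VLM does not sit in a fixed plan-then-execute pipeline: it re-routes among components each step based on the monitor's proficiency signal, rather than treating the VLA as a passive action executor.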