Hangxin Liu

I am a research scientist and team lead of the Robotics Lab at the Beijing Institute for General Artificial Intelligence. In the Robotics Lab, my colleagues and I hypothesize that fundamental representations or cognitive architectures underpin the intelligent behaviors of humans. By uncovering and reproducing such an architecture, we hope to enable long-term human-robot shared autonomy by bridging

  • Perception (how to extract and organize more expressive symbols from dense sensory signals);
  • Reasoning (how to utilize that abstract information for higher-level skills, which in turn facilitates perception);
  • Task and Motion Planning (how to effectively act and react).

I received a Ph.D. in Computer Science and an M.S. in Mechanical Engineering from UCLA, working in the Center for Vision, Cognition, Learning, and Autonomy with Professor Song-Chun Zhu. My work at UCLA was supported by DARPA SIMPLEX, DARPA XAI, ONR MURI, and ONR Cognitive Robot.

Before joining VCLA, I graduated with a B.S. in Mechanical Engineering and a B.S. in Computer Science with a Mathematics minor from Virginia Polytechnic Institute and State University (Virginia Tech) in 2016.

CV  /  Google Scholar

Research

I'm interested in robot perception, planning, learning, human-robot interaction, and virtual and augmented reality. Representative papers are highlighted.

Part-Level Scene Reconstruction Affords Robot Interaction
Zeyu Zhang*, Lexing Zhang*, Zaijin Wang, Ziyuan Jiao, Muzhi Han, Yixin Zhu, Song-Chun Zhu, Hangxin Liu
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023
Paper / Video (YouTube, bilibili) / Project Page / Code

We extend our work in IJCV22 and ICRA21 by segmenting the objects in the panoptic map into parts and replacing those parts with primitive shapes, resulting in more realistic, functionally equivalent scenes.

Learning a Causal Transition Model for Object Cutting
Zeyu Zhang*, Muzhi Han*, Baoxiong Jia, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu, Hangxin Liu
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023
Paper / Video (YouTube, bilibili) / Project Page

An attributed stochastic grammar is proposed to model the process of object fragmentation during cutting, which abstracts the spatial arrangement of fragments as node variables and captures the causality of cutting actions based on the fragmentation of parts.

Aggregating Single-wheeled Mobile Robots for Omnidirectional Movements
Meng Wang*, Yao Su*, Hang Li, Jiarui Li, Jixiang Liang, Hangxin Liu
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023
Paper / Video (YouTube, bilibili) / Project Page

We present a novel modular robot system capable of self-reconfiguration through magnetic docking, achieving omnidirectional movements for collaborative object transportation. Each robot in the system is equipped with only a single steerable omni-wheel for navigation.

Sequential Manipulation Planning for Over-actuated Unmanned Aerial Manipulators
Yao Su*, Jiarui Li*, Ziyuan Jiao*, Meng Wang, Chi Chu, Hang Li, Yixin Zhu, Hangxin Liu
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023
(Finalist--IROS Best Paper Award on Mobile Manipulation)
Paper / Video (YouTube, bilibili) / Project Page

Instead of one-step aerial manipulation tasks, we investigate the sequential manipulation planning problem of UAMs, which requires coordinated motions of the vehicle’s floating base, the manipulator, and the object being manipulated over a long horizon.

L3 F-TOUCH: A Wireless GelSight with Decoupled Tactile and Three-axis Force Sensing
Wanlin Li*, Meng Wang*, Jiarui Li, Yao Su✉, Devesh K. Jha, Xinyuan Qian, Kaspar Althoefer, Hangxin Liu
* equal contributors
IEEE Robotics and Automation Letters (RA-L), 2023
Paper / Video (YouTube, bilibili) / Project Page

We present the L3 F-TOUCH sensor, which considerably enhances the three-axis force sensing capability of typical GelSight sensors while being Lightweight, Low-cost, and supporting wireLess deployment.

A Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps
Hangxin Liu*✉, Zeyu Zhang*, Ziyuan Jiao*, Zhenliang Zhang, Minchen Li, Chenfanfu Jiang, Yixin Zhu, Song-Chun Zhu
* equal contributors
Engineering, 2023
Paper / Video (YouTube, bilibili) / Project Page

To endow embodied AI agents with a deeper understanding of hand-object interactions, we design a data glove that can be reconfigured to collect grasping data in three modes: (i) forces exerted by the hand, measured with piezoresistive material; (ii) contact points, obtained from stable grasps in VR; and (iii) both visual and physical effects during manipulation, reconstructed by integrating physics-based simulation.

Rearrange Indoor Scenes for Human-Robot Co-Activity
Weiqi Wang*, Zihang Zhao*, Ziyuan Jiao*, Yixin Zhu, Song-Chun Zhu, Hangxin Liu
* equal contributors
IEEE International Conference on Robotics and Automation (ICRA), 2023
Paper / Video (YouTube, bilibili) / Project Page

We present an optimization framework that redesigns an indoor scene by rearranging its furniture, maximizing the free space in which service robots can operate while preserving human preferences for the scene layout.

Scene Reconstruction with Functional Objects for Robot Autonomy
Muzhi Han*, Zeyu Zhang*, Ziyuan Jiao, Xu Xie, Yixin Zhu, Song-Chun Zhu, Hangxin Liu
* equal contributors
International Journal of Computer Vision (IJCV), 2022
Paper / Project Page

We rethink the problem of scene reconstruction from an embodied agent’s perspective. The objects within a reconstructed scene are segmented and replaced by part-based articulated CAD models to provide actionable information and afford finer-grained robot interactions.

Sequential Manipulation Planning on Scene Graph
Ziyuan Jiao, Yida Niu, Zeyu Zhang, Song-Chun Zhu, Yixin Zhu, Hangxin Liu
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022
Paper / Video (YouTube, bilibili) / Project Page

We devise a 3D scene graph representation that abstracts scene layouts with succinct geometric information and valid robot-scene interactions, such that a valid task plan can be computed using the graph edit distance between the initial and final scene graphs while effectively satisfying constraints at the motion level.

Downwash-aware Control Allocation for Over-actuated UAV Platforms
Yao Su*, Chi Chu*, Meng Wang, Jiarui Li, Liu Yang, Yixin Zhu, Hangxin Liu
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022
Paper / Video (YouTube, bilibili) / Project Page

Leveraging the input redundancy in over-actuated UAVs, we tackle downwash effects between propellers with a novel control allocation framework that explores the entire allocation space for an optimal solution that reduces counteractions of airflows.

Understanding Physical Effects for Effective Tool-use
Zeyu Zhang*, Ziyuan Jiao*, Weiqi Wang, Yixin Zhu, Song-Chun Zhu, Hangxin Liu
* equal contributors
IEEE Robotics and Automation Letters (RA-L+IROS), 2022
Paper / Video (vimeo, bilibili)

Learning key physical properties of tool use from an FEM-based simulation and enacting those properties via an optimal control-based motion planning scheme produces tool-use strategies drastically different from observations, yet requiring the least joint effort.

Object Gathering with a Tethered Robot Duo
Yao Su, Yuhong Jiang, Yixin Zhu, Hangxin Liu
IEEE Robotics and Automation Letters (RA-L+ICRA), 2022
Paper / Video (vimeo, bilibili) / Project Page

A cooperative planning framework that generates optimal trajectories for a robot duo, tethered by a flexible net, to gather scattered objects spread over a large area. Model Reference Adaptive Control (MRAC) handles the unknown dynamics of carried payloads.

Patching Interpretable And-Or-Graph Knowledge Representation using Augmented Reality
Hangxin Liu, Yixin Zhu, Song-Chun Zhu
Applied AI Letters, 2021   (DARPA XAI Special Issue)
Paper

Given an interpretable And-Or-Graph knowledge representation, the proposed AR interface allows users to intuitively understand and supervise the robot's behaviors, and to interactively teach the robot new actions.

Consolidating Kinematic Models to Promote Coordinated Mobile Manipulations
Ziyuan Jiao*, Zeyu Zhang*, Xin Jiang, David Han, Song-Chun Zhu, Yixin Zhu, Hangxin Liu
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
Paper / Video / Code

Constructing a Virtual Kinematic Chain (VKC) that readily consolidates the kinematics of the mobile base, the arm, and the object to be manipulated in mobile manipulations.

Efficient Task Planning for Mobile Manipulation: a Virtual Kinematic Chain Perspective
Ziyuan Jiao*, Zeyu Zhang*, Weiqi Wang, David Han, Song-Chun Zhu, Yixin Zhu, Hangxin Liu
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
Paper / Video / Code

The VKC perspective is a simple yet effective method to improve task planning efficacy for mobile manipulation.

Reconstructing Interactive 3D Scenes by Panoptic Mapping and CAD Model Alignments
Muzhi Han*, Zeyu Zhang*, Ziyuan Jiao, Xu Xie, Yixin Zhu, Song-Chun Zhu, Hangxin Liu
* equal contributors
IEEE International Conference on Robotics and Automation (ICRA), 2021
Paper / Video / Code

Reconstructing an interactive scene from RGB-D data stream by panoptic mapping and organizing object affordance and contextual relations by a graph-based scene representation.

Human-Robot Interaction in a Shared Augmented Reality Workspace
Shuwen Qiu*, Hangxin Liu*, Zeyu Zhang, Yixin Zhu, Song-Chun Zhu
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
Paper / Video / Code / Presentation

Physical robots can control and alter virtual objects in AR as active agents and proactively interact with human agents, rather than merely executing received commands passively.

WalkingBot: Modular Interactive Legged Robot with Automated Structure Sensing and Motion Planning
Meng Wang, Yao Su, Hangxin Liu, Yingqing Xu
IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2020
Paper

A modular robot system that allows non-expert users to build a multi-legged robot in various morphologies using a set of building blocks with sensors and actuators embedded.

Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs
Tao Yuan, Hangxin Liu, Lifeng Fan, Zilong Zheng, Tao Gao, Yixin Zhu, Song-Chun Zhu
IEEE International Conference on Robotics and Automation (ICRA), 2020
Paper / Video / Presentation

A graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.

Congestion-aware Evacuation Routing using Augmented Reality Devices
Zeyu Zhang, Hangxin Liu, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu
IEEE International Conference on Robotics and Automation (ICRA), 2020
Paper / Video / Code / Presentation

An AR-based indoor evacuation system with a congestion-aware routing solution.

Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Human-like Commonsense
Yixin Zhu, Tao Gao, Lifeng Fan, Siyuan Huang, Mark Edmonds, Hangxin Liu, Feng Gao, Chi Zhang, Siyuan Qi, Ying Nian Wu, Joshua B. Tenenbaum, Song-Chun Zhu
Engineering, 2020
Paper

A comprehensive review of cognitive AI and visual commonsense (Functionality, Physics, Intent, Causality).

A tale of two explanations: Enhancing human trust by explaining robot behavior
Mark Edmonds*, Feng Gao*, Hangxin Liu*, Xu Xie*, Siyuan Qi, Brandon Rothrock, Yixin Zhu, Ying Nian Wu, Hongjing Lu, Song-Chun Zhu
* equal contributors
Science Robotics, 2019
Paper

Learning a multi-modal knowledge representation and fostering human trust by producing explanations.

VRGym: A Virtual Testbed for Physical and Interactive AI
Xu Xie, Hangxin Liu, Zhenliang Zhang, Yuxing Qiu, Feng Gao, Siyuan Qi, Yixin Zhu, Song-Chun Zhu
ACM Turing Celebration Conference - China (ACM TURC), 2019
Paper / Video / Code

A testbed with fine-grained physical effects and realistic human-robot interactions for training embodied agents.

Self-Supervised Incremental Learning for Sound Source Localization in Complex Indoor Environment
Hangxin Liu*, Zeyu Zhang*, Yixin Zhu, Song-Chun Zhu
* equal contributors
IEEE International Conference on Robotics and Automation (ICRA), 2019
Paper / Video / Code

The robot 'labels' received data through its own exploration and refines its predictive model on the fly.

High-Fidelity Grasping in Virtual Reality using a Glove-based System
Hangxin Liu*, Zhenliang Zhang*, Xu Xie, Yixin Zhu, Yue Liu, Yongtian Wang, Song-Chun Zhu
* equal contributors
IEEE International Conference on Robotics and Automation (ICRA), 2019
Paper / Video / Code

A data glove for natural human grasping activities in VR and for cost-effective grasping data collections.

Mirroring without Overimitation: Learning Functionally Equivalent Manipulation Actions
Hangxin Liu, Chi Zhang, Yixin Zhu, Chenfanfu Jiang, Song-Chun Zhu
AAAI Conference on Artificial Intelligence (AAAI), 2019
Paper

Learning functionally equivalent actions that produce similar effects instead of learning trajectory cues.

Interactive Robot Knowledge Patching using Augmented Reality
Hangxin Liu*, Yaofang Zhang*, Wenwen Si, Xu Xie, Yixin Zhu, Song-Chun Zhu
* equal contributors
IEEE International Conference on Robotics and Automation (ICRA), 2018
Paper / Video / Code

An AR system for users to diagnose the robot's problems, correct wrong behaviors, and add the corrections to the robot's knowledge.

Unsupervised Learning using Hierarchical Models for Hand-Object Interactions
Xu Xie*, Hangxin Liu*, Mark Edmonds, Feng Gao, Siyuan Qi, Yixin Zhu, Brandon Rothrock, Song-Chun Zhu.
* equal contributors
IEEE International Conference on Robotics and Automation (ICRA), 2018
Paper / Video / Code

An unsupervised learning approach for manipulation event segmentation and parsing using hand gesture and force data.

A Glove-based System for Studying Hand-Object Manipulation via Joint Pose and Force Sensing
Hangxin Liu*, Xu Xie*, Matt Millar*, Mark Edmonds, Feng Gao, Yixin Zhu, Veronica Santos, Brandon Rothrock, Song-Chun Zhu.
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017
Paper / Code

An easy-to-replicate glove-based system for real-time collection of hand pose and force during fine manipulative actions.

Feeling the Force: Integrating Force and Pose for Fluent Discovery through Imitation Learning to Open Medicine Bottles
Mark Edmonds*, Feng Gao*, Xu Xie, Hangxin Liu, Siyuan Qi, Yixin Zhu, Brandon Rothrock, Song-Chun Zhu.
* equal contributors
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017
Paper / Video / Code

Learning an action planner through both a top-down stochastic grammar model (And-Or graph) and a bottom-up discriminative model from observed poses and forces.

Reliable Infrastructural Urban Traffic Monitoring Via Lidar and Camera Fusion
Yi Tian, Hangxin Liu, Tomonari Furukawa.
SAE International Journal of Passenger Cars-Electronic and Electrical Systems, 2017
Paper

Non-Field-Of-View Sound Source Localization Using Diffraction and Reflection Signals
Kuya Takami, Hangxin Liu, Tomonari Furukawa, Makoto Kumon, Gamini Dissanayake.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016
Paper

Design of Highly Reliable Infrastructural Traffic Monitoring Using Laser and Vision Sensors
Hangxin Liu, Yi Tian, Tomonari Furukawa.
ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (ASME IDETC), 2016
Paper

Recursive Bayesian Estimation of NFOV Target using Diffraction and Reflection Signals
Kuya Takami, Hangxin Liu, Tomonari Furukawa, Makoto Kumon, Gamini Dissanayake.
ISIF International Conference on Information Fusion (FUSION), 2016
Paper


Design and source code from Jon Barron's website