A Review of Imitation Learning in Natural Human-Robot Interaction

Authors

  • Zhenyu Shi

DOI:

https://doi.org/10.56028/aetr.15.1.1442.2025

Keywords:

Imitation learning, human-robot interaction, gesture imitation, gaze imitation, contact-rich tasks.

Abstract

This paper reviews over twenty recent studies on applying imitation learning (IL) to natural human-robot interaction (HRI). Traditional rule-based behavior modeling offers transparency and controllability but lacks flexibility and adaptability. This motivates the adoption of IL, which allows robots to acquire desired behaviors directly from demonstrations. This review summarizes representative IL applications in gesture imitation, gaze imitation, and contact-rich tasks, highlighting their contributions to enhancing the social interactivity of robots. Current challenges include model collapse, limited interpretability, and the difficulty of collecting contact-dynamics data. Finally, future research directions are outlined, such as mitigating model collapse through real-data augmented reflow (RA Reflow) and its updated versions, increasing interpretability with the R2RISE framework, and achieving data-efficient IL via multi-sensor fusion and high-dimensional tactile representations learned from limited demonstrations.

Published

2025-11-20