Robust In-Hand Reorientation with Hierarchical RL-Based Motion Primitives and Model-Based Regrasping
Code | Appendix | Dataset | CAD Files
The proposed approach won first place in the in-hand manipulation track of the 9th Robotic Grasping and Manipulation Competition (RGMC), held at ICRA 2024, and was also awarded the Most Elegant Solution across all tracks of the RGMC.
The paper has been submitted to IEEE RA-P.
Video
Abstract
In-hand manipulation has become increasingly popular in recent robotics research, likely driven by the growing interest in humanoids and Artificial General Intelligence (AGI). Although existing works show promising results, they are typically limited to laboratory conditions: their reliance on expensive dexterous hands, depth cameras, and tactile sensors hinders algorithm reproduction and large-scale deployment. To address this issue, this paper proposes a practical solution to the classic in-hand reorientation task. The method has a hierarchical structure: several in-hand motion primitives, such as object rotation and flipping, are trained with Reinforcement Learning (RL), and a high-level decision module switches between these primitives to achieve continuous in-hand reorientation. The system runs on a low-cost LEAP Hand and requires only a single RGB camera. It is validated on the cube reorientation task benchmarked at the 9th Robotic Grasping and Manipulation Competition (RGMC) at ICRA 2024. Implementation details and evaluation results are discussed in the paper. We also open-source hardware designs, code, and videos to encourage further development in this area.
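To illustrate the hierarchical structure described above, here is a minimal sketch of a high-level decision module choosing between low-level motion primitives. All names (`CubeState`, `select_primitive`, the face labels, and the primitive names) are illustrative assumptions, not the interfaces of the released code; the actual RL policies and state estimation are far richer.

```python
# Hypothetical sketch of the high-level decision module that switches
# between RL-trained motion primitives (rotation, flipping). The real
# system estimates the cube pose from a single RGB camera; here we take
# the estimated faces as given.

from dataclasses import dataclass


@dataclass
class CubeState:
    """Estimated cube state (illustrative): which face is up now vs. the goal."""
    current_face: str  # face currently on top, one of U/D/L/R/F/B
    goal_face: str     # face requested by the reorientation task


# Opposite-face lookup for a cube.
OPPOSITE = {"U": "D", "D": "U", "L": "R", "R": "L", "F": "B", "B": "F"}


def select_primitive(state: CubeState) -> str:
    """Pick a motion primitive for the current goal.

    - 'hold'   : goal already reached, maintain a stable grasp
    - 'flip'   : goal face is opposite the current face, so a flipping
                 primitive (or two successive flips) is needed
    - 'rotate' : goal face is adjacent, so an in-hand rotation suffices
    """
    if state.current_face == state.goal_face:
        return "hold"
    if OPPOSITE[state.current_face] == state.goal_face:
        return "flip"
    return "rotate"
```

For example, `select_primitive(CubeState("U", "D"))` returns `"flip"`, while `select_primitive(CubeState("U", "R"))` returns `"rotate"`. The design point is that the primitives are trained independently and the switching logic stays simple and model-based.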
Contact
If you have any questions, feel free to contact the authors: Yongpeng Jiang, jiangyp19@gmail.com.
Yongpeng Jiang's homepage: https://director-of-g.github.io/.