
Learning from Human Directional Corrections

Title: Learning from Human Directional Corrections
Publication Type: Journal Article
Year of Publication: 2023
Authors: W. Jin, T. D. Murphey, Z. Lu, and S. Mou
Journal Title: IEEE Transactions on Robotics
Pages: 625–644
Date Published: February 2023
URL: https://ieeexplore.ieee.org/document/9852712
DOI: 10.1109/TRO.2022.3190221
Abstract: This article proposes a novel approach that enables a robot to learn an objective function incrementally from human directional corrections. Existing methods learn from human magnitude corrections; since a human needs to carefully choose the magnitude of each correction, those methods can easily lead to overcorrections and learning inefficiency. The proposed method only requires human directional corrections—corrections that only indicate the direction of an input change without indicating its magnitude. We only assume that each correction, regardless of its magnitude, points in a direction that improves the robot's current motion relative to an unknown objective function. The allowable corrections satisfying this assumption account for half of the input space, as opposed to the magnitude corrections, which have to lie in a shrinking level set. For each directional correction, the proposed method updates the estimate of the objective function based on a cutting plane method, which has a geometric interpretation. We have established theoretical results to show the convergence of the learning process. The proposed method has been tested in numerical examples, a user study on two human–robot games, and a real-world quadrotor experiment. The results confirm the convergence of the proposed method and further show that the method is significantly more effective (higher success rate), efficient/effortless (fewer human corrections needed), and potentially more accessible (fewer early wasted trials) than state-of-the-art robot learning frameworks.
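The cutting plane idea described in the abstract can be illustrated with a simplified sketch. The assumptions below are ours, not necessarily the paper's exact formulation: the unknown objective is linear in known features, J(u) = θ*·φ(u), and each directional correction yields a vector a with a·θ* > 0, i.e. a halfspace "cut" on the weight space. The paper's method updates the estimate using a center of the remaining feasible set; here that center is crudely approximated by averaging random unit vectors that satisfy every cut.

```python
import numpy as np

def estimate_theta(cuts, dim, n_samples=20000, seed=0):
    """Approximate the center of the feasible cone
    {theta : a . theta > 0 for every cut a} by Monte Carlo averaging."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n_samples, dim))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # uniform directions on the unit sphere
    if cuts:
        A = np.vstack(cuts)
        pts = pts[(pts @ A.T > 0).all(axis=1)]  # keep only directions satisfying all cuts
    center = pts.mean(axis=0)
    return center / np.linalg.norm(center)

# Toy demo: synthetic "corrections" consistent with a hidden weight vector.
true_theta = np.array([0.6, 0.8])  # hypothetical ground-truth objective weights
rng = np.random.default_rng(1)
cuts = []
for _ in range(15):
    a = rng.normal(size=2)
    if a @ true_theta < 0:
        a = -a  # flip so the cut is consistent with the hidden objective
    cuts.append(a / np.linalg.norm(a))

theta_hat = estimate_theta(cuts, dim=2)
```

With each new cut, the feasible cone shrinks around the true weight vector, so `theta_hat` drifts toward `true_theta`; this mirrors why convergence is plausible, though the paper's actual update and convergence proof use a proper cutting plane center rather than this sampling approximation.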