Large language models will teach home robots to correct mistakes on their own
Researchers at the Massachusetts Institute of Technology (MIT) have developed a new approach that allows home robots to use large language models (LLMs) to self-correct errors during tasks without requiring human intervention.
Here's What We Know
Traditionally, when a robot runs into a problem that exceeds its programmed capabilities, it stops and requires operator assistance. In a home, however, any change in the environment can disrupt the robot's performance, forcing it to restart the task from the beginning.
The new technique, which will be presented at the International Conference on Learning Representations (ICLR) in May, uses an LLM to break a demonstrated task into smaller subtasks. This lets the robot automatically recognise which stage of the task it is in and plan its next actions on its own when something goes wrong.
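To make the idea concrete, below is a minimal sketch, in Python, of the kind of decompose, localise, and replan loop described above. It is not the researchers' code: query_llm stands in for any LLM completion call, and execute, observe_state and succeeded are hypothetical placeholders for a real robot control stack.

```python
# Minimal sketch of the approach described above (not MIT's actual code).
# Assumptions: query_llm is a placeholder for any LLM completion call;
# execute, observe_state and succeeded are hypothetical robot-side helpers.

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("wire up an LLM provider here")

def decompose_task(task_description: str) -> list[str]:
    """Ask the LLM to split a demonstrated task into ordered subtasks."""
    reply = query_llm(
        f"Break the task '{task_description}' into short, ordered subtasks, "
        "one per line."
    )
    return [line.strip() for line in reply.splitlines() if line.strip()]

def locate_stage(subtasks: list[str], state_summary: str) -> int:
    """Ask the LLM which subtask the robot's current state corresponds to."""
    numbered = "\n".join(f"{i}: {s}" for i, s in enumerate(subtasks))
    reply = query_llm(
        f"Subtasks:\n{numbered}\nCurrent state: {state_summary}\n"
        "Reply with the index of the subtask the robot should do next."
    )
    return int(reply.strip())

def run(task_description: str, execute, observe_state, succeeded) -> None:
    """Execute subtasks in order; on failure, re-localise and resume."""
    subtasks = decompose_task(task_description)
    i = 0
    while i < len(subtasks):
        execute(subtasks[i])
        if succeeded(subtasks[i]):
            i += 1  # move on to the next subtask
        else:
            # A disturbance occurred: ask the LLM which stage the robot is
            # in now and replan from there instead of restarting the task.
            i = locate_stage(subtasks, observe_state())
```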
-"LLMs have a way to tell you how to do each step of a task, in natural language. A human’s continuous demonstration is the embodiment of those steps, in physical space. And we wanted to connect the two, so that a robot would automatically know what stage it is in a task, and be able to replan and recover on its own" said PhD student Tsun-Hsuan Wang.
In the experiments, the robot was taught, via demonstrations, to transfer balls from one container to another. The researchers introduced small disturbances, such as pushing the robot off course or knocking balls out of its spoon. With the help of the LLM, the robot corrected its actions and resumed the task without starting over.
-"With our method, when the robot is making mistakes, we don’t need to ask humans to program or give extra demonstrations of how to recover from failures" noted Wang.
The scientists expect that applying LLMs to home robotics will help overcome one of the key obstacles to the mass adoption of such devices.
Source: TechCrunch