A German doctoral student's research is moving us closer to an AI skill that has so far gone unrealized: improvisation.
According to Sweden's Chalmers University of Technology, robots lack that kind of flexibility: they need exact instructions, and imprecision can disrupt a whole workflow. That's where Maximilian Diehl comes in, with a research project that aims to develop a new way of training AIs that leaves room for them to operate in changeable environments.
In particular, Diehl is concerned with building AIs that can work alongside people and adapt to the unpredictable nature of human behavior. “Robots that work in human environments need to be adaptable to the fact that humans are unique, and that we might all solve the same task in a different way,” Diehl said.
In other words, a robot that works alongside humans needs to be able to adapt to objects left out of place, people in its way, and other elements of human chaos.
Diehl also wants the AI to be explainable, meaning it can make clear to humans how and why it made a decision, and to take a human-like approach to solving problems. Chalmers Assistant Professor of Electrical Engineering Karinne Ramirez-Amaro said that humans break a single task into a chain of sub-goals, each of which can be carried out directly in service of the overall end.
“Instead of teaching the robot an exact imitation of human behavior, we focused on identifying what the goals were, looking at all the actions that the people in the study performed,” Ramirez-Amaro said.
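The idea of identifying goals rather than imitating motions can be sketched in a few lines. This is purely illustrative and not the Chalmers team's code: the traces, state labels, and function names below are invented. The point is that two volunteers who move very differently still produce the same sequence of state changes, and it is that sequence the robot learns.

```python
# Illustrative sketch: reduce a demonstration to the state change each
# action achieved, so different human strategies collapse to the same
# sub-goal sequence. All names and traces here are hypothetical.

def extract_subgoals(trace):
    """Map a (state_before, action, state_after) trace to its effects."""
    return [tuple(sorted(after - before)) for before, _, after in trace]

# Two volunteers stack a cube with different motions...
demo_a = [
    (set(), "reach_left_then_grasp", {"holding_cube"}),
    ({"holding_cube"}, "slow_place", {"cube_on_stack"}),
]
demo_b = [
    (set(), "reach_right_then_grasp", {"holding_cube"}),
    ({"holding_cube"}, "quick_place", {"cube_on_stack"}),
]

# ...but both demonstrations yield the same sub-goal sequence:
print(extract_subgoals(demo_a) == extract_subgoals(demo_b))  # True
```

Discarding the exact motion and keeping only the achieved effect is what lets the learned task description transfer to people, and starting conditions, the robot has never seen.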
The objective for the researchers was to get a robot arm to stack a pile of blocks, but with a different set of starting conditions each time. To gather the data needed to train the robot, the researchers had volunteers stack the blocks in a VR environment while their movements were tracked with laser sensors; that tracking data was the only input the robot received.
TIAGo robot arm used in the experiment
“The AI focused on extracting the intent of the sub-goals, and built libraries consisting of different actions for each one,” Chalmers University said. The AI also created a plan that could be used by the TIAGo robot, pictured above, which they said “was able to automatically generate a plan for a given task of stacking cubes… even when surrounding conditions were changed.”
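The "libraries of different actions for each sub-goal" idea can be illustrated with a minimal planner. This is a sketch under assumed names (the sub-goals, actions, and state keys below are hypothetical, not drawn from the TIAGo system): each sub-goal holds a small library of candidate actions, and the planner picks whichever action's precondition matches the current world state, so the same task yields different plans under different starting conditions.

```python
# Hypothetical sketch of per-sub-goal action libraries plus a planner
# that resolves each sub-goal against the current state.

SUBGOAL_LIBRARY = {
    "grasp_cube": [
        ("pick_from_table", lambda s: s["cube_on"] == "table"),
        ("pick_from_stack", lambda s: s["cube_on"] == "stack"),
    ],
    "place_cube": [
        ("place_on_stack", lambda s: s["holding"]),
    ],
}

def make_plan(state, subgoals):
    """Build a plan by choosing one applicable action per sub-goal."""
    plan = []
    for goal in subgoals:
        for action, precondition in SUBGOAL_LIBRARY[goal]:
            if precondition(state):
                plan.append(action)
                break
        else:
            raise ValueError(f"no applicable action for {goal}")
        # Naively update the state to reflect the action (illustration only).
        state["holding"] = (goal == "grasp_cube")
    return plan

# The same sub-goals produce different plans for different start states:
print(make_plan({"cube_on": "table", "holding": False},
                ["grasp_cube", "place_cube"]))
print(make_plan({"cube_on": "stack", "holding": False},
                ["grasp_cube", "place_cube"]))
```

The first call plans `pick_from_table`, the second `pick_from_stack`, before the shared `place_on_stack` step: the plan adapts to the surroundings, as the quoted passage describes, without the sub-goals themselves changing.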
Diehl said the robot was able to make plans with a 92 percent success rate after watching a single human demonstration. By adding data from 11 additional trials, the success rate rose to 100 percent.
For the next phase of the project, Diehl and his team will work on developing a method for helping robots communicate with humans, explaining how and why something has gone wrong.
“It might still take several years until we see genuinely autonomous and multi-purpose robots, mainly because many individual challenges still need to be addressed. However, we believe that our approach will contribute to speeding up the learning process of robots, allowing them to connect all these aspects and apply them in new situations,” Diehl said. ®