Friday, April 26, 2024

Household robots could learn skills in a virtual world where chores never end


Video: https://www.youtube.com/watch?v=qAMJvKpyPP4

Before robots can really help out around the home, they may have to train in a virtual world. That’s the aim of a new research project in which a Sims-like system called “VirtualHome” helps artificial intelligence (A.I.) characters perform everyday activities, one step at a time.

To us, VirtualHome looks like a Lynchian nightmare where chores never end. For robots, it's something of a training ground.

Humans have a talent for inference, and we take it for granted. If you were told to vacuum the rug, you'd presumably have no problem completing the task without breaking it down into individual steps: walk to the closet, open the closet, grab the vacuum, carry it to the rug, plug it in, and so on. Machines, on the other hand, need to process each one of these subtasks explicitly to get the job done.

The goal of VirtualHome is to help robots learn tasks by first experiencing them in a virtual system. In the current system, an avatar can perform 1,000 separate actions in eight different settings, including a living room, kitchen, and home office. The project is led by researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of Toronto, McGill University, and the University of Ljubljana in Slovenia.

“We were trying to find a way to model complex activities to better understand the steps needed to do them, so that we could better identify them in video and potentially teach robots to perform them,” Xavier Puig, a CSAIL doctoral student who led the research, told Digital Trends. “There are very few datasets that have videos of people or agents doing step-by-step household activities.”

Puig and his colleagues created programs that laid out each step of a specific task in skeletal detail. For example, the task of watching TV involves five subtasks: Walk to the TV, turn on the TV, walk to the sofa, sit on the sofa, and watch the TV.
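The "watch TV" task above can be sketched as an ordered program of (action, object) subtasks. This is a minimal illustration only, the names and data layout here are hypothetical and do not reflect VirtualHome's actual program format or API:

```python
# Hypothetical sketch: a household task represented as an ordered
# program of (action, object) subtasks, in the spirit of the
# step-by-step programs described in the article.

watch_tv = [
    ("walk", "tv"),
    ("switch_on", "tv"),
    ("walk", "sofa"),
    ("sit", "sofa"),
    ("watch", "tv"),
]

def execute(program):
    """Replay a program one subtask at a time, returning a log of steps."""
    log = []
    for step, (action, obj) in enumerate(program, start=1):
        log.append(f"step {step}: {action} {obj}")
    return log

for line in execute(watch_tv):
    print(line)
```

Representing a chore as explicit data like this is what lets a system replay it, render it as video, or compare it against a demonstration, rather than relying on the inference a human would perform implicitly.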

The researchers fed these program instructions to the VirtualHome system, which had the character act out the tasks. Videos of the agent performing these chores can then be used to further train robots, by giving them a visual example of what the actions look like.

In a recent paper, the researchers demonstrated that, by reviewing instructions or a video demo, their virtual agents could reconstruct the steps and perform the task. They hope that such a system could help train household robots and build up a database of tasks that can be easily communicated between humans and machines, through natural language processing.

The findings will be presented this month at the Computer Vision and Pattern Recognition conference in Salt Lake City, Utah.
