Facebook doesn’t just want its AI-trained bots to know how humans speak—it wants them to understand our faces, too.
In a newly published paper, the company’s AI researchers detail their efforts to train a bot to mimic human facial expressions during a conversation.
The researchers trained the bot on a series of YouTube videos of people having Skype conversations in which each participant’s face was clearly visible, using this footage as training data for their AI system.
Notably, the researchers didn’t teach the bot to recognize particular types of expressions, or the emotions associated with them, like “happy” or “sad.” Instead, they trained the system to recognize subtle patterns in users’ faces. These patterns, sometimes referred to as “micro expressions,” tend to be similar across people, even though our faces may look very different.
The image below shows the type of patterns the system learned to recognize.
By learning these patterns, the system was able to predict which expressions looked more human-like. The researchers tested their bot’s newfound abilities by asking human judges whether the bot’s animated expressions looked believable.
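The paper’s actual architecture isn’t described in this article, but the core idea of learning expression patterns from conversation footage can be sketched in miniature. The toy example below (entirely illustrative, with synthetic data standing in for landmark tracks extracted from video) fits a linear model that maps a short window of one speaker’s facial-landmark positions to the listener’s next expression; the landmark count, window size, and linear model are all assumptions, not Facebook’s method.

```python
import numpy as np

# Illustrative sketch only, NOT the paper's method: represent each video
# frame as a flat vector of 2-D facial landmark coordinates, then fit a
# linear map from the speaker's recent frames to the listener's frame.
rng = np.random.default_rng(0)

N_LANDMARKS = 68          # a common facial-landmark count; an assumption
DIM = N_LANDMARKS * 2     # x, y coordinate per landmark
WINDOW = 3                # frames of speaker history used as input

# Synthetic "conversation": listener frames are driven by speaker frames.
true_map = rng.normal(scale=0.1, size=(WINDOW * DIM, DIM))
speaker = rng.normal(size=(500, DIM))
X = np.stack([speaker[i:i + WINDOW].ravel()
              for i in range(len(speaker) - WINDOW)])
listener = X @ true_map + rng.normal(scale=0.01, size=(len(X), DIM))

# The "pattern learning" step, reduced to its simplest form: least squares.
coef, *_ = np.linalg.lstsq(X, listener, rcond=None)
pred = X @ coef
mse = float(np.mean((pred - listener) ** 2))
print(f"train MSE: {mse:.5f}")
```

A real system would use a far richer sequence model and real landmark tracks, but the shape of the problem is the same: predict plausible facial motion for one party in a conversation from the other party’s recent expressions.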
The researchers don’t delve into specific practical applications for their method, though they note that human-bot interactions are most effective when people are engaged with the “agent” (research speak for robot) they are interacting with.
Now, Facebook isn’t making a humanoid robot (that we know of, at least), but it’s not difficult to imagine this type of research influencing other areas Facebook has invested in, like virtual reality, though the researchers don’t address this particular use case.
The company has made big investments in social applications for virtual reality, including one called Facebook Spaces, which allows participants to interact with each other’s avatars in a virtual environment. This latest research could perhaps one day have implications for Facebook’s efforts to improve avatars in VR.
The research is still in its early days, but the scientists say they hope their work will inspire other groups to explore similar problems. You can read their full paper here.