Cornell University researchers have developed a headset that bounces sound off the wearer's cheeks, converts the returning echoes into an avatar of a fully moving face, and transmits the speaker's facial expressions as they speak.
The team describes the system, called "EarIO," as "a low-power acoustic sensor for continuous tracking of detailed facial movements."
The team, led by Assistant Professor of Information Science Cheng Zhang and Professor of Information Science François Guimbretière, designed EarIO to transmit facial movements to a smartphone in real time; the system is compatible with commercially available earphones, enabling hands-free wireless video conferencing.
“Devices that track facial movements using a camera are large and heavy and require a lot of power, which is a big problem for wearable devices,” Zhang says in the press release posted on the Cornell University website.
"It is also important that it captures a lot of private information," he added, stressing that face tracking through voice technology could provide better privacy, affordability, convenience and better battery life.
The newly invented EarIO device works like a ship sending out sonar pulses. A speaker on each side of the earphone sends acoustic signals toward that side of the face, and a microphone picks up the echoes. When the wearer talks, smiles, or raises their eyebrows, the skin moves and stretches, changing the echo profile.
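The sonar-style idea described above can be sketched in a few lines: emit a short probe signal, record the reflection, and locate the echo with cross-correlation. This is a minimal illustration, not the EarIO implementation; the sample rate, chirp band, and delays are all hypothetical values chosen for the demo.

```python
import numpy as np

FS = 48_000   # sample rate in Hz (illustrative)
DUR = 0.01    # probe duration in seconds (illustrative)

def chirp(f0=16_000.0, f1=20_000.0):
    """A linear frequency sweep, the kind of probe signal sonar-style sensors emit."""
    t = np.arange(int(FS * DUR)) / FS
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * DUR)))

def simulate_echo(tx, delay_samples, attenuation=0.3):
    """A delayed, attenuated copy of the probe, standing in for a skin reflection."""
    rx = np.zeros(len(tx) + delay_samples)
    rx[delay_samples:] += attenuation * tx
    return rx

def echo_delay(tx, rx):
    """Estimate when the echo arrives from the cross-correlation peak."""
    corr = np.correlate(rx, tx, mode="full")
    return int(np.argmax(corr)) - (len(tx) - 1)

tx = chirp()
neutral = echo_delay(tx, simulate_echo(tx, delay_samples=40))  # resting face
smiling = echo_delay(tx, simulate_echo(tx, delay_samples=37))  # skin pulled closer
print(neutral, smiling)  # prints "40 37"
```

When the skin moves, the echo arrives at a different time, so the estimated delay shifts; a real system would track many such echo features per frame rather than a single delay.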
The researchers' deep-learning algorithm processes the data continuously, translating the changing echoes into complete facial expressions.