The AI That Recreated Obama

Kate Hollowood
February 9, 2018

Digital versions of people could soon be putting words in their mouths. Technology now makes it possible to create fake but convincing videos of real people; the trick will be telling the fakes apart from the real thing.

New technology shows that it’s possible to create fake but highly convincing videos of anyone, and the researchers behind it have started with former US President Barack Obama.

The AI software analysed 17 hours of footage of Obama and learned how the shape of his mouth, wrinkles and chin change as he talks. It can now take a piece of Obama audio and generate a realistic video in which he appears to speak the words.
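The core idea of mapping audio to mouth shapes can be sketched in miniature. This is a toy illustration, not the study's actual method (the researchers trained a recurrent neural network on real footage): here a simple least-squares fit learns to map per-frame audio features to 2-D mouth landmark positions, using synthetic data in place of the hours of aligned video. The feature dimensions, landmark count and data are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FRAMES, AUDIO_DIM, N_LANDMARKS = 500, 13, 18  # e.g. MFCC-like audio features

# Synthetic stand-in for training footage: one row of audio features per
# video frame, paired with the mouth landmarks traced in that frame.
true_W = rng.normal(size=(AUDIO_DIM, N_LANDMARKS * 2))
audio = rng.normal(size=(N_FRAMES, AUDIO_DIM))
landmarks = audio @ true_W + 0.01 * rng.normal(size=(N_FRAMES, N_LANDMARKS * 2))

# Least-squares fit: audio features -> flattened (x, y) landmark coordinates.
W, *_ = np.linalg.lstsq(audio, landmarks, rcond=None)

def predict_mouth(audio_frame):
    """Predict an (N_LANDMARKS, 2) mouth shape for one frame of audio."""
    return (audio_frame @ W).reshape(N_LANDMARKS, 2)

shape = predict_mouth(audio[0])
print(shape.shape)  # (18, 2)
```

In the real system, predicted mouth shapes like these would then drive the synthesis of photorealistic mouth textures composited onto target video, which is where most of the difficulty lies.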

“Creating a photorealistic talking head model – a virtual character that sounds and appears real – has long been a goal both in digital special effects and in the computer graphics research community,” reads the University of Washington study, which was published in July 2017.

However, before the recent explosion of freely available video online, data was more limited. Earlier experiments therefore had to take the more laborious route of filming subjects in the lab reading phonetically rich scripts in order to generate enough data.

Some have expressed concerns that the AI system could add a dangerous new dimension to the issue of text-based fake news. However, the researchers explain that it would be easy to create a tool that could distinguish between real and synthesised videos. While giveaway differences might be invisible to the human eye, such as slight blurring around the teeth and lips, a computer would be able to pick them up.   

The study, which was funded by Samsung, Google, Facebook, Intel and the University of Washington, suggests the technology could one day be used to create simulated versions of real people in movies and virtual or augmented reality games. It could also generate video to accompany audio calls, helping people with hearing loss lip-read.

Video calls could be improved by the tech, too. When a teleconference buffers, the video often drops out while the audio carries on unaffected. The AI could help fill in the blanks, creating a smoother (and less embarrassing) experience.

The researchers note that the technology cannot yet predict and model the emotion behind a piece of audio. Gaining some emotional intelligence will likely be crucial for the AI to flourish, particularly in creating digital humans for the gaming and film industries.
