An AI program will soon be here to help your deepfake dancing – just don’t call it deepfake

This looks suspiciously like the move we’ve always wanted. Picture: UC Berkeley

One of the more controversial applications of AI on the internet, deepfakes, is here to show you how to dance.

Or at least, help you fool your friends into thinking you can floss like a 12-year-old, which no dignified adult should ever be caught doing.

Still, if you must, researchers at UC Berkeley have just published a paper outlining how they are using AI to copy one person’s dance moves and seemingly possess another person’s body to force them into the same moves.

Here’s a quick taste. The dancer is on the left, the two wannabes on the right:


You won’t find the word “deepfake” in the paper. The team prefers to call the method “‘do as I do’ motion transfer”.

Deepfake is the popular term for an AI-based technique that, broadly speaking, superimposes parts of one image onto another. The first widely reported use of real-time facial reenactment, by Professor Matthias Nießner of the Visual Computing Group, came back in 2016.

It was cool, and kind of amusing:

Until about 12 months later, when videos of what appeared to be Wonder Woman actress Gal Gadot having sex with her step-brother began popping up. And equally scandalous videos of the likes of Scarlett Johansson and Emma Watson and so on and so on.

In keeping with how the internet has evolved since forever, porn was once again right there at the forefront of enabling technology in imaginative and opportunistic new ways.

In April this year, even Barack Obama was deepfaked in a PSA warning about the danger of deepfakes, thanks to the combined genius of BuzzFeed CEO Jonah Peretti and Get Out director Jordan Peele.

Deepfakes are not cool, kids. Not only are they banned on Reddit, you can now be prosecuted in the UK for using them inappropriately.

Of course, people have been faking images to hurt each other ever since images were invented, but the level of realism deepfake technology can achieve now is at a point where it’s possible specific laws will need to be written to counter its dangerous misuse.

Given its rich history then, it’s no surprise the UC Berkeley team would rather stick to the “do as I do” naming convention. But they really have made a quantum leap from deepfaking someone’s face onto another body to deepfaking an entire body. Here’s the full mesmerising video:

It all happens within minutes of the source performing the original moves.

“With our framework, we create a variety of videos, enabling untrained amateurs to spin and twirl like ballerinas, perform martial arts kicks or dance as vibrantly as pop stars,” they say.

Right now, you need a camera capable of high-frame-rate footage and some good storage solutions – both of which you can get today with an expensive mobile phone.

“We filmed our target subject for around 20 minutes of real time footage at 120 frames per second,” the team says.
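Twenty minutes at 120 frames per second adds up fast, which is why storage matters. Here’s a back-of-envelope sketch of the data volume; the frame size assumes uncompressed 1080p RGB, which is my assumption rather than anything the paper specifies:

```python
# Back-of-envelope for the capture described above: 20 minutes at 120 fps.
# The per-frame size assumes uncompressed 1080p RGB video (an assumption;
# the team's actual resolution and codec may well differ).
minutes = 20
fps = 120
frames = minutes * 60 * fps          # total frames captured
bytes_per_frame = 1920 * 1080 * 3    # ~6.2 MB per uncompressed frame
total_gb = frames * bytes_per_frame / 1e9
print(frames, round(total_gb))       # 144000 frames, ~896 GB raw
```

Even with heavy video compression, that’s a lot of footage for one training subject.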

The short version: a source video is chosen, and a sub-program reduces both bodies to stick figures.

Several neural networks then go to work transferring the moves of the dancer onto the target, and smoothing the target body and face out.
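Part of making the transfer look right is accounting for the two people having different body sizes and standing in different places in the frame. Here’s a minimal sketch of that kind of pose normalization – my own simplification for illustration, not the team’s actual code; the function name, the linear scale-and-translate model, and the example numbers are all assumptions:

```python
# Hypothetical sketch of pose normalization: map the source dancer's 2D
# stick-figure keypoints into the target's coordinate frame by matching
# ankle position and overall body height (image y grows downward).
def normalize_pose(keypoints, src_ankle, src_height, tgt_ankle, tgt_height):
    """keypoints: list of (x, y) joints; ankles as (x, y); heights in pixels."""
    scale = tgt_height / src_height
    ax, ay = src_ankle
    bx, by = tgt_ankle
    # Scale each joint about the source ankle, then translate to the target ankle.
    return [(bx + (x - ax) * scale, by + (y - ay) * scale) for x, y in keypoints]

# A head keypoint 200 px above the source's ankle lands 100 px above the
# target's ankle when the target appears half as tall in frame.
head = normalize_pose([(50, 100)], src_ankle=(50, 300), src_height=200,
                      tgt_ankle=(80, 400), tgt_height=100)
print(head)  # [(80.0, 300.0)]
```

A simple linear mapping like this is exactly the sort of thing that breaks down on the subtler differences between bodies – which is one reason limb-length mismatches still trip the system up.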

It’s not perfect. The most obvious giveaway in the full video is probably in the target’s face, which is a little robotic:

Picture: UC Berkeley

The program isn’t quite up to handling loose clothing yet either, nor different limb lengths between the participants, nor a couple of the more subtle moves.

Obviously, someone will fix that in a matter of months and deepfake dancing will be at your fingertips, in the same way FakeApp unleashed chaos on the internet when it was released back in January.

Fake news just leveled up.