Two Minute Papers shows off the fascinating work in the paper “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models.” Could this be the next evolution of GANs? It certainly will empower a whole new wave of deep fakes.
What a time to be alive, indeed!
In this TED talk, Supasorn Suwajanakorn talks about the more positive outcomes of deep fake technology.
If you’re in the AI space, you’ve likely already heard of deep fakes: highly realistic generated videos powered by GANs. This week, they were the subject of a House Intelligence Committee hearing on Thursday. As Miles O’Brien reports in the slightly panic-mongering news segment, the accelerating speed of computers and advances in machine learning make deep fakes ever harder to detect, amid growing fears of their weaponization.
As machines get smarter, they have reached the point where they learn by themselves and even make their own decisions.
Here’s an interesting look at 10 times AI displayed amazing capabilities.
There are machines that dream, read words in people’s brains, and evolve themselves into art masters. The darker skills are enough to make anyone […]
Two Minute Papers explores the paper “Deferred Neural Rendering: Image Synthesis using Neural Textures” and how it will create even more believable Deep Fakes.
Here’s an interesting look at the implications of Deep Fakes and how the state of the art in the field is moving faster all the time.
As the quality of deep fakes gets better and they become easier and faster to make, the technology’s potential for harm grows ever greater.
Bloomberg’s QuickTake explains how good deep fakes have gotten in the last few months, and what’s being done to counter them.
If you thought the technology behind Deep Fakes was impressive, then you will be floored by this demonstration of “Video-to-Video Synthesis.”
Best of all: the code is available on GitHub.
I’ve blogged about the technology behind Deep Fakes before, but here’s a look at the technology from BBC Click.