Deepfakes leverage powerful techniques from machine learning and artificial intelligence to generate visual and audio content with such a high degree of realism that it has enormous potential to deceive.

This Medium article explores research and development efforts aimed at creating countermeasures to such bogus content.

In recent months, a number of mitigation mechanisms have been proposed, with neural networks and artificial intelligence at the heart of them. From this, it is clear that technologies that can automatically detect and assess the integrity of visual media are indispensable and in great need if we wish to fight back against these adversarial attacks (Nguyen, 2019).
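To make the idea concrete, here is a minimal sketch of what a frame-level detector can look like: a small convolutional network that takes a face crop and outputs a single "fake vs. real" score. This is a toy illustration in PyTorch, not any particular detector from the literature Nguyen surveys.

```python
# Toy frame-level deepfake detector: a small CNN that scores a face crop
# as real or fake. Illustrative only; not a published detector architecture.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pool
        )
        self.classifier = nn.Linear(64, 1)        # single logit: fake vs. real

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)                 # raw logit; apply sigmoid for a probability

if __name__ == "__main__":
    model = FrameDetector()
    frames = torch.randn(4, 3, 128, 128)          # a batch of 4 face crops (placeholder data)
    probs = torch.sigmoid(model(frames))
    print(probs.shape)                            # torch.Size([4, 1])
```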

Siraj Raval explores generative modeling technology.

This innovation is changing the face of the Internet as you read this. It’s now possible to design automated systems that can write novels, act as talking heads in videos, and compose music.

In this episode, Siraj explains how generative modeling works by demoing 3 examples that you can try yourself in your web browser. 
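To get a feel for what generative modeling looks like in practice, here is a small example you can run locally; it is my own illustration rather than one of the browser demos from the episode. It samples text from a pretrained GPT-2 model using the Hugging Face transformers library.

```python
# Sampling text from a pretrained GPT-2 model with Hugging Face `transformers`.
# A quick, self-contained taste of generative modeling (downloads the model on first run).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Deepfakes are changing the internet because"
samples = generator(prompt, max_length=60, num_return_sequences=2, do_sample=True)

for i, s in enumerate(samples, 1):
    print(f"--- sample {i} ---")
    print(s["generated_text"])
```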

Siraj Raval generates his own voice with AI using some cutting edge techniques.

This is a relatively new technology, and people have started generating not just celebrity voices but entire musical pieces as well. The technology to generate sounds, both voices and music, has been rapidly improving over the past few years thanks to deep learning. In this episode, I’ll first demo some AI-generated music. Then, I’ll explain the physics of a waveform and how DeepMind used waveform-based data to generate some pretty realistic sounds in 2016. At the end, I’ll describe the cutting edge of generative sound modeling, a paper released just two months ago called “MelNet”. Enjoy!
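For a concrete sense of what “waveform-based data” means, the sketch below builds a one-second sine tone as a sequence of amplitude samples and writes it to a WAV file. Digital audio is nothing more than this kind of sample sequence, and WaveNet-style models generate it one sample at a time. The sample rate, duration, and frequency are my own illustrative choices, not values from the paper.

```python
# The "physics of a waveform" in code: digital audio is a long sequence of
# amplitude samples taken at a fixed rate. This writes a plain sine tone to
# a WAV file; it illustrates the data format, it is not a generative model.
import math
import wave

import numpy as np

SAMPLE_RATE = 16_000          # samples per second
DURATION = 1.0                # seconds
FREQUENCY = 440.0             # Hz (concert A)

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
samples = 0.5 * np.sin(2 * math.pi * FREQUENCY * t)      # amplitudes in [-0.5, 0.5]
pcm = (samples * 32767).astype(np.int16)                 # 16-bit PCM encoding

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)          # mono
    f.setsampwidth(2)          # 2 bytes = 16 bits per sample
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())
```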