Two Minute Papers explains the paper “Adversarial Latent Autoencoders.”
Two Minute Papers explores the paper “Neural Voice Puppetry: Audio-driven Facial Reenactment.”
A blog post on improving #deepfake performance with data is available here:
https://www.wandb.com/articles/improving-deepfake-performance-with-data
Two Minute Papers takes a look at the paper “First Order Motion Model for Image Animation”; the paper and its source code are available here:
Two Minute Papers examines the paper “CNN-generated images are surprisingly easy to spot…for now.”
Two Minute Papers dives into the paper “Analyzing and Improving the Image Quality of #StyleGAN.”
Source code is available here: https://github.com/NVlabs/stylegan2
You can even try it.
Two Minute Papers examines the paper “Few-shot Video-to-Video Synthesis” in this video.
The Economist examines the political ramifications of DeepFakes.
FlippedNormals takes a look at the future of CG art. Learn about some of the tools available today, as well as their predictions for the future.