Generative Adversarial Networks (GANs) hold the state of the art in image generation.
However, while the rest of computer vision is slowly being taken over by transformers and other attention-based architectures, all working GANs to date contain some form of convolutional layer. This paper changes that and builds TransGAN, the first GAN where both the generator and the discriminator are transformers. The discriminator is adopted from ViT ("An Image is Worth 16x16 Words"), and the generator uses PixelShuffle to up-sample the generated resolution. Three tricks make training work: data augmentation using DiffAug, an auxiliary super-resolution task, and a localized initialization of self-attention.
Their largest model reaches performance competitive with the best convolutional GANs on CIFAR-10, STL-10, and CelebA.
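The pixel-shuffle up-sampling the generator relies on is simple to state: trade channel depth for spatial resolution. Below is a minimal NumPy sketch of the operation (matching the semantics of PyTorch's `nn.PixelShuffle`), not TransGAN's actual generator code:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    Each group of r*r channels is interleaved into an r-by-r spatial
    block, which is how a transformer generator can grow its output
    resolution without transposed convolutions.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# A 2x up-sample: 4 channels of 1x1 become 1 channel of 2x2.
x = np.arange(4, dtype=np.float32).reshape(4, 1, 1)
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 2, 2)
```

Because the operation is a pure reshape-and-transpose, it adds no parameters; the preceding transformer blocks learn to place the right information into the channels that end up at each spatial offset.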
Thomas Maurer joins Scott Hanselman to show how developers can be more collaborative in developing, deploying, and managing applications across multiple environments and clouds.
Build and deploy a truly consistent app experience everywhere in your hybrid cloud using your existing DevOps pipelines, Kubernetes manifests, and Helm charts – as well as your choice of tools. Azure Arc is integrated with GitHub, Azure Monitor, Security Center, Update Management, and more.
Karolina Sowinska discusses the most important topic in Machine Learning right now, namely model explainability.
It is one of the hottest discussion points in the data community, because ultimately, if we cannot understand how a model arrives at its predictions, the model is useless in many practical applications.
In an otherwise unremarkable-looking cluster of industrial buildings somewhere in the southeast of France, a team of engineers is attempting to tackle one of science’s most intractable problems – how to summon the power of a star.
If they pull it off, they’ll solve mankind’s greatest existential problems in one stroke.