For fans of Hayao Miyazaki, Makoto Shinkai, and Satoshi Kon, a research team from Wuhan University and Hubei University of Technology in China has created a way to turn photographs into anime-style background art.

Described in the paper “AnimeGAN: A Novel Lightweight GAN for Photo Animation,” the technique combines neural style transfer with generative adversarial networks (GANs) and achieves fast, high-quality results with a lightweight framework.
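For readers unfamiliar with the GAN half of that recipe: a generator network learns to translate photos into anime-style images, while a discriminator network learns to tell the generator’s output apart from real anime frames, and the two are trained against each other. The sketch below is a minimal PyTorch illustration of that adversarial loop; it is not the authors’ AnimeGAN architecture, and the layer sizes, the L1 content loss (a crude stand-in for the paper’s perceptual losses), and the random stand-in batches are all assumptions.

```python
# Minimal sketch of the adversarial setup behind photo-to-anime translation.
# NOT the authors' AnimeGAN architecture -- layer sizes, losses, and the
# random stand-in data below are illustrative assumptions only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a photo to a stylized image of the same size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Outputs patch-level logits: does this look like real anime art?"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss = nn.BCEWithLogitsLoss()
content_loss = nn.L1Loss()  # crude stand-in for a VGG feature (perceptual) loss

photo = torch.rand(4, 3, 256, 256)  # stand-in batch of real-world photos
anime = torch.rand(4, 3, 256, 256)  # stand-in batch of anime frames

# Discriminator step: push real anime toward 1, generated images toward 0.
fake = G(photo).detach()
real_logits, fake_logits = D(anime), D(fake)
d_loss = (adv_loss(real_logits, torch.ones_like(real_logits)) +
          adv_loss(fake_logits, torch.zeros_like(fake_logits)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while preserving the photo's content.
fake = G(photo)
fake_logits = D(fake)
g_loss = (adv_loss(fake_logits, torch.ones_like(fake_logits)) +
          10.0 * content_loss(fake, photo))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The content term is what keeps the output recognizably the same scene as the input photo while the adversarial term pulls its look toward anime; the “lightweight” claim in the paper’s title refers to keeping this machinery small enough to run fast.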

The study’s authors, Jie Chen, Gang Liu, and Xin Chen, presented their findings at the International Symposium on Intelligence Computation and Applications back in 2019, and the work was recently highlighted by Japanese tech outlet ITmedia. AnimeGAN can help artists save time when illustrating the lines, textures, colors, and shadows of realistic backgrounds.

Recently, researchers from MIT introduced a new AI system known as Timecraft that can synthesize time-lapse videos depicting how a given painting might have been created.

According to the researchers, a painting can emerge from many different combinations of brushes, strokes, and colors, and the goal of the research is to learn to capture that rich range of possibilities.

Recreating a famous painting exactly can take even skilled artists days. With the advent of AI and machine learning, however, the past few years have seen the emergence of a number of AI artists. One of the best-known examples is the portrait of Edmond Belamy, which was created by a generative adversarial network (GAN) and sold for an incredible $432,500.

Machine learning is capable of doing some amazing things. However, the state of the art tends to be limited to academic labs and large corporations. What would happen if artists, filmmakers, and the creative community had access to cutting-edge technology without the heavy investment in research and development?

The Verge looks into just that.

Say you’re an animator on a budget who wants to turn a video of a human actor into a 3D model. Instead of hiring expensive motion capture equipment, you could use Runway to apply a neural network called “PoseNet” to your footage, creating wireframe models of your actor that can then be exported for animation.
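Runway’s selling point is that this takes no code at all, but it helps to see what such a model actually produces. The sketch below uses MediaPipe Pose, a different but freely available pose estimator, as a stand-in for PoseNet; the input filename is a placeholder, and the output is simply a list of per-frame body keypoints, i.e. the raw material for a wireframe skeleton.

```python
# Rough illustration of pose estimation on video, using MediaPipe Pose as a
# stand-in for PoseNet (Runway hides this kind of code behind its interface).
# The input path is a placeholder -- swap in your own footage.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture("actor_footage.mp4")  # hypothetical input video

frame_keypoints = []
with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV reads frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Each landmark is a normalized (x, y, z) body keypoint (wrists,
            # elbows, hips, ...) -- the joints of a wireframe skeleton.
            frame_keypoints.append(
                [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
            )
cap.release()
print(f"Extracted keypoints for {len(frame_keypoints)} frames")
```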

Art has always reflected the zeitgeist, and AI is proving to be no exception. Here’s an interesting look at Holly Herndon’s recent work in this space.

Like so much art these days, Herndon’s work is a reflection of the times. Its nuanced synthesis of electronic manipulation and pop songcraft, including choral vocals from an international ensemble, digitized voice renderings from an AI baby she’s raising named Spawn, and her own vocoder-enhanced voice, feels like a move to shift genre and conversation. To that end, PROTO’s thoughtful, futuristic concept is very much in the vein of game-changing albums like Kanye West’s Yeezus.

Just over two years ago, The Met launched its Open Access program, making the images and data of public-domain works in the museum’s collection available for unrestricted use under a Creative Commons Zero (CC0) designation.

The program plays an important role in The Met’s mission to broaden its global reach by making the museum’s collection one of the most accessible, discoverable, and useful on the internet. See how The Met is now working with AI to generate new knowledge about each artwork at scale and uncover latent insights.
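That accessibility is literal: the collection is exposed through The Met’s free, keyless public Collection API, which anyone can query. A quick Python sketch (the search term here is arbitrary):

```python
# Exploring The Met's open-access collection via its public API.
# No API key is required; the search term is arbitrary.
import requests

BASE = "https://collectionapi.metmuseum.org/public/collection/v1"

# Search for object IDs matching a query, restricted to works with images.
hits = requests.get(
    f"{BASE}/search", params={"q": "sunflowers", "hasImages": "true"}
).json()

# Pull full metadata for the first few results.
for object_id in (hits.get("objectIDs") or [])[:3]:
    obj = requests.get(f"{BASE}/objects/{object_id}").json()
    if obj.get("isPublicDomain"):
        print(obj["title"], "|", obj["artistDisplayName"], "|", obj["primaryImageSmall"])
```

Structured records like these, with titles, artists, dates, and direct image links, are exactly the kind of input the museum’s AI experiments need to generate new knowledge about each artwork at scale.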