Researchers from MIT recently introduced a new AI system, known as Timecraft, that can synthesize time-lapse videos depicting how a given painting might have been created.

According to the researchers, a painting can emerge from many unique combinations of brushes, strokes, colors, and more; the goal of this research is to learn to capture this rich range of possibilities.

Reproducing a famous painting exactly can take even skilled artists days. However, with the advent of AI and ML, a number of AI artists have emerged in recent years. One of the most popular works of AI art is the portrait of Edmond Belamy, which was created by a Generative Adversarial Network (GAN) and sold for an incredible $432,500.

There was once a time when folks pondered whether or not open source would be a viable business model.

Today, that sounds comical: there are numerous open-source tech companies, some of which have surpassed $100 million (or even $1 billion) in annual revenue, including Red Hat, MongoDB, Cloudera, MuleSoft, HashiCorp, Databricks (Spark) and Confluent (Kafka).

Why do tech companies open source their products?

“Open-source is an enabler of innovation, giving organisations access to a global pool of talent and the tools to develop secure, reliable and scalable software – fast. The organisations that are most effectively speeding up business transformation are those who have turned to open-source software development to succeed in a fast-changing, digital world,” said Maneesh Sharma, General Manager of GitHub India, in an interview with Analytics India Magazine.

Databricks, the company behind the commercial development of Apache Spark, is placing its machine learning lifecycle project MLflow under the stewardship of the Linux Foundation.

MLflow provides a programmatic way to deal with all the pieces of a machine learning project through all its phases — construction, training, fine-tuning, deployment, management, and revision. It tracks and manages the datasets, model instances, model parameters, and algorithms used in machine learning projects, so they can be versioned, stored in a central repository, and easily repackaged for reuse by other data scientists.
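To make the tracking idea concrete, here is a minimal, dependency-free Python sketch of the run-tracking pattern described above. This is not MLflow's actual API (MLflow exposes functions such as `mlflow.start_run()`, `mlflow.log_param()`, and `mlflow.log_metric()`); the `RunTracker` class below is a hypothetical stand-in that just illustrates how parameters and metrics for each run can be recorded and repackaged from a central repository.

```python
import json
import time
import uuid

# Hypothetical stand-in for an MLflow-style tracking client; the class
# name and methods are illustrative, not part of any real library.
class RunTracker:
    def __init__(self):
        self.runs = {}  # central "repository" of all runs

    def start_run(self):
        # Each run gets a unique id so it can be versioned and looked up later.
        run_id = uuid.uuid4().hex
        self.runs[run_id] = {"params": {}, "metrics": {}, "start": time.time()}
        return run_id

    def log_param(self, run_id, key, value):
        # Parameters are fixed inputs to a run (e.g. hyperparameters).
        self.runs[run_id]["params"][key] = value

    def log_metric(self, run_id, key, value):
        # Metrics can be logged repeatedly, so keep a history per key.
        self.runs[run_id]["metrics"].setdefault(key, []).append(value)

    def export_params(self, run_id):
        # Repackage a run's parameters so another data scientist
        # can reproduce the experiment.
        return json.dumps(self.runs[run_id]["params"], sort_keys=True)

tracker = RunTracker()
run_id = tracker.start_run()
tracker.log_param(run_id, "learning_rate", 0.01)
tracker.log_metric(run_id, "accuracy", 0.91)
print(tracker.export_params(run_id))  # {"learning_rate": 0.01}
```

In MLflow itself, the same flow is expressed inside a `with mlflow.start_run():` block, with runs persisted to a tracking server or local directory rather than an in-memory dictionary.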