Earth Day may have been a week ago, but the earth is important every day. Here’s an interesting look at how AI can help save the planet, in an article focusing on Intel’s efforts in this space.

Since 2017, Intel has been collaborating with Parley for the Oceans* and a team of marine biologists to collect the mucus exhaled by whales when they surface to breathe, then using AI technology to analyze indicators of each whale’s health in real time. By developing a greater understanding of what’s threatening their ecosystem, we can better protect whales. Learn more about the Parley* SnotBot and go behind the scenes with the team working on the project.

Over the last decade or so, open source has blossomed into a major movement and the backbone of the tech industry. For instance, check out this project that Uber, yes Uber, has open sourced.

Ludwig is a TensorFlow-based toolbox that allows you to train and test deep learning models without writing any code. Incubated at Uber for the last two years, Ludwig was open sourced this February so it can incorporate contributions from the data science community. With Ludwig, a data scientist can train a deep learning model simply by providing a CSV file containing the training data along with a YAML file that declares the model’s input and output features.
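
To make that concrete, here’s a minimal sketch of what a Ludwig training run can look like through its Python API rather than the CLI. The feature names and CSV path are placeholders, and argument names have shifted between Ludwig releases, so treat this as illustrative rather than canonical.

```python
# Minimal sketch of training a model with Ludwig's Python API.
# Feature names and the CSV path are placeholders; argument names
# (e.g., dataset= vs. data_csv=) differ between Ludwig releases.
from ludwig.api import LudwigModel

# Declarative config, equivalent to the YAML file described above:
# it just lists the model's input and output features.
config = {
    "input_features": [
        {"name": "review_text", "type": "text"},
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"},
    ],
}

model = LudwigModel(config)

# Train directly from a CSV whose column names match the feature names.
train_stats = model.train(dataset="reviews.csv")
```

From the command line, the same run is a single `ludwig train` invocation pointed at the CSV and a YAML version of this config.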

The term “Lambda Architecture” stands for a generic, scalable, and fault-tolerant data processing architecture. As hyperscale clouds now offer a variety of PaaS services for data ingestion, storage, and processing, the need is arising for a revised, cloud-native implementation of the Lambda Architecture. In this talk, Andrei Varanoch demonstrates the blueprint for such a Lambda Architecture implementation in Microsoft Azure, with Azure Databricks (a PaaS Spark offering) as a key component.
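
For readers unfamiliar with the pattern, here’s a heavily simplified PySpark sketch of the two computation paths the architecture combines: a batch layer that periodically recomputes complete views, and a speed layer that keeps low-latency incremental views. It is not taken from the talk, and the paths, schema, and column names are hypothetical.

```python
# Simplified sketch of the two Lambda Architecture paths in PySpark.
# Not from the talk; paths, schema, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, window

spark = SparkSession.builder.appName("lambda-sketch").getOrCreate()

# Batch layer: periodically recompute complete views over the master dataset.
events = spark.read.json("/mnt/lake/events/")  # hypothetical landing zone
batch_view = events.groupBy("user_id").agg(count("*").alias("event_count"))
batch_view.write.mode("overwrite").parquet("/mnt/lake/views/batch/")

# Speed layer: maintain incremental views over recent data with Structured Streaming.
recent = (
    spark.readStream
    .schema(events.schema)  # reuse the schema inferred by the batch read
    .json("/mnt/lake/incoming/")
    .withColumn("event_time", col("event_time").cast("timestamp"))
)
speed_view = (
    recent
    .withWatermark("event_time", "10 minutes")
    .groupBy(window("event_time", "5 minutes"), "user_id")
    .agg(count("*").alias("event_count"))
)
(
    speed_view.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "/mnt/lake/views/speed/")
    .option("checkpointLocation", "/mnt/lake/checkpoints/speed/")
    .start()
)
# A serving layer (e.g., a SQL endpoint) merges the batch and speed views at query time.
```

In an Azure implementation like the one the talk covers, both layers would run as jobs on Azure Databricks, with the storage paths pointing at a cloud data lake rather than the local-style mounts sketched here.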