Whether you realize it or not, lambda calculus has already impacted your world as a data scientist or a developer. If you’ve played around in functional programming languages like Haskell or F#, then you are familiar with some of the same ideas. In fact, AWS’s serverless product is named Lambda after this branch of mathematics.

Watch this video to learn about lambda calculus.
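If you want a hands-on feel for the idea before (or after) watching, here’s a minimal sketch of the core trick: building numbers and arithmetic out of nothing but anonymous functions (Church numerals). The Python rendering is my own illustration, not something taken from the video.

```python
# Lambda calculus in miniature: everything is an anonymous function.
# Church numerals encode a number n as "apply a function f, n times."

ZERO = lambda f: lambda x: x                        # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))     # apply f one more time
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert a Church numeral back to a plain int so we can inspect it.
to_int = lambda n: n(lambda k: k + 1)(0)

ONE = SUCC(ZERO)
TWO = SUCC(ONE)

print(to_int(ADD(TWO)(ONE)))  # 3
```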

Here’s an interesting article on “oscillatory neural networks” and how physicists trained one to perform image recognition.

An oscillatory neural network is a complex interlacing of interacting elements (oscillators) that are able to receive and transmit oscillations of a certain frequency. Receiving signals of various frequencies from preceding elements, the artificial neuron oscillator can synchronize its rhythm with these fluctuations. As a result, […]
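The article is about physical, hardware-level oscillators, but the synchronization idea itself is easy to see in code. Below is a toy, Kuramoto-style sketch in which each oscillator nudges its rhythm toward the group average; it’s an illustration of the general concept of rhythm synchronization, not the physicists’ actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                                    # number of oscillators
freqs = rng.normal(1.0, 0.1, n)          # each oscillator's natural frequency (rad/step)
phases = rng.uniform(0, 2 * np.pi, n)    # random starting phases
coupling = 0.5                           # how strongly oscillators pull on each other
dt = 0.05

def coherence(ph):
    """1.0 means fully synchronized phases, near 0 means incoherent."""
    return abs(np.mean(np.exp(1j * ph)))

print(f"coherence before coupling: {coherence(phases):.2f}")

for step in range(2000):
    # Each oscillator drifts at its own frequency, plus a pull toward the
    # average phase of the group (mean-field Kuramoto coupling).
    pull = np.mean(np.sin(phases[None, :] - phases[:, None]), axis=1)
    phases += dt * (freqs + coupling * pull)

print(f"coherence after coupling:  {coherence(phases):.2f}")  # approaches 1.0
```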

Here’s another article on the advances made in neural network design.

A new area in artificial intelligence involves using algorithms to automatically design machine-learning systems known as neural networks, which are more accurate and efficient than those developed by human engineers. But this so-called neural architecture search (NAS) technique is computationally expensive. A state-of-the-art NAS algorithm recently developed by Google […]
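To make the idea concrete, here’s a heavily simplified sketch of the core loop of a neural architecture search: sample candidate architectures from a search space, score each one, keep the best. Real NAS systems (including Google’s) use far more sophisticated search strategies and proxies for the expensive training step; the search space and evaluation function below are stand-ins I made up for illustration.

```python
import random

# A toy search space: number of layers, units per layer, activation.
SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "units": [32, 64, 128, 256],
    "activation": ["relu", "tanh", "gelu"],
}

def sample_architecture():
    return {key: random.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for the expensive part: training the candidate network and
    measuring validation accuracy. Here it's just a made-up score."""
    score = 0.5 + 0.01 * arch["layers"] + 0.0005 * arch["units"]
    return min(score + random.uniform(-0.02, 0.02), 1.0)

best_arch, best_score = None, float("-inf")
for trial in range(50):              # random search: the simplest NAS strategy
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print(best_arch, round(best_score, 3))
```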

Soon, AI will make artists of us all — no matter how well (or how poorly) you can draw.

Check out this article on GauGAN.

The neural network isn’t simply replacing doodles and shapes with photorealistic images of rocks, mountains, skies, or water. In addition to taking into account the original shape of the drawing, GauGAN also takes into account other objects in the scene. Turn a patch of grass into a pond and it will create reflections on the surface based on what’s surrounding the new body of water.
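The interface is the easy part to picture: the “doodle” is just a grid of class labels (a semantic segmentation map), and the generator turns that grid into an RGB image while looking at neighboring labels for context. The untrained toy below only illustrates that input/output shape; it is not GauGAN’s actual SPADE architecture or NVIDIA’s code, and the class labels are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4  # e.g. 0=sky, 1=grass, 2=water, 3=rock (made-up labels)

# A doodle is just a grid of class labels (a semantic segmentation map).
doodle = torch.zeros(64, 64, dtype=torch.long)
doodle[32:, :] = 1           # bottom half: grass
doodle[40:56, 20:44] = 2     # a pond in the middle of the grass

# One-hot the label map so each class gets its own input channel.
label_map = F.one_hot(doodle, NUM_CLASSES).permute(2, 0, 1).float().unsqueeze(0)

# A deliberately tiny "generator": a few convolutions whose receptive field
# covers neighboring pixels, which is what lets surrounding context (the
# grass around the pond) influence the rendered output.
generator = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=5, padding=2),  # 3 output channels: RGB
    nn.Tanh(),
)

fake_image = generator(label_map)
print(fake_image.shape)  # torch.Size([1, 3, 64, 64])
```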

Last week, Google’s AI research division open-sourced GPipe, a library for “efficiently” training deep neural networks under Lingvo, a TensorFlow framework for sequence modeling.

Most of GPipe’s performance gains come from better memory allocation for AI models. On second-generation Google Cloud tensor processing units (TPUs), each of which contains eight processor cores and 64 GB memory (8 GB per core), GPipe reduced intermediate memory usage from 6.26 GB to 3.46 GB, enabling 318 million parameters on a single accelerator core. Without GPipe, Huang says, a single core can only train up to 82 million model parameters.
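The memory story is easier to appreciate with a toy example. One of the tricks GPipe leans on is splitting a mini-batch into micro-batches, so only a small slice of the intermediate activations is alive at any moment (the library also pipelines micro-batches across accelerators and re-materializes activations for the backward pass, which this sketch does not attempt). The two-layer “model” and all the sizes below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(1024, 4096))        # layer 1 weights
W2 = rng.normal(size=(4096, 10))          # layer 2 weights

def forward(x):
    hidden = np.maximum(x @ W1, 0.0)      # the large intermediate activation
    return hidden @ W2, hidden

batch = rng.normal(size=(512, 1024))

# Naive pass: the entire (512, 4096) intermediate is alive at once.
_, full_hidden = forward(batch)
print(f"full-batch intermediate:     {full_hidden.nbytes / 1e6:.1f} MB")

# Micro-batch pass: only a (64, 4096) slice of activations is alive at any
# moment, roughly an 8x reduction in peak intermediate memory.
MICRO = 64
outputs = []
for start in range(0, len(batch), MICRO):
    out, micro_hidden = forward(batch[start:start + MICRO])
    outputs.append(out)                   # keep the small outputs, drop the big hidden

print(f"per-micro-batch intermediate: {micro_hidden.nbytes / 1e6:.1f} MB")
```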

Generally speaking, neural networks are somewhat of a mystery. While you can understand the mechanics and the math that power them, exactly how a network arrives at its conclusions is a bit of a black box.

Here’s an interesting story on how researchers are trying to peer into the mysteries of a neural net.

Using an “activation atlas,” researchers can plumb the hidden depths of a neural network and study how it learns visual concepts. Shan Carter, a researcher at Google Brain, recently visited his daughter’s second-grade class with an unusual payload: an array of psychedelic pictures filled with indistinct shapes and warped […]
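An activation atlas as described in the piece combines enormous numbers of hidden-layer activations with feature visualization. A far more modest version of the same instinct, peeking at what a layer “sees” by projecting its activations into two dimensions, is easy to sketch. The tiny random network and the made-up input groups below are placeholders, not anything from the research.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder "trained network": one hidden layer with random weights.
W = rng.normal(size=(20, 64))

def hidden_activations(x):
    return np.maximum(x @ W, 0.0)         # ReLU hidden layer

# Two made-up groups of inputs standing in for two visual concepts.
group_a = rng.normal(loc=0.0, size=(200, 20))
group_b = rng.normal(loc=2.0, size=(200, 20))
inputs = np.vstack([group_a, group_b])

# Collect the hidden activations and flatten them into a 2-D map:
# a (very) poor man's activation atlas.
acts = hidden_activations(inputs)
coords = PCA(n_components=2).fit_transform(acts)

print(coords.shape)                       # (400, 2): one point per input
print(coords[:200].mean(axis=0))          # the two concepts land in
print(coords[200:].mean(axis=0))          # different regions of the map
```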