Rendering is a complex process whose differentiation cannot be uniquely defined, which prevents its straightforward integration into neural networks.

Differentiable rendering (DR) constitutes a family of techniques that tackle such integration for end-to-end optimization by obtaining useful gradients of the rendering process.

Nvidia and Aalto University introduce modular primitives that provide high-performance operations for rasterization-based differentiable rendering. The proposed primitives use highly optimized hardware graphics pipelines to deliver better performance than previous differentiable rendering systems.
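Here's a minimal sketch of what a differentiable rasterization pass looks like with the nvdiffrast PyTorch bindings released alongside this work; the quad geometry and image size are illustrative, not a complete renderer.

```python
# A minimal sketch of a differentiable rasterization pass with the
# nvdiffrast PyTorch bindings. The quad geometry and resolution are
# illustrative; requires a CUDA-capable GPU.
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeGLContext()  # hardware OpenGL rasterizer context

# One quad: clip-space vertex positions [batch, verts, xyzw] and triangles.
pos = torch.tensor([[[-0.8, -0.8, 0.0, 1.0], [ 0.8, -0.8, 0.0, 1.0],
                     [ 0.8,  0.8, 0.0, 1.0], [-0.8,  0.8, 0.0, 1.0]]],
                   device="cuda", requires_grad=True)
tri = torch.tensor([[0, 1, 2], [0, 2, 3]], dtype=torch.int32, device="cuda")
col = torch.rand(1, 4, 3, device="cuda", requires_grad=True)  # per-vertex RGB

# Rasterize, interpolate vertex attributes, then antialias so gradients
# also flow through visibility discontinuities (silhouette edges).
rast, _ = dr.rasterize(glctx, pos, tri, resolution=[256, 256])
color, _ = dr.interpolate(col, rast, tri)
color = dr.antialias(color, rast, pos, tri)

# Any scalar loss on the rendered image back-propagates to vertices and colors.
color.mean().backward()
```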

Despite advances in utility (cloud) computing, could we still run out of compute power when it comes to AI innovation?

Could this throw us into another AI Winter?

Modern computers are out of their depth when it comes to deep learning and AI, according to recent research from MIT shared on a pre-print website. Modern computers can’t handle perpetual AI scaling; in essence, we’ve exhausted their computing potential, and researchers say we’ll soon […]

This is a fascinating development. We’re going to need real innovation in hardware (software, too), especially as Moore’s Law starts to run out of steam.

Computer scientists from Rice University, along with collaborators from Intel, have developed a more cost-efficient alternative to GPUs.

The new algorithm is called “sub-linear deep learning engine” (SLIDE), and it uses general-purpose central processing units (CPUs) without specialized acceleration hardware.

One of the biggest challenges in artificial intelligence (AI) is its reliance on specialized acceleration hardware such as graphics processing units (GPUs). Before this development, it was widely believed that speeding up deep learning required such specialized hardware.
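The core trick in SLIDE is using locality-sensitive hashing to select, per input, a small set of neurons likely to have large activations, instead of computing them all. Here's a minimal NumPy sketch of that idea using a single SimHash table; the real engine uses multiple tables and re-hashes them during training.

```python
# A minimal sketch of LSH-based neuron sampling, the idea behind SLIDE.
# One SimHash table for illustration; not the paper's C++ implementation.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d, n_neurons, n_bits = 128, 10_000, 8

W = rng.standard_normal((n_neurons, d))    # one weight vector per neuron
planes = rng.standard_normal((n_bits, d))  # random hyperplanes for SimHash

def simhash(v):
    """Bucket a vector by the sign pattern of its random projections."""
    return ((planes @ v) > 0).tobytes()

# Index every neuron's weight vector once, up front.
table = defaultdict(list)
for i in range(n_neurons):
    table[simhash(W[i])].append(i)

# At inference, hash the input and touch only the colliding neurons --
# the ones most likely to have large inner products with the input.
x = rng.standard_normal(d)
active = table[simhash(x)]
activations = W[active] @ x
print(f"computed {len(activations)} of {n_neurons} activations")
```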

Here’s a great guide on how to turn your sweet PC gaming rig into a lean, mean machine learning machine.

Installation for Anaconda3 is straightforward. Just follow the prompts in the visual installer. Note that if you install for all users, you’ll have to get in the habit of running some Anaconda-related tools as administrator for permission purposes.
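Once the environment is set up, a quick sanity check confirms your framework can actually see the GPU. This assumes you picked PyTorch with CUDA; substitute whichever framework you installed.

```python
# A quick post-install sanity check -- assuming PyTorch with CUDA as
# the framework; swap in TensorFlow or another framework if you prefer.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```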

In case you thought that the AI market was cooling off, have a look at this. (Emphasis added)

Based on the component, the market has been divided into hardware, software, and services. The software segment accounted for a significant share of the market in 2018 due to the high adoption of cloud-based software. This can be attributed to improved cloud infrastructure and hosting parameters.

However, the hardware segment is anticipated to observe the fastest growth rate during the forecast period. This is due to the growing demand for hardware optimized for machine learning, an increasing number of hardware providers, and technological development such as customized silicon chips with machine learning and artificial intelligence capabilities.

Yes, hardware is going to become more and more important to AI.

Hardware is getting interesting again.

Here’s an interesting paper published in Nature about Neuromorphic Computing.

Abstract below:

Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign.
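For a concrete sense of the "spiking" computation the abstract describes, here's a minimal leaky integrate-and-fire neuron in Python; all constants are illustrative rather than tuned to any particular neuromorphic platform.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the basic unit of the
# spiking frameworks the abstract refers to. All constants are illustrative.
import numpy as np

dt, tau = 1.0, 20.0            # time step and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0   # spike threshold and post-spike reset
T = 200
current = 0.06 * np.ones(T)    # constant input drive

v, spikes = 0.0, []
for t in range(T):
    # Membrane potential leaks toward rest while integrating input.
    v += (dt / tau) * (-v + tau * current[t])
    if v >= v_thresh:          # threshold crossing emits a spike event
        spikes.append(t)
        v = v_reset
print("spike times (ms):", spikes)
```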

Here’s an interesting look at what the next decade holds for AI and why hardware is going to be a big part of it.

“What we see happening in the transition to now and toward 2020 is what I call the coming of age of deep learning,” Singer, pictured below with an NNP-I chip, tells The Next Platform. “This is where the capabilities have been better understood, where many companies are starting to understand how this might be applicable to their particular line of business. There’s a whole new generation of data scientists and other professionals who understand the field, there’s an environment for developing new algorithms and new topologies for the deep learning frameworks. All those frameworks like TensorFlow and MXNet were not really in existence in 2015. It was all hand-tooled and so on. Now there are environments, there is a large cadre of people who are trained on that, there’s a better understanding of the mapping, there’s a better understanding of the data because it all depends on who is using the data and how to use the data.”

Tony Gambacorta shows you how to explore your hardware with a serial port.

If you’re interested in hardware but haven’t had a chance to play with any yet, this one’s for you. In this “hello world”-level reversing project we’re checking out a UART (serial port) and using it to access a shell on a *very* soft target. If you decide to try it on your own you’ll find an equipment list, walkthrough references, and some troubleshooting ideas at the link below.
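As a taste of the software side, here's a minimal sketch of poking a UART from Python with the pyserial package; the device path and settings are assumptions (115200 8N1 is a common default), so check the walkthrough for your particular target.

```python
# A minimal sketch of reading from a UART with pyserial
# (pip install pyserial). The device path and baud rate are assumptions --
# 115200 8N1 is a common default, but verify against your target.
import serial

ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)
ser.write(b"\n")                              # nudge the console for a prompt
print(ser.read(256).decode(errors="replace"))
ser.close()
```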