Commercially viable quantum computing could be here sooner than you think, thanks to a new innovation that shrinks quantum tech down onto a chip: a cryochip.

Seeker explains:

It seems like quantum computers will likely be a big part of our computing future—but getting them to do anything super useful has been famously difficult. Lots of new technologies are aiming to get commercially viable quantum computing here just a little bit faster, including one innovation that shrinks quantum technology down onto a chip.

Generally speaking, the more data you have, the better your machine learning model is going to be.

However, stockpiling vast amounts of data also carries certain privacy, security, and regulatory risks.

With new privacy-preserving techniques, data scientists can move forward with their AI projects without putting privacy at risk.

To get the lowdown on privacy-preserving machine learning (PPML), we talked to Intel’s Casimir Wierzynski, a senior director in the office of the CTO in the company’s AI Platforms Group. Wierzynski leads Intel’s research efforts to “identify, synthesize, and incubate” emerging technologies for AI.
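For a flavor of what a PPML technique looks like in practice, here’s a minimal sketch of differential privacy, one widely used approach: calibrated noise is added to an aggregate statistic so that no individual record can be inferred from the output. This is a textbook Laplace-mechanism example, not Intel code; the clipping bounds and epsilon below are illustrative.

```python
import numpy as np

def private_mean(values, lower, upper, epsilon=1.0):
    """Differentially private mean via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds how much any one
    record can move the mean; that bound is the query's sensitivity.
    """
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)  # sensitivity of the mean
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Example: publish an average salary without exposing any single record.
salaries = np.array([52_000, 61_000, 58_500, 75_000, 49_000])
print(private_mean(salaries, lower=20_000, upper=200_000, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; the art is balancing that against the accuracy the analysis actually needs.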

The need for on-device data analysis arises in cases where decisions based on data processing have to be made immediately.

For example, there may not be sufficient time to transfer data to back-end servers, or there may be no connectivity at all.

Here’s a look at a few scenarios where this sort of localized compute will matter most.

Analyzing large amounts of data with complex machine learning algorithms requires significant computational capability, so much of that processing takes place in on-premises data centers or cloud-based infrastructure. With the arrival of powerful, energy-efficient Internet of Things devices, however, computations can now be executed on edge devices such as robots themselves. This has given rise to the era of deploying advanced machine learning methods, such as convolutional neural networks (CNNs), at the edges of the network for “edge-based” ML.
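To make “edge-based” ML concrete, here’s a minimal sketch of the kind of small CNN that could run inference on a constrained device. PyTorch and the tiny architecture are assumptions for illustration; real edge deployments would typically also quantize or prune the model.

```python
import torch
import torch.nn as nn

class TinyEdgeCNN(nn.Module):
    """A deliberately small CNN (~25K parameters) sized for edge inference."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyEdgeCNN().eval()
with torch.no_grad():                          # inference only, no autograd
    logits = model(torch.randn(1, 3, 32, 32))  # one 32x32 RGB frame
print(logits.shape)  # torch.Size([1, 10])
```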

Lex Fridman interviews Jim Keller as part of his AI Podcast series.

Jim Keller is a legendary microprocessor engineer who has worked at AMD, Apple, Tesla, and now Intel. He’s known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and for co-authoring the specifications for the x86-64 instruction set and the HyperTransport interconnect.

OUTLINE:
0:00 – Introduction
2:12 – Difference between a computer and a human brain
3:43 – Computer abstraction layers and parallelism
17:53 – If you run a program multiple times, do you always get the same answer?
20:43 – Building computers and teams of people
22:41 – Start from scratch every 5 years
30:05 – Moore’s law is not dead
55:47 – Is superintelligence the next layer of abstraction?
1:00:02 – Is the universe a computer?
1:03:00 – Ray Kurzweil and exponential improvement in technology
1:04:33 – Elon Musk and Tesla Autopilot
1:20:51 – Lessons from working with Elon Musk
1:28:33 – Existential threats from AI
1:32:38 – Happiness and the meaning of life

While the quantum computing age may be “just around the corner,” traditional computing is not going anywhere anytime soon.

In fact, innovation there is accelerating to keep the promise of Moore’s Law alive.

Engadget takes a look at the process behind making microchips faster.

Microchips are among the most complicated objects humanity has created, packing billions of transistors into a chip only a few centimeters across. These transistors keep getting smaller and more efficient, and the current process for making chips is already astounding, requiring dozens of steps, fantastically complicated machines, and atomic-scale precision. But the current state of the art has reached its physical limits: the structures on a chip are now smaller than the wavelength of light used to make them, and any further progress will require a big change.

That change is EUV, a radically new way of making chips that uses super high energy UV light created from a complex process involving plasma and lasers. EUV will enable our devices to keep getting smaller, faster, and more efficient, but where the current process to make chips already feels like sci-fi technology, EUV feels like magic.
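For a back-of-the-envelope sense of the wavelength problem: conventional deep-UV lithography uses 193 nm light, while EUV drops to 13.5 nm. Those wavelengths are well-documented figures; the feature size below is an illustrative assumption, not a specific process node.

```python
# Rough scale comparison for lithography light sources.
duv_wavelength_nm = 193.0   # deep-UV (ArF excimer laser) lithography
euv_wavelength_nm = 13.5    # extreme-UV lithography
feature_size_nm = 40.0      # illustrative: e.g., a metal pitch on a modern chip

print(f"DUV light is {duv_wavelength_nm / feature_size_nm:.1f}x wider "
      f"than a {feature_size_nm:.0f} nm feature")
print(f"The same feature is {feature_size_nm / euv_wavelength_nm:.1f}x wider "
      f"than the EUV wavelength")
```

That gap is why DUV processes lean on multi-patterning tricks, and why EUV, for all its plasma-and-laser complexity, simplifies the picture.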

Here’s an interesting look at what the next decade holds for AI and why hardware is going to be a big part of it.

“What we see happening in the transition to now and toward 2020 is what I call the coming of age of deep learning,” Singer, pictured below with an NNP-I chip, tells The Next Platform. “This is where the capabilities have been better understood, where many companies are starting to understand how this might be applicable to their particular line of business. There’s a whole new generation of data scientists and other professionals who understand the field, there’s an environment for developing new algorithms and new topologies for the deep learning frameworks. All those frameworks like TensorFlow and MXNet were not really in existence in 2015. It was all hand-tooled and so on. Now there are environments, there is a large cadre of people who are trained on that, there’s a better understanding of the mapping, there’s a better understanding of the data because it all depends on who is using the data and how to use the data.”

While this is technically a press release, there could be something to DarwinAI if it really can increase neural network performance by more than 1,600%. We’ll have to keep an eye on this technology. 😉

“The complexity of deep neural networks makes them a challenge to build, run and use, especially in edge-based scenarios such as autonomous vehicles and mobile devices where power and computational resources are limited,” said Sheldon Fernandez, CEO of DarwinAI. “Our Generative Synthesis platform is a key technology in enabling AI at the edge – a fact bolstered and validated by Intel’s solution brief.”

Earth Day may have been a week ago, but the earth is important every day. Here’s an interesting look at how AI can help save the planet, in an article focusing on Intel’s efforts in this space.

Since 2017, Intel has been collaborating with Parley for the Oceans* and a team of marine biologists to collect the mucus exhaled by whales when they surface to breathe, then using AI technology to analyze indicators of the whale’s health in real time. By developing a greater understanding of what’s threatening their ecosystem, we can better protect whales. Learn more about the Parley* SnotBot and see behind the scenes with the team working on the project.