Winter is coming: an AI Winter, that is.

If you’re not familiar with the term “AI Winter,” it refers to a period when AI innovation stalled, due in large part to a lack of available processing power. That stall led to a lack of results, which in turn led to a lack of funding, which essentially froze work in the field.

AI Innovation & Computation

We like to think that we are immune to the limitations of hardware. After all, your phone has millions of times more processing power than the computer that guided Apollo 11 to the Moon. If your web server is under heavy load, you can simply pay for more cloud capacity and scale up. Access to computing power is generally not an issue.

However, at the cutting edge of research and development, we may be reaching the outer limits of what’s feasible, or even possible. Recently, MIT researchers sounded the alarm that “deep learning is approaching computational limits.”

Sound alarmist? Maybe even far-fetched?

Think again.

Consider this: the GPT-3 model has 175 billion parameters. Depending on whom you believe, it cost somewhere between $4 million and $12 million to train. Either way, that’s a lot of money.

However, it was money well spent. GPT-3 represents a milestone in the field of Natural Language Processing. Its immediate predecessor, GPT-2, had 1.5 billion parameters and could already predict and generate text with uncanny, human-like fluency, arguably enough to pass the Turing Test in a short exchange. Imagine how well GPT-3 would do.

This Has Happened Before and It Will Happen Again

The Turing Test, devised in 1950 by computing pioneer Alan Turing, was designed to assess a machine’s ability to demonstrate intelligence equivalent to, or indistinguishable from, human intelligence. In the test, a human judge evaluates natural language conversations between a human and a machine built to produce human-like responses.

It surprises many people to learn that AI has been around for decades; in fact, we are in the seventh decade of AI research. Since Alan Turing, there have been advances and, more importantly, stalls in the field, and they tend to come in cycles. The first slowdown came in the 1960s and 1970s, when, among other reasons, DARPA cut funding to research programs not tied directly to “mission-oriented direct research.” The 1980s saw the rise of “expert systems” and early work on neural networks, but the costs of such systems were prohibitive due to processing power constraints.

Sound familiar?

The longest AI Winter to date lasted from the late 1980s until the early 2010s, although one could argue that the early days of Big Data marked the start of the current cycle, which would push the end of that winter back to the mid-to-late 2000s.

While actual AI research and innovation had stalled, science fiction AI blossomed. The Terminator, Star Trek: The Next Generation’s Commander Data, and, of course, Star Wars’ R2-D2 and C-3PO were all pop culture mainstays.


My First Brush with AI

It was during this AI Winter that I had my first brush with AI. In the mid-1990s, as a computer science student, I had the chance to take a class on AI with a noted researcher in the field.

He touted the wonders of a programming language called Prolog. To hear him tell it, this was the programming language and paradigm of the future. It was inevitable.

After working through various class projects, I kept waiting for the “big reveal” of Prolog’s innate intelligence. It never came. In fact, the final project ended up being a case study in recursion, with a little bit of logical inference thrown in.
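If you’ve never seen Prolog, here is roughly what that kind of exercise looks like, sketched in Python rather than Prolog; the names and family “facts” are made up purely for illustration. A handful of hand-coded facts, one recursive rule, and the “inference” simply falls out of the recursion.

```python
# A Prolog-flavored exercise sketched in Python: a few facts plus one recursive rule.
# The family facts and names below are invented purely for illustration.
PARENT_OF = {
    ("alice", "bob"),    # alice is a parent of bob
    ("bob", "carol"),    # bob is a parent of carol
}

def is_ancestor(x, y):
    """x is an ancestor of y if x is a parent of y (base case), or if x is a
    parent of someone who is, in turn, an ancestor of y (recursive case)."""
    if (x, y) in PARENT_OF:
        return True
    return any(is_ancestor(child, y)
               for (parent, child) in PARENT_OF
               if parent == x)

print(is_ancestor("alice", "carol"))  # True, inferred through bob
print(is_ancestor("carol", "alice"))  # False
```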

SkyNet it was not.

I finished the class deflated and thought that AI was merely science fiction. To be completely honest, the experience made me skeptical whenever I heard about advancements in AI. After seeing an early demo of computer vision at the DC Tech Fair in 2015, I was not impressed. Instead, I suspected that there must be some underwhelming explanation behind it all.

I could not have been more wrong.

A Quantum Leap Forward

Since 2015, I have done a much better job of keeping an open mind.

In that light, when whispers of “slowing innovation” in AI research started, I didn’t dismiss them right away. As luck would have it, I was at a conference organized by Microsoft Research later that month and attended a talk that included a look at why the future is quantum.

The presentation opened my eyes to a new model of computing, one that could radically change the world we live in by solving problems that we simply cannot solve with current tools.

Needless to say, I was excited. I immediately recorded a podcast episode about what I had just seen. In it, you can hear the breathless excitement of someone who has just had an “aha” moment.

This excitement led me to explore quantum computing further, and, fortunately, there are already a number of SDKs on the market to try out. Unfortunately, after firing up my first Q# project, I was not exactly clear on what to do next.

A New Way of Thinking

Quantum computing adds new logic gates and new types of algorithms, ones that, as of now, require some familiarity with quantum physics. Imagine if you needed to know electrical engineering to write code. At one time, that was a prerequisite. We are just so far removed from the bits nowadays that we forget about all the underlying infrastructure.
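To make that a little more concrete, here is a minimal sketch, in plain Python and NumPy rather than any quantum SDK, of what one of those new gates, the Hadamard, does to a single qubit: it takes a qubit that will definitely read 0 and puts it into an equal superposition of 0 and 1.

```python
import numpy as np

# A single qubit is described by a two-component state vector of complex amplitudes.
# This one is |0>: it will definitely measure as 0.
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate, one of the new logic gates quantum computing introduces.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

# Applying a gate is a matrix-vector multiplication on the state vector.
state = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(state) ** 2
print(probabilities)  # [0.5 0.5] -- an equal chance of measuring 0 or 1
```

The simulation itself is ordinary linear algebra; the new way of thinking comes from reasoning in terms of amplitudes, superposition, and interference rather than plain bits, which is where the quantum physics background starts to matter.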

Yes, Quantum Computing is coming, and it will require new skills.

It will likely elevate physicists to rock star status, just as data science did for statisticians. It may even help us avoid another AI Winter and, more importantly, change the world as we know it.

What do you think?

Scientists might have reached the theoretical limit of how strong a lightweight carbon material can get, designing the first-ever super-light carbon nanostructure that’s stronger than diamond.

The latest development in the nanoworld of carbon comes from a team that has designed something called carbon plate-nanolattices. Under a scanning electron microscope, they look like little cubes. The math indicated that this structure would be incredibly strong, but it had been too difficult to actually make, until now.

The team’s success was made possible by a 3D printing process called two-photon polymerization direct laser writing, which is essentially 3D printing at the nanoscale.

Find out more about this technique, and what the result could mean for the future of medicine, electronics, aerospace, and more, in this episode of Elements.

This Seeker video explains.

Artificial intelligence (AI) can help us do things like find a specific photo in our photos app or translate signs into another language.

What if we applied the same technology to really big problems in areas like healthcare?

Google’s Dr. Lily Peng describes her journey from medicine to technology and outlines the potential of AI in healthcare, explaining how her team trained an AI algorithm to detect diabetic eye disease in medical images to help doctors in India prevent millions of people from going blind. Dr. Peng is a physician by training and now works with a team of doctors, scientists, and engineers at Google Health who use AI for medical imaging to increase the availability and accuracy of care. Some of her team’s recent work includes building models to detect diabetic eye disease, predict cardiovascular health factors, and identify breast and lung cancer.

Commercially viable quantum computing could be here sooner than you think, thanks to a new innovation that shrinks quantum tech down onto a chip: a cryochip.

Seeker explains:

It seems like quantum computers will likely be a big part of our computing future—but getting them to do anything super useful has been famously difficult. Lots of new technologies are aiming to get commercially viable quantum computing here just a little bit faster, including one innovation that shrinks quantum technology down onto a chip.

A quantum computer isn’t just a more powerful version of the computers we use today; it’s something else entirely, based on emerging scientific understanding — and more than a bit of uncertainty.

Enter the quantum wonderland with TED Fellow Shohini Ghose and learn how this technology holds the potential to transform medicine, create unbreakable encryption and even teleport information.

Can’t get enough? Here’s another video.

Lloyd Danzig, a leading expert in the field of Artificial Intelligence, explores the ethical issues of automation. Lloyd is the Chairman & Founder of the International Consortium for the Ethical Development of Artificial Intelligence, a non-profit dedicated to ensuring that rapid developments in AI are made with a keen eye toward the long-term interests of humanity.

He is a distinguished member of CompTIA’s AI Advisory Council, a group of 20 influential thought leaders who establish best practices to foster technological development while protecting consumers.

Jamie Paik explores more creative forms of robots.

From the video description:

Taking design cues from origami, robotician Jamie Paik and her team created “robogamis”: folding robots made out of super-thin materials that can reshape and transform themselves. In this talk and tech demo, Paik shows how robogamis could adapt to achieve a variety of tasks on Earth (or in space) and demonstrates how they roll, jump, catapult like a slingshot and even pulse like a beating heart.

Alison McCauley gives a very thought-provoking presentation at OSCON. Can blockchain really be this good for humanity?

In a world of increasingly complex challenges, the accelerated innovation of open source development is more urgent than ever. But nobody knows if it’s enough. Join Alison McCauley to learn how blockchain technology offers new tools that could help extend the ethos of open innovation into new areas.