Winter is coming — an AI Winter that is.

If you’re not familiar with the term “AI Winter,” it refers to a period when AI innovation stalled, due in large part to a lack of available processing power. Stalled progress led to a loss of funding, which essentially froze work in the field.

AI Innovation & Computation

We like to think that we are immune to the limitations of hardware. After all, your phone has orders of magnitude more computing power than NASA had at its disposal during the Apollo 11 program. If your web server is under heavy load, you just pay for more cloud capacity and scale up. Access to computing power is generally not an issue.

However, at the cutting edge of research and development, we may be reaching the outer limits of what’s feasible, or even possible. Recently, MIT researchers sounded the alarm that “deep learning is approaching computational limits.”

Sound alarmist? Maybe even far-fetched?

Think again.

Consider this: the GPT-3 model has 175 billion parameters. Depending on whom you believe, it cost somewhere between $4 million and $12 million to train. Either way, that’s a lot of money.

However, it was money well spent. GPT-3 represents a milestone in the field of Natural Language Processing. Its immediate predecessor, GPT-2, could predict and generate text with uncanny human-like ability. It had 1.5 billion parameters and would likely pass the Turing Test. Imagine how well GPT-3 would do.

This Has Happened Before and It Will Happen Again

The Turing Test, proposed in 1950 by computing pioneer Alan Turing, was designed to measure a machine’s ability to demonstrate intelligence equivalent to, or indistinguishable from, human intelligence. In the test, a human evaluator judges natural language conversations between another human and a machine attempting to produce human-like responses.

It surprises many people to learn that AI has been around for decades. In fact, we are in the seventh decade of AI research. Since Alan Turing, there have been advances and, more importantly, stalls in AI research. They tend to come in cycles. The first came in the 1960s and 1970s, when, among other reasons, DARPA cut funding to research programs not tied directly to “mission-oriented direct research.” The 1980s saw the rise of “expert systems” and early work on neural networks. However, the costs of such systems were prohibitive due to processing power constraints.

Sound familiar?

The longest AI Winter to date lasted from the late 1980s until the early 2010s, although one could argue that the early days of Big Data marked the turn of this cycle, which would push the start of the current hype cycle back to the mid-to-late 2000s.

While actual AI research and innovation stalled, science fiction AI blossomed. The Terminator, Star Trek: The Next Generation’s Commander Data, and, of course, Star Wars’ R2-D2 and C-3PO were all pop culture mainstays.

[Image: a Prolog textbook]

My First Brush with AI

It was during this AI Winter that I had my first brush with AI. In the mid-’90s, as a computer science student, I had the chance to take a class on AI with a noted researcher in the field.

He touted the wonders of a programming language called Prolog. To hear him tell it, this was the programming language and paradigm of the future. It was inevitable.

After working through various class projects, I kept waiting for the “big reveal” of Prolog’s innate intelligence. It never came. In fact, the final project ended up being a case study in recursion, with a little bit of logical inference thrown in.

SkyNet it was not.

I finished the class deflated and thought that AI was merely science fiction. To be completely honest, the experience made me skeptical whenever I heard about advancements in AI. After seeing an early demo of computer vision at the DC Tech Fair in 2015, I was not impressed. Instead, I suspected that there must be some underwhelming explanation behind it all.

I could not have been more wrong.

A Quantum Leap Forward

Since 2015, I have done a much better job of keeping an open mind.

In that light, when whispers of “slowing innovation” in AI research started, I didn’t dismiss them right away. As luck would have it, later that month I was at a conference organized by Microsoft Research and attended a talk that included a look at why the future is quantum.

The presentation opened my eyes to a new model of computing, one that could radically change the world we live in by solving problems that we simply cannot solve with current tools.

Needless to say, I was excited. I immediately recorded a podcast episode about what I had just seen. In it, you can hear the breathless excitement of someone who has just had an “aha” moment.

This excitement led me to explore quantum computing further and, fortunately, there are already a number of SDKs on the market to try out. Unfortunately, after firing up my first Q# project, I was not exactly clear on what to do next.
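If you are curious what “firing up a Q# project” even looks like, here is a minimal sketch, assuming a recent version of the Microsoft Quantum Development Kit; the namespace and operation names are placeholders of my own choosing, not part of any official template.

```qsharp
namespace FirstSteps {
    // Message comes from the QDK's intrinsic namespace.
    open Microsoft.Quantum.Intrinsic;

    // The @EntryPoint() attribute marks the operation the runtime invokes,
    // much like Main() in a classical .NET program.
    @EntryPoint()
    operation SayHello() : Unit {
        Message("Hello from my first Q# project!");
    }
}
```

It builds and runs, but it is purely classical; the interesting part starts once you allocate qubits and apply quantum gates, which is exactly where the new way of thinking comes in.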

A New Way of Thinking

Quantum computing adds new logic gates and new types of algorithms, algorithms that, as of now, require some familiarity with quantum physics. Imagine if you needed to know electrical engineering to write code. At one time, that was a prerequisite. We are just so far removed from the bits nowadays that we forget about all the underlying infrastructure.
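As a small illustration of one of those new gates, here is a hedged sketch in Q# of a “quantum coin flip” using the Hadamard gate; it again assumes a recent version of the Microsoft Quantum Development Kit, and the operation name is hypothetical.

```qsharp
namespace NewGates {
    open Microsoft.Quantum.Intrinsic;      // H and the other intrinsic gates
    open Microsoft.Quantum.Measurement;    // MResetZ

    // No classical logic gate behaves like this: the Hadamard gate puts a
    // qubit into an equal superposition of 0 and 1.
    @EntryPoint()
    operation FlipQuantumCoin() : Result {
        use q = Qubit();    // the qubit starts in the |0⟩ state
        H(q);               // apply the Hadamard gate
        return MResetZ(q);  // measurement collapses it to Zero or One, roughly 50/50
    }
}
```

Run it a few hundred times and the results land on Zero and One in roughly equal measure, and understanding why requires exactly the kind of physics most developers have never had to think about.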

Yes, Quantum Computing is coming, and it will require new skills.

It will likely elevate physicists to rock star status, just as data science did for statisticians. It may even help us avoid another AI Winter and, more importantly, change the world as we know it.

What do you think?