In case you thought that the AI market was cooling off, have a look at this. (Emphasis added)

Based on the component, the market has been divided into hardware, software, and services. The software segment accounted for a significant share of the market in 2018 due to the high adoption of cloud-based software. This can be attributed to improved cloud infrastructure and hosting capabilities.

However, the hardware segment is anticipated to see the fastest growth rate during the forecast period. This is due to the growing demand for hardware optimized for machine learning, an increasing number of hardware providers, and technological developments such as customized silicon chips with machine learning and artificial intelligence capabilities.

Yes, hardware is going to become more and more important to AI.

The edge is all about putting intelligence closer to the source of input, but these intelligent algorithms must be squeezed into ever tinier form factors.

InformationWeek has an interesting article on the challenges facing edge computing.

Developers of artificial intelligence (AI) applications must make sure that each new machine learning (ML) model they build is optimized for fast inferencing on one or more target platforms. Increasingly, these target environments are edge devices such as smartphones, smart cameras, drones, and embedded appliances, many of which have severely constrained processing, memory, storage, and other local hardware resources.
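One of the most common ways to squeeze a model onto a resource-constrained edge device is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, cutting memory by roughly 4x. Here's a minimal, pure-Python sketch of the idea (real toolchains such as TensorFlow Lite, PyTorch, and ONNX Runtime do this per tensor or per channel with far more care; the weight values below are made up for illustration):

```python
# Minimal sketch of affine 8-bit quantization, one common technique for
# shrinking models for edge deployment. Pure Python, illustration only.

def quantize(weights, num_bits=8):
    """Map floats onto signed num_bits integers; return ints plus scale/zero-point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin)          # assumes hi > lo
    zero_point = round(qmin - lo / scale)      # integer that represents 0.0
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the integer representation."""
    return [(qi - zero_point) * scale for qi in q]

# Hypothetical weight values just to show the round trip.
weights = [-1.2, -0.4, 0.0, 0.3, 0.9, 2.1]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(q)        # int8 values: 1 byte per weight instead of 4
print(max_err)  # reconstruction error is bounded by roughly the scale
```

The trade-off is exactly the one the article describes: a small, bounded loss of precision in exchange for a model that fits the device's memory and often runs faster on integer-only hardware.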

Lex Fridman interviews Jim Keller as part of his AI Podcast series.

Jim Keller is a legendary microprocessor engineer, having worked at AMD, Apple, Tesla, and now Intel. He's known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and for co-authoring the specifications for the x86-64 instruction set and the HyperTransport interconnect. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:12 – Difference between a computer and a human brain
3:43 – Computer abstraction layers and parallelism
17:53 – If you run a program multiple times, do you always get the same answer?
20:43 – Building computers and teams of people
22:41 – Start from scratch every 5 years
30:05 – Moore’s law is not dead
55:47 – Is superintelligence the next layer of abstraction?
1:00:02 – Is the universe a computer?
1:03:00 – Ray Kurzweil and exponential improvement in technology
1:04:33 – Elon Musk and Tesla Autopilot
1:20:51 – Lessons from working with Elon Musk
1:28:33 – Existential threats from AI
1:32:38 – Happiness and the meaning of life

Over the next decade, data analytics and AI will augment workers’ efficiency, as companies rely on leading tech to beat out competitors, according to Gartner’s Hype Cycle 2019.

There's a huge gap, however, in the skills needed to survive and thrive in an AI-powered workplace and economy.

Are you ready? Is your company ready?

However, given the speed of change, most organizations aren't ready for AI. According to the 2019 Deloitte Global Human Capital Trends report, 65% of leaders cited AI and robotics as an important or very important issue in human capital, yet only 26% of surveyed organizations are ready or very ready to address the impact of these technologies.

Lex Fridman interviews David Chalmers in this thought provoking interview on consciousness.

David Chalmers is a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness. He is perhaps best known for formulating the hard problem of consciousness which could be stated as “why does the feeling which accompanies awareness of sensory information exist at all?” This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:23 – Nature of reality: Are we living in a simulation?
19:19 – Consciousness in virtual reality
27:46 – Music-color synesthesia
31:40 – What is consciousness?
51:25 – Consciousness and the meaning of life
57:33 – Philosophical zombies
1:01:38 – Creating the illusion of consciousness
1:07:03 – Conversation with a clone
1:11:35 – Free will
1:16:35 – Meta-problem of consciousness
1:18:40 – Is reality an illusion?
1:20:53 – Descartes’ evil demon
1:23:20 – Does AGI need consciousness?
1:33:47 – Exciting future
1:35:32 – Immortality

Siraj Raval explores why a computer algorithm classifies an image the way it does. This question is critical when AI is applied to diagnostics, driving, or any other form of critical decision making.

In this video, he raises awareness around one technique in particular called "Grad-CAM," or Gradient-weighted Class Activation Mapping.
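The core of Grad-CAM is simple: weight each feature map from the last convolutional layer by the average gradient of the class score with respect to that map, sum the weighted maps, and apply a ReLU so only regions that push the score *up* remain. Here's a toy sketch of just that arithmetic; in a real setting the activations and gradients come from a CNN via backprop, whereas the 2x2 maps below are hypothetical values:

```python
# Toy sketch of the core Grad-CAM computation on made-up 2x2 feature maps.
# Real usage: activations = last conv layer output, gradients = d(score)/d(activation).

def grad_cam(activations, gradients):
    """activations, gradients: lists of HxW feature maps (nested lists), one per channel."""
    heatmap = [[0.0] * len(activations[0][0]) for _ in activations[0]]
    for A, G in zip(activations, gradients):
        # alpha_k: global-average-pool the gradients for channel k
        alpha = sum(sum(row) for row in G) / (len(G) * len(G[0]))
        for i, row in enumerate(A):
            for j, a in enumerate(row):
                heatmap[i][j] += alpha * a
    # ReLU: keep only locations that positively influence the class score
    return [[max(0.0, v) for v in row] for row in heatmap]

activations = [[[1.0, 0.5], [0.0, 2.0]],        # channel 0
               [[0.2, 1.0], [1.5, 0.0]]]        # channel 1
gradients   = [[[0.4, 0.4], [0.4, 0.4]],        # channel 0 helps the class (alpha = 0.4)
               [[-0.2, -0.2], [-0.2, -0.2]]]    # channel 1 hurts it (alpha = -0.2)
print(grad_cam(activations, gradients))
```

Upsampled to the input resolution and overlaid on the image, that heatmap is the familiar "the model looked here" visualization.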

The Bot Framework Composer is an integrated development tool for developers and multi-disciplinary teams to build bots and conversational experiences with the Microsoft Bot Framework.

In this episode of the AI Show, Seth Juarez is joined by Vishwac Sena Kannan, Program Manager for the Bot Framework, to introduce and demo the Bot Framework Composer. Visit https://aka.ms/BotFrameworkComp to get started.

Index:
[00:47] – Introduction and overview
[01:45] – Demo – Creating a new bot with Bot Framework Composer
[02:25] – Walkthrough – local bot runtime
[03:30] – Demo – triggers, actions
[05:06] – Language generation integration
[06:08] – Sample bot with Language understanding (LUIS)
[09:00] – Handling interruptions
[11:10] – Wrap up

By optimizing BERT for CPU, Microsoft has made inferencing affordable and cost-effective.

According to the published benchmark, BERT inferencing on an Azure Standard F16s_v2 CPU takes only 9ms, which translates to a 17x increase in speed over the unoptimized baseline.
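A quick back-of-the-envelope check on those two numbers (the 9ms latency and 17x speedup are from the benchmark; the baseline latency is derived, not stated in the source):

```python
# Sanity check on the published BERT-on-CPU numbers.
# Given: optimized latency and speedup. Derived: implied baseline latency.
optimized_ms = 9
speedup = 17

baseline_ms = optimized_ms * speedup   # implied unoptimized latency
single_stream_qps = 1000 / optimized_ms

print(f"implied unoptimized latency: ~{baseline_ms} ms per inference")
print(f"optimized single-stream throughput: ~{single_stream_qps:.0f} inferences/sec")
```

In other words, the unoptimized model would have taken on the order of 150ms per inference, which is well outside interactive-search latency budgets; at 9ms it comfortably fits.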

Microsoft partnered with NVIDIA to optimize BERT for the GPUs powering the Azure NV6 Virtual Machines. The optimization included rewriting and reimplementing the neural network with the TensorRT C++ APIs, built on the CUDA and cuBLAS libraries. The NV6 family of Azure VMs is powered by NVIDIA Tesla M60 GPUs. Microsoft claims that the improved Bing search platform, running the optimized model on NVIDIA GPUs, serves more than one million BERT inferences per second within Bing's latency limits.