Lex Fridman interviews Jitendra Malik, a professor at Berkeley and one of the seminal figures in computer vision, both before the deep learning revolution and after it.

He has been cited over 180,000 times and has mentored many world-class researchers in computer science. This conversation is part of the Artificial Intelligence podcast.

Content index:

  • 0:00 – Introduction
  • 3:17 – Computer vision is hard
  • 10:05 – Tesla Autopilot
  • 21:20 – Human brain vs computers
  • 23:14 – The general problem of computer vision
  • 29:09 – Images vs video in computer vision
  • 37:47 – Benchmarks in computer vision
  • 40:06 – Active learning
  • 45:34 – From pixels to semantics
  • 52:47 – Semantic segmentation
  • 57:05 – The three R’s of computer vision
  • 1:02:52 – End-to-end learning in computer vision
  • 1:04:24 – 6 lessons we can learn from children
  • 1:08:36 – Vision and language
  • 1:12:30 – Turing test
  • 1:16:17 – Open problems in computer vision
  • 1:24:49 – AGI
  • 1:35:47 – Pick the right problem

Earlier this week, I added my 61st certification to my LinkedIn profile. This relentless push towards certification was to transition from developer to data scientist.

The COVID-19 pandemic has transformed the job market and accelerated the demand for all kinds of technical skills. Many people are looking to reskill and change careers.

Here’s a report on the impact of the pandemic on jobs, skills, and certifications, based on data from Foote Partners’ latest reports: the IT Skills Demand and Pay Trends Report and the IT Skills and Certifications Volatility Index.

For example, edge computing for the Internet of Things (IoT), security control at the device level, and “hyperautomation”/robotic process automation are just some of the areas of expertise that are hot and getting hotter due to pandemic-induced changes in consumer habits, growing security threats, and industry realignments, according to David Foote, chief analyst with Foote Partners, LLC. Meanwhile, formerly hot specialties such as data privacy and carbon-reducing technology have seen slackening demand.

Change Data Capture (CDC) is a typical use case in real-time data warehousing. It tracks the change log (binlog) of a relational (OLTP) database and replays those changes promptly to an external store, such as Delta Lake or Kudu, for real-time OLAP.

To implement a robust CDC streaming pipeline, many factors must be considered: how to ensure data accuracy, how to handle schema changes in the OLTP source, and whether pipelines can be built for a variety of databases with little code. This talk shares practices for simplifying CDC pipelines with Spark Streaming SQL and Delta Lake.
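The core idea of CDC replay, stripped of any specific engine, can be sketched in plain Python: an ordered stream of binlog-style change events (insert/update/delete) is applied to a target table to reproduce the source state. The event shape and field names here are illustrative assumptions, not any real binlog format.

```python
# Minimal CDC replay sketch: apply ordered change events to a target
# table held as a dict keyed by primary key. Field names ("op", "key",
# "row") are made up for illustration.

def apply_change_log(table, events):
    """Replay ordered change events onto a key -> row table."""
    for event in events:
        op, key = event["op"], event["key"]
        if op in ("insert", "update"):
            table[key] = event["row"]    # upsert the new row image
        elif op == "delete":
            table.pop(key, None)         # tolerate already-missing keys
    return table

# Usage: replaying three events yields the final source-table state.
events = [
    {"op": "insert", "key": 1, "row": {"id": 1, "amount": 10}},
    {"op": "update", "key": 1, "row": {"id": 1, "amount": 25}},
    {"op": "insert", "key": 2, "row": {"id": 2, "amount": 5}},
]
snapshot = apply_change_log({}, events)
```

In a real pipeline this upsert/delete logic is what a Delta Lake `MERGE` expresses declaratively; the hard parts the talk mentions (exactly-once accuracy, schema evolution) sit around this core, not inside it.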

With applications ranging from classifying objects in self-driving cars to identifying blood cells in the healthcare industry to spotting defective items in manufacturing, image classification is one of the most important applications of computer vision.

How does it work? Which framework should you use?

Here’s a great tutorial.

In this article, we will learn how to build a basic image classification model in both PyTorch and TensorFlow. We will start with a brief overview of each framework, then take the benchmark MNIST handwritten-digit dataset and build an image classification model using a CNN (Convolutional Neural Network) in each.
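The building block both frameworks share is the 2-D convolution: a small kernel slides over the image and produces a feature map. As a rough sketch of what `Conv2d` in PyTorch or `Conv2D` in TensorFlow computes (they actually perform cross-correlation, shown here), the operation can be written in plain Python:

```python
# Sketch of a single-channel 2-D "valid" convolution, as computed by a
# CNN layer (cross-correlation: the kernel is not flipped).

def conv2d_valid(image, kernel):
    """Slide kernel over image; return the resulting feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Dot product of the kernel with the image patch at (i, j).
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel applied to a 4x4 image with an edge between
# columns 1 and 2 yields a 2x2 feature map of strong responses.
image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
feature_map = conv2d_valid(image, kernel)  # [[3, 3], [3, 3]]
```

Framework layers add learned kernels, multiple channels, padding, and stride on top of this, but the sliding dot product is the same.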

ANSYS is a leader in the simulation world.

ANSYS Twin Builder and Microsoft Azure Digital Twins teams came together to integrate physics-based simulations with IoT Data. In this video, Olivier and Sameer Kher (Senior Director at Ansys) discuss the benefits of this joint solution.

The ANSYS Twin Builder combines the power of physics-based simulations and analytics-driven digital twins to provide real-time data transfer, reusable components, ultrafast modeling, and other tools that enable teams to perform “what-if” analyses, and build, validate, and deploy complex systems more easily.