
This is Part 2 of a four-part series breaking up a talk Seth Juarez gave at the Toronto AI Meetup. (Watch Part 1.)

Index:

  • [00:13] Optimization (I explain calculus!!!)
  • [04:40] Gradient descent
  • [06:26] Perceptron (or linear models – we learned what these are in part 1 but I expound a bit more)
  • [07:04] Neural Networks (as an extension to linear models)
  • [09:28] Brief Review of TensorFlow
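As a quick taste of the gradient descent segment, here is a minimal sketch of the idea: repeatedly step a parameter opposite its gradient to minimize a function. The function, learning rate, and step count below are my own illustrative choices, not from the talk.

```python
# Minimal gradient descent on f(x) = (x - 3)^2.
# The function and hyperparameters are assumptions for illustration only.
def gradient_descent(lr=0.1, steps=100):
    x = 0.0                  # arbitrary starting point
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of (x - 3)^2
        x -= lr * grad       # step opposite the gradient
    return x

print(gradient_descent())    # approaches the minimum at x = 3
```

Neural networks apply the same loop, just with many parameters and gradients computed by backpropagation.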