Yannic Kilcher covers a paper in which Geoffrey Hinton describes GLOM, a computer vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders, and RNNs.

GLOM decomposes an image into a parse tree of objects and their parts. However, unlike previous systems, the parse tree is constructed dynamically and differently for each input, without changing the underlying neural network. This is done by a multi-step consensus algorithm that runs over different levels of abstraction at each location of an image simultaneously. GLOM is just an idea for now but suggests a radically new approach to AI visual scene understanding.
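To make the consensus idea concrete, below is a minimal, illustrative sketch, not the paper's implementation. It assumes each image location holds a "column" of embedding vectors, one per abstraction level, and that at every step each level moves toward the level below, the level above, and an attention-weighted average of the same level at other locations; the plain averaging and array shapes are placeholders for the learned networks GLOM would use.

```python
import numpy as np

def glom_consensus_step(columns):
    """One illustrative consensus step.

    columns: array of shape (num_locations, num_levels, dim), one embedding
    per abstraction level at each image location.
    """
    num_locations, num_levels, dim = columns.shape
    new_columns = np.copy(columns)
    for lvl in range(num_levels):
        same_level = columns[:, lvl, :]                      # (locations, dim)
        # Attention-weighted average of this level across locations (lateral).
        scores = same_level @ same_level.T / np.sqrt(dim)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        lateral = weights @ same_level
        # Bottom-up and top-down contributions from neighboring levels.
        bottom_up = columns[:, lvl - 1, :] if lvl > 0 else same_level
        top_down = columns[:, lvl + 1, :] if lvl < num_levels - 1 else same_level
        # Simple average as a stand-in for GLOM's learned update networks.
        new_columns[:, lvl, :] = (same_level + lateral + bottom_up + top_down) / 4.0
    return new_columns

# Running several steps lets embeddings at each level agree across locations,
# forming "islands" of identical vectors that play the role of a parse tree.
cols = np.random.randn(16, 5, 32)
for _ in range(10):
    cols = glom_consensus_step(cols)
```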

TensorFlow 2.0 is all about ease of use, and there has never been a better time to get started.

In this talk, learn about model-building styles for beginners and experts, including the Sequential, Functional, and Subclassing APIs.

We will share complete, end-to-end code examples in each style, covering topics from “Hello World” all the way up to advanced examples. At the end, we will point you to educational resources you can use to learn more.
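For reference, here is a brief sketch of the three model-building styles named above, written against the standard TensorFlow 2.x Keras API; the layer sizes and shapes are placeholders, not examples from the talk.

```python
import tensorflow as tf

# 1. Sequential: a linear stack of layers, simplest to read and write.
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# 2. Functional: layers called on tensors, allowing multi-input/output graphs.
inputs = tf.keras.Input(shape=(784,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
functional_model = tf.keras.Model(inputs, outputs)

# 3. Subclassing: full control of the forward pass by overriding call().
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(64, activation="relu")
        self.dense2 = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        return self.dense2(self.dense1(inputs))

subclassed_model = MyModel()
```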

Presented by: Josh Gordon

View the website → https://goo.gle/36smBfW

O’Reilly and TensorFlow teamed up to present the first TensorFlow World last week.

It brought together the growing TensorFlow community to learn from each other and explore new ideas, techniques, and approaches in machine learning and deep learning.

Presenters in the keynote:

  • Jeff Dean, Google
  • Megan Kacholia, Google
  • Frederick Reiss, IBM
  • Theodore Summe, Twitter
  • Craig Wiley, Google
  • Kemal El Moujahid, Google