PyTorch is definitely hot at the moment, especially with the recent 1.3 and 1.4 releases bringing a host of performance improvements and more developer-friendly support for mobile platforms.

Why should one choose to use PyTorch over any of the other frameworks like MXNet, Chainer, or TensorFlow?

Here are five reasons that add up to a strong case for PyTorch.

Because PyTorch operates in eager execution mode, rather than the static execution graph of traditional TensorFlow (yes, TensorFlow 2.0 does offer eager execution, but it's a touch clunky at times), it's very easy to reason about your custom PyTorch classes. You can dig into debugging with TensorBoard or standard Python techniques, all the way from print() statements to generating flame graphs from stack trace samples. This all adds up to a very friendly welcome for those coming into deep learning from other data science frameworks such as Pandas or Scikit-learn.
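As a minimal sketch of what that looks like in practice, here's a toy module (the class and layer sizes are illustrative, not from any real project) with a plain print() dropped into forward() — something eager execution lets you do without any special tooling:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Illustrative two-layer module for demonstrating eager-mode debugging."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        h = self.fc(x)
        # Eager mode: ordinary Python runs here, so you can inspect
        # intermediate tensors with print(), pdb, or any profiler.
        print("hidden layer:", h.shape, "mean:", h.mean().item())
        return torch.relu(h)

net = TinyNet()
out = net(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```

Because each line executes immediately, a breakpoint or stack trace lands exactly where you'd expect, rather than inside a compiled graph.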
