TensorFlow developers interested in Reinforcement Learning (RL) may want to take a look at Huskarl. The framework was recently introduced in a Medium blog post and is meant for easy prototyping with deep-RL algorithms.
According to its creator, software engineer Daniel Salvadori, Huskarl “abstracts away the agent-environment interaction” in a similar way “to how TensorFlow abstracts away the management of computational graphs”. Under the hood it naturally makes use of TensorFlow 2.0 and the tf.keras API. It is also implemented so that the computation of environment dynamics can be parallelised across CPU cores, which helps in scenarios that benefit from multiple concurrent sources of experience.
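To illustrate the idea behind parallelising environment dynamics across CPU cores, here is a minimal, self-contained sketch in plain Python. It does not use Huskarl's actual API; the `ToyEnv` environment, the random policy, and the `rollout` helper are all hypothetical stand-ins, with `multiprocessing.Pool` standing in for the framework's internal parallelisation machinery.

```python
import random
from multiprocessing import Pool


class ToyEnv:
    """Hypothetical stand-in environment: reward 1 if the action
    matches a hidden binary state, which then re-randomises."""

    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.state = self.rng.randint(0, 1)

    def step(self, action):
        reward = 1.0 if action == self.state else 0.0
        self.state = self.rng.randint(0, 1)
        return self.state, reward


def rollout(args):
    """Run one environment instance for n_steps under a random policy
    and return the total reward collected."""
    seed, n_steps = args
    env = ToyEnv(seed)
    total = 0.0
    for _ in range(n_steps):
        action = random.randint(0, 1)  # placeholder for an agent's policy
        _, reward = env.step(action)
        total += reward
    return total


if __name__ == "__main__":
    # Each worker process steps its own environment copy independently,
    # yielding several concurrent sources of experience.
    with Pool(4) as pool:
        returns = pool.map(rollout, [(seed, 100) for seed in range(4)])
    print(returns)
```

Because each environment instance carries its own state and random seed, the rollouts are independent and can be farmed out to separate processes without coordination, which is the property that makes this kind of parallelisation attractive for algorithms that learn from multiple streams of experience.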