Kaggle tests out different automatic machine learning libraries in this live-stream coding session.
Jabrils uses an autoencoder to generate a new Pokémon.
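The core idea behind generating new samples with an autoencoder can be sketched in a few lines: compress inputs down to a low-dimensional latent code, train a decoder to reconstruct them, then decode a fresh latent code to produce something new. The toy linear autoencoder below is purely illustrative; it does not reproduce the video's model or data, and all names and dimensions are made up for the sketch.

```python
# Toy linear autoencoder: encode 8-"pixel" inputs to a 3-D latent
# code, decode back, and train by gradient descent on squared
# reconstruction error. Illustrative only -- not the video's model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 8))              # 20 toy "images" of 8 pixels each

W_enc = rng.normal(0, 0.1, (8, 3))   # encoder: 8 -> 3 latent dims
W_dec = rng.normal(0, 0.1, (3, 8))   # decoder: 3 -> 8

lr = 0.1
for _ in range(500):
    Z = X @ W_enc                    # encode
    X_hat = Z @ W_dec                # decode
    err = X_hat - X
    # gradient descent on mean squared reconstruction error
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

loss = float(np.mean((X - (X @ W_enc) @ W_dec) ** 2))
print(f"reconstruction MSE: {loss:.4f}")

# "generate" a new sample by decoding a random latent code
new_sample = rng.normal(size=3) @ W_dec
print(new_sample.shape)  # (8,)
```

After training, any point in the 3-D latent space decodes to a plausible-looking output, which is what makes sampling new codes a crude form of generation.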
Deep learning has had enormous success on perceptual tasks but still struggles to provide a model for inference. Here’s an interesting talk about making neural networks that can reason.
To address this gap, we have been developing networks that support memory, attention, composition, and reasoning. Our MACnet and NSM designs provide a strong prior for explicitly iterative reasoning, enabling them to learn explainable, structured reasoning, as well as achieve good generalization from a modest amount of data. The Neural State Machine (NSM) design also emphasizes the use of a more symbolic form of internal computation, represented as attention over symbols, which have distributed representations. Such designs impose structural priors on the operation of networks and encourage certain kinds of modularity and generalization. We demonstrate the models’ strength, robustness, and data efficiency on the CLEVR dataset for visual reasoning (Johnson et al. 2016), VQA-CP, which emphasizes disentanglement (Agrawal et al. 2018), and our own GQA (Hudson and Manning 2019). Joint work with Drew Hudson.
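The abstract's phrase "attention over symbols, which have distributed representations" can be sketched as a softmax over similarity scores between a query vector and a table of symbol embeddings. The snippet below is a hedged illustration of that one idea; the vocabulary, dimensions, and variable names are invented here and are not the actual MACnet or NSM implementation.

```python
# Sketch of "attention over symbols": internal state is a soft
# distribution over a fixed symbol vocabulary, where each symbol has
# a distributed (vector) embedding. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
symbols = ["dog", "cat", "ball", "red", "blue"]
E = rng.normal(size=(len(symbols), 16))  # distributed symbol embeddings
query = E[2]                             # a state aligned with "ball"

scores = E @ query                       # similarity to each symbol
attn = np.exp(scores - scores.max())
attn /= attn.sum()                       # softmax: attention over symbols

# the soft state is a convex combination of symbol embeddings
state = attn @ E
print(symbols[int(attn.argmax())])
```

Keeping computation in this soft-symbolic form is what lets such models expose an interpretable distribution over concepts at every reasoning step.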
In this video Siraj Raval announces that School of AI is now accepting applications for research fellows in 2019.
From the video description:
We’ll select 10 Fellows and give them 1K USD in Google Cloud credits each, a personal advisor, and help them submit their work to relevant academic outlets like NIPS and popular journals. The deadline for submissions is May 15 2019 and I look forward to your applications! Our 10 Fellows from 2018 did some amazing work, I’ll explain what they did and give guidelines as to what we’re looking for this round. Enjoy!
Application form: https://forms.gle/dJmnNkKPvjzWWJ9L9
Here’s an interesting news article from MIT about work that could revolutionize NLP and further advance NLU (Natural Language Understanding).
Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence. In computing, learning language is […]
In this video, Siraj Raval asks a series of questions of Bryan Catanzaro, the Vice President of Applied Deep Learning Research at NVIDIA.
With data storage demands increasing every day, conventional storage will not be enough in the future. Enter DNA-based storage: with its ability to store information at the molecular level, it could revolutionize data storage in the age beyond big data. And researchers have recently come one step closer to making this technology real.
Researchers at Microsoft and the late Microsoft co-founder Paul Allen’s school of computer science at the University of Washington have built a system of liquids, tubes, syringes, and electronics around a benchtop to deliver the world’s first automated DNA storage device.
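The core encoding idea behind DNA storage is simple to sketch: with four nucleotides available, each base can carry two bits, so arbitrary binary data maps to a sequence of A/C/G/T and back. The toy codec below illustrates only that mapping; real systems (including Microsoft's) add error correction and avoid biochemically problematic sequences, none of which is modeled here.

```python
# Toy 2-bits-per-base codec: bytes <-> DNA-like strings. A simplified
# sketch of the storage idea, not Microsoft's actual encoding scheme.
BASES = "ACGT"  # indices 0-3 correspond to the four 2-bit values

def encode(data: bytes) -> str:
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):        # big-endian 2-bit chunks
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

def decode(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):       # 4 bases per byte
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

dna = encode(b"hi")
print(dna)  # -> CGGACGGC
assert decode(dna) == b"hi"
```

At two bits per base, a single gram of DNA can in principle hold on the order of hundreds of petabytes, which is why the density argument for DNA storage is so compelling.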
Researchers at IBM have drafted some new algorithms designed specifically to take advantage of quantum computers’ unique properties. The only catch is that we still need to build the computer.
While designing algorithms before the computers themselves may sound backward, this has happened before: computational models for conventional computers date back to the 1800s, when Charles Babbage and Ada Lovelace were pondering mechanical computing devices.
From the article:
“We’ve developed a blueprint with new quantum data classification algorithms and feature maps. That’s important for AI because, the larger and more diverse a data set is, the more difficult it is to separate that data out into meaningful classes for training a machine learning algorithm. Bad classification results from the machine learning process could introduce undesirable results; for example, impairing a medical device’s ability to identify cancer cells based on mammography data.”
IBM has come up with a way to use quantum computers to improve machine learning algorithms, even though we don’t have anything approaching a quantum computer yet. The tech giant developed and tested a quantum algorithm for machine learning with scientists from Oxford University and MIT, showing how quantum […]
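The article's point about feature maps has a classical analogue worth sketching: data that no linear rule can separate in its raw form often becomes separable after a nonlinear feature map. IBM's algorithms map data into quantum states instead, which this toy classical example does not attempt; it only illustrates why richer feature maps help classification.

```python
# XOR-labeled points are not linearly separable in 2-D, but adding a
# single product feature (x1*x2) makes a linear rule in the lifted
# 3-D space classify them perfectly. Classical sketch of the
# feature-map idea only -- not IBM's quantum algorithm.
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]            # XOR: no 2-D line separates these

def feature_map(x1, x2):
    return (x1, x2, x1 * x2)     # lift into 3-D with a product feature

def classify(x1, x2):
    f1, f2, f3 = feature_map(x1, x2)
    # linear rule in feature space: x1 + x2 - 2*x1*x2
    return 1 if f1 + f2 - 2 * f3 >= 0.5 else 0

preds = [classify(*p) for p in points]
print(preds)  # [0, 1, 1, 0] -- matches the XOR labels
```

The quantum proposal pushes this idea further: a quantum feature map can embed data in an exponentially large state space, where classically hard-to-separate classes may become separable.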
Two Minute Papers explores three papers about what makes for a good image generation AI using GANs.
Storytelling is at the heart of human nature, and Natural Language Processing is a field that is driving a revolution in computer-human interaction. That is what makes AI Pix2Story so fascinating. Watch and see how to teach an AI to be creative, be inspired by a picture, and take it to another level.
AI Lab Pix2Story link
Pix2Story Azure Website link