LAWs, short for Lethal Autonomous Weapons, may change the nature of warfare forever.

With the development of AI systems, drones that can autonomously find and eliminate a targeted individual are only years away, not decades.

[..]

At this point, the bottleneck is not hardware but software: how quickly new AI algorithms can be developed and implemented for specific military purposes.

Artificial intelligence (AI) can help us do things like finding a specific photo in our photos app or translating signs into another language.

What if we applied the same technology to really big problems in areas like healthcare?

Google’s Dr. Lily Peng describes her journey from medicine to technology and outlines the potential of AI in healthcare. She explains how her team trained an AI algorithm to detect diabetic eye disease in medical images, helping doctors in India prevent millions of people from going blind. Dr. Peng is a physician by training and now works with a team of doctors, scientists, and engineers at Google Health who apply AI to medical imaging to increase the availability and accuracy of care. Her team’s recent work includes building models to detect diabetic eye disease, predict cardiovascular health factors, and identify breast and lung cancer.
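To give a feel for what “training an algorithm on medical images” means, here is a deliberately tiny sketch. Dr. Peng’s team used deep convolutional networks on real retinal photographs; this toy version instead trains a logistic regression on invented 4-pixel “images” where disease shows up as brighter pixels. The dataset, pixel count, and learning rate are all assumptions made up for illustration, not anything from their actual work.

```python
import math
import random

rng = random.Random(0)

def make_image(diseased):
    """Synthetic 4-pixel grayscale 'image': diseased ones are brighter on average."""
    base = 0.7 if diseased else 0.3
    return [min(1.0, max(0.0, base + rng.gauss(0, 0.1))) for _ in range(4)]

# 50 diseased and 50 healthy toy examples, each labeled 1 or 0.
data = [(make_image(label), label) for label in ([1] * 50 + [0] * 50)]

weights = [0.0] * 4
bias = 0.0
lr = 0.5

def predict(img):
    """Probability of disease under the current model (sigmoid of a weighted sum)."""
    z = bias + sum(w * p for w, p in zip(weights, img))
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the log-loss.
for _ in range(200):
    for img, label in data:
        err = predict(img) - label
        for i in range(4):
            weights[i] -= lr * err * img[i]
        bias -= lr * err

accuracy = sum((predict(img) > 0.5) == bool(label) for img, label in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The real systems differ in scale (millions of parameters, real images, careful validation), but the loop is conceptually the same: show labeled examples, measure the error, nudge the parameters to reduce it.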

Computers just got a lot better at mimicking human language: researchers have created programs that can write long passages of coherent, original text.

Language models like GPT-2, Grover, and CTRL create text passages that seem to have been written by someone fluent in the language, but not in the truth. The AI field behind them, natural language processing (NLP), didn’t set out to create a fake-news machine. Rather, these systems are the byproduct of a line of research into massive pretrained language models: machine-learning programs that store vast statistical maps of how we use our language. So far, the technology’s creative uses seem to outnumber its malicious ones, but it’s not difficult to imagine how these text fakes could cause harm, especially as the models become widely shared and deployable by anyone with basic know-how.

Read more here: https://www.vox.com/recode/2020/3/4/21163743/ai-language-generation-fake-text-gpt2 
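The phrase “statistical maps of how we use our language” can be made concrete with a toy example. Models like GPT-2 learn billions of parameters from web-scale text; the sketch below only counts which word follows which in a made-up four-sentence corpus and samples from those counts (a word-level bigram model, chosen here for illustration and far simpler than what the article’s models actually do).

```python
import random
from collections import defaultdict

# A tiny made-up corpus; real models train on billions of words.
corpus = (
    "the model writes text . the model learns language . "
    "the text seems fluent . the language seems real ."
).split()

# The "statistical map": for each word, count which words follow it.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Sample a short passage by repeatedly picking a likely next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        options = counts.get(word)
        if not options:
            break
        words = list(options)
        weights = [options[w] for w in words]
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Scaling this idea up — longer contexts, learned representations instead of raw counts, vastly more data — is essentially what turns a curiosity like this into text that reads as fluent.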

The danger of artificial intelligence isn’t that it’s going to rebel against us, but that it’s going to do exactly what we ask it to do, says AI researcher Janelle Shane.

From the video description:

Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems — like creating new ice cream flavors or recognizing cars on the road — Shane shows why AI doesn’t yet measure up to real brains.

Dani, a game developer, recently made a game and decided to train an AI to play it.

A couple of weeks ago I made a video, “Making a Game in ONE Day (12 Hours)”, and today I’m trying to teach an A.I. to play my game!

Basically I’m gonna use Neural Networks to make the A.I. learn to play my game.

This is something I’ve always wanted to do, and I’m really happy I finally got around to doing it. Some of the biggest inspirations for this are obviously carykh, Jabrils & Codebullet!