Cruise, the self-driving subsidiary of General Motors, revealed its first vehicle to operate without a human driver, the Cruise Origin.

The vehicle, which lacks a steering wheel and pedals, is designed to be more spacious and passenger-friendly than typical self-driving cars.

Cruise says the electric vehicle will be deployed as part of a ride-hailing service, but declined to say when that might be. 

Samuel Arzt shows off a project where an AI learns to park a car in a parking lot in a 3D physics simulation.

The simulation was implemented using Unity’s ML-Agents framework (https://unity3d.com/machine-learning).

From the video description:

The AI consists of a deep Neural Network with 3 hidden layers of 128 neurons each. It is trained with the Proximal Policy Optimization (PPO) algorithm, which is a Reinforcement Learning approach.
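As a rough sketch of what that description implies, here is a minimal NumPy version of a policy network with three hidden layers of 128 neurons, plus PPO's clipped surrogate objective (the core of the PPO algorithm). The observation and action dimensions, the tanh activation, and the initialization are illustrative assumptions, not details taken from the project, which actually uses Unity's ML-Agents training pipeline:

```python
import numpy as np

def init_params(obs_dim=16, act_dim=4, hidden=128, seed=0):
    """Random weights for a network with 3 hidden layers of `hidden` units.
    obs_dim/act_dim are hypothetical placeholders for the parking task."""
    rng = np.random.default_rng(seed)
    sizes = [obs_dim, hidden, hidden, hidden, act_dim]
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def policy_forward(obs, params):
    """Forward pass: tanh on the 3 hidden layers, linear output
    (action logits or means, depending on the action space)."""
    h = obs
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b

def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    """PPO's clipped surrogate for one step:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon)
    return np.minimum(ratio * advantage, clipped * advantage)
```

The clipping is what makes PPO stable in practice: when the updated policy drifts too far from the one that collected the data, the objective stops rewarding further movement in that direction.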

Last night, I was involved in a car accident when a car in front of me stopped suddenly. I reacted as quickly as I could. I was banged up a little, but I was able to walk away from the accident, if a bit dizzy. (I had hit my head on the door frame.) Fortunately, my son fared better. The car, however, is likely totaled.

All of this got me thinking: could a self-driving car have fared better? I'm not 100% sure either way. On one hand, an AI-based system would have detected the stopped car faster and applied the brakes a fraction of a second sooner. The only question is: would that have been enough? And how would autonomous cars handle bad human drivers and their poor decisions?