Musk Never Said to Only Use Vision for Automated Driving
LIDAR without Color Vision is vastly worse than Color Vision without LIDAR
As one of the earliest people to successfully apply neural network processing, I also worked on problems in self-driving. Here are my main points:
Feature Extraction from Sense Data
In the 1980s, when engineers said they only needed ten sensory inputs per second as discrete, semi-independent feature inputs, I argued it should be 100,000 per second or more.
The human brain, indeed any animal brain, applies highly perfected intuition to millions of sensory and sensory-derived signals, not elegant math, and I was proven right.
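For a sense of scale, even one modest camera already produces sense data far beyond ten features per second. A back-of-envelope sketch (the resolution and frame rate here are illustrative assumptions, not any particular vehicle's hardware):

```python
# Rough data-rate arithmetic for a single camera stream (illustrative numbers).
width, height = 1280, 720        # assumed frame resolution
fps = 30                         # assumed frame rate
raw_values_per_second = width * height * fps
print(f"{raw_values_per_second:,} raw pixel values per second")  # ~27.6 million
```

Derived features such as edges, motion, and depth estimates multiply that further, which is why a budget of ten feature inputs per second throws away almost everything the sensors can tell you.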
At any given moment t, many of those features will be wrongly interpreted.
Redundancy in practical machine recognition and action is essential.
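A toy illustration of why redundancy matters: if each individual feature estimate is wrong some fraction of the time, combining many redundant, roughly independent estimates drives the combined error toward zero. The sketch below is a minimal simulation; the feature counts, per-feature error rate, and majority-vote rule are my own illustrative assumptions, not a description of any production pipeline.

```python
import random

def simulate(n_features: int, per_feature_error: float, trials: int = 10_000) -> float:
    """Estimate how often a majority vote over redundant, noisy feature
    detectors misjudges a binary condition (e.g. 'obstacle ahead')."""
    wrong = 0
    for _ in range(trials):
        # Each detector independently reports the true condition,
        # but is wrong with probability per_feature_error.
        votes_correct = sum(random.random() > per_feature_error
                            for _ in range(n_features))
        if votes_correct <= n_features / 2:  # the majority got it wrong
            wrong += 1
    return wrong / trials

if __name__ == "__main__":
    for n in (1, 11, 101, 1001):
        print(f"{n:>5} redundant features -> error rate ~ {simulate(n, 0.3):.4f}")
```

Real detectors are of course not fully independent, which is exactly why redundancy across different kinds of signals, not just more copies of one signal, is what makes recognition robust.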
This is how humans work. We have ears for odd sounds, motion sensors, and, most importantly, the ability to talk and thereby contextualize our environment.
Somebody in the car can scream that they cannot see the road, or hold a moderated dialogue with the car about the conditions they are perceiving.