Quote:
Originally Posted by NeilBlanchard
The autonomous cars I have seen have about 8 different sensors - that is a LOT of data. It all has to be integrated to construct an accurate 3-dimensional picture of the world.
Each of those sensors is subject to its own challenges: weather, obstructions, etc.
The overall picture then has to be interpreted - this will take a STAGGERING amount of computing power. Think of a gaming computer - it is merely rendering a largely canned 3D model. Scanning the space around a moving car is MUCH harder than a state-of-the-art game.
Making sense of all the data - and then "deciding" what is important, or what COULD be important - is a Herculean task.
Yes, which is why Tesla partnered with Nvidia to tackle the computing challenges. There is no practical limit to how much processing power and sensor data could improve autonomous driving, but there is a level that is good enough.
Humans face perception, calculation, and execution challenges similar to the ones computers face. The difference is that the car can be augmented with sensors that detect things humans cannot, such as infrared and ultrasonic. At first, autonomous features will supplement human capabilities; eventually they will surpass them. Computers were worse at chess, Go, and Jeopardy than human competitors... until they weren't. Likewise, computers will be worse drivers than humans until they are better.
https://electrek.co/2017/08/09/tesla...omous-driving/
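To make the sensor-fusion point concrete, here is a minimal sketch of one textbook approach: combining independent per-sensor detection probabilities in Bayesian log-odds space. The sensor names, reliability numbers, and function names are illustrative assumptions, not anything from Tesla's or Nvidia's actual stack.

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def fuse_detections(readings: dict[str, float], prior: float = 0.5) -> float:
    """Fuse per-sensor 'obstacle present' probabilities into one estimate.

    Assumes sensor errors are independent, so each sensor's evidence
    simply adds in log-odds space (a standard occupancy-grid update).
    """
    l = logit(prior)
    for sensor, p in readings.items():
        # Add each sensor's evidence relative to the prior.
        l += logit(p) - logit(prior)
    # Convert the accumulated log-odds back to a probability.
    return 1.0 / (1.0 + math.exp(-l))

# Hypothetical readings: the camera is unsure in fog, but radar and
# ultrasonic (modalities humans lack) both report an obstacle.
readings = {"camera": 0.55, "radar": 0.90, "ultrasonic": 0.85}
print(f"fused obstacle probability: {fuse_detections(readings):.3f}")
```

Running this prints a fused probability near 0.98: two confident non-visual sensors outvote an uncertain camera, which is exactly why extra modalities help in conditions where human (and camera) vision degrades.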