Post by Char Jackson
Post by Gene Wirchenko
Post by Char Jackson
Exactly the same thing that happens when a human driver encounters such
a thing, except that the computer will analyze the situation and make a
decision much faster than humans can.
Will the decision be correct?
You can ask that question regardless of who or what is "behind the
wheel". Here in early 2018, I think the balance tips in favor of the
computer. As the months and next couple of years go by, I expect it to
tip overwhelmingly in that direction.
The street layout in the accident scene: [image]
This is the in-car video, released by the police. The player wrapper
on the Verge seems to work, while the Twitter one did not for me.
The woman hadn't just stepped off the median.
She was out in the middle of the fucking street when hit.
This wasn't "one step off the median, clipped by a car in the left lane".
She was moving.
She has no retro-reflectors on the two bike wheels.
She has white sneakers.
The "Safety Driver" is looking down at something
in his lap at the time of collision. Playing
with a smartphone?
Sorry, but in this case, the computer loses.
After reviewing a number of research projects and company
claims on websites, this kind of case ("difficult conditions")
is handled by correlation. Multiple sensors are combined under
noisy conditions, and a "classifier" (which normally works pretty
well in demos) draws a box around "threats".
In this case, the fact that the car doesn't react at all suggests
a portion of the self-driving system was "down". A Doppler radar
can't "see" stationary objects, but she was moving. The
radar should have seen an object at 3 MPH, which is not 0 MPH
like the scenery. She was moving, but she wasn't a track star.
There is delta_V between her and the scenery.
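A minimal sketch of that delta_V argument, assuming the radar reports each return as (range, radial speed relative to the ground). The threshold, the example returns, and the 3 MPH figure come from the reasoning above; none of this is Uber's actual code.

```python
# Separate slow-moving targets from stationary clutter by radial speed.
# Hypothetical illustration only.

MPH_PER_MPS = 2.23694  # metres/second to miles/hour

def moving_targets(returns, min_speed_mph=1.0):
    """Keep radar returns whose ground-relative radial speed exceeds
    the clutter threshold. Each return is (range_m, radial_speed_mps)."""
    return [r for r in returns
            if abs(r[1]) * MPH_PER_MPS >= min_speed_mph]

# Scenery sits at ~0 MPH; a pedestrian crossing at ~3 MPH clears
# a 1 MPH threshold easily.
returns = [(40.0, 0.02),   # lamp post: essentially 0 MPH
           (35.0, 1.34),   # pedestrian: 1.34 m/s, about 3 MPH
           (60.0, 0.01)]   # parked car
print(moving_targets(returns))  # only the pedestrian survives
```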
The Lidars vary. In a student Electrical Engineering class project,
they used a cheap Lidar with only a very few data points per scan.
Correlating that Lidar with a camera in daylight picked
out a number of humans correctly (standing next to cars, on the
side of the street). But it also drew a box around the side of a couple
of cars where no human was standing. That sample system (not intended
for commercial usage) had false positives.
The Lidar on these cars is 64 lasers (in infrared), collecting
a million data points per second. That scene would have been
lit up like a Christmas tree. But Lidar has limited range.
(They're allowed to use more optical power if the scanning lasers
run at 1550 nm, which is more "eye safe" for the public on the street.)
For a non-reflective target (like the victim), the range is on
the order of 50 meters (about 165 feet). Cars can be detected from a longer
distance. The scanning assembly doesn't rotate that fast, so there's
potential response latency.
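Back-of-envelope arithmetic for that latency, assuming a rotation rate of 10 Hz (typical for 64-laser spinning units; the actual rate on the Uber vehicle isn't stated here):

```python
# Worst case, an object appears just behind the beam and waits a full
# revolution before the Lidar revisits that bearing. Assumed figures.

rotation_hz = 10.0
worst_case_scan_delay_s = 1.0 / rotation_hz

car_speed_mph = 45.0
car_speed_mps = car_speed_mph / 2.23694

print(f"worst-case scan delay: {worst_case_scan_delay_s * 1000:.0f} ms")
print(f"distance covered in that delay: "
      f"{car_speed_mps * worst_case_scan_delay_s:.1f} m")
```

At 45 MPH, even a full-revolution miss only costs about two meters of travel, so scan latency alone can't explain a total non-reaction.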
There's absolutely no sign in that video that the Lidar (by
itself) saw anything. If the Lidar had info, and the vehicle camera
had picked her up on visual, the car would have swerved or braked
well before getting to her. It might still have hit her, but
not at 45 MPH.
You would combine the Lidar "cloud of dots" ranging with a second
sensor. In the video, the recording camera (which may not be the
camera used for driving) shows there wouldn't be a lot of visual.
There should be good detection at 50 meters (about 165 feet) for the
Lidar. The radar should have picked this up (as she was moving).
The classifier used in the Electrical Engineering student project
was able to pick up stationary people standing next to cars (so the
car functions as "interference" for the test).
I can only conclude from the info so far, that the Uber car tech
failed to meet objectives. And in clear dry (night time)
conditions. No raging rain storm. No snow storm. No forest
fire smoke. No fog. Just a clear night. Um, yikes!
If there was a tech failure, you'd think there would be
indicators in the cabin flashing or speaking, indicating
a portion of the system was non-functional. For a computer
crash, you could use watchdogs or majority voting. I have no idea
how redundancy is handled in these self-driving cars. Do they
have any?
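For reference, the two schemes mentioned above look roughly like this. Both are textbook techniques; whether Uber's cars run anything like them is exactly the open question.

```python
# Watchdog + 2-out-of-3 majority voting, sketched for a perception
# stack. Illustrative only; not any real vehicle's safety code.

import time

class Watchdog:
    """Declares a subsystem dead if it stops 'petting' the watchdog."""
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_pet = time.monotonic()

    def pet(self):
        self.last_pet = time.monotonic()

    def alive(self):
        return (time.monotonic() - self.last_pet) < self.timeout_s

def majority_vote(readings):
    """A threat is real if a majority of redundant channels report it."""
    return sum(bool(r) for r in readings) > len(readings) / 2

# Radar and Lidar both flag the pedestrian; the camera (low light)
# misses. Majority voting still says brake.
print(majority_vote([True, True, False]))  # True
```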
This is a Tesla Model S threat test carried out by a car owner.
The system worked. But you can tell from the test results that
there is a certain degree of separation between camera-detected
events and radar-detected events. The system doesn't seem to
be combining all the sensors in a classifier sense before acting.
It might be using a slightly simpler method. Some threats generate
a notification; for others, the car just seems to come to a stop.
Or the car keeps a distance from the human until they clear off
the road surface. Because the road is narrow, the car is probably
not allowed to cross the center line of the road and drive around.
The Model S is not nearly as well equipped as the Uber. And
yet Musk thinks some day it will achieve the higher Level
ratings of the other cars. "It's just a software update."