Is there a story here?
"Man in Van hits car at junction."
A Lexus fitted with Google's self-driving car tech was hit by a non-autonomous van that is said to have run a red light at a junction, according to local reports. The accident happened at 1.30pm on Saturday at the junction of El Camino Real and Calderon Avenue in Mountain View, just round the corner from Google's HQ. A Google …
Would a competent human in the Lexus have seen the van jumping the red and been able to take action to avoid the collision?
I've lost count of the number of times I've looked at other vehicles and thought 'he's not going to stop', or something similar, and acted accordingly to prevent an accident.
You can't rely on humans to obey the rules 100% of the time, if only because we get distracted. Good luck trying to make autonomous cars deal with that. I think we're a few years away from that...
Would a competent human in the Lexus have seen the van jumping the red and been able to take action to avoid the collision?
Perhaps, although my own experience suggests not. I was recently involved in a similar type of collision - drove through the junction and noticed too late someone flying through the red light.
The other car hit me square on the driver-side door and knocked me unconscious before I had a chance to even attempt evasive action. The damage to the car I was driving was a bit worse than that pictured (other chap was going faster than the allowed 30mph) but close enough that I imagine the circumstances were also relatively similar.
Having said all of that, I can't really claim to be a competent human.
So you think he should operate the van from the passenger seat?
But seriously, Neanderthals weren't primitive or stupid. Or even that ugly. No idea how their driving skills would have been, though.
Would a competent human in the Lexus
You started your statement with an oxymoron. So, err..., what was your point?
OK, jokes aside, we have done it quite a few times. It is normal for any experienced driver to go to yellow alert just seeing specific vehicles/models approaching. These differ depending on geography.
For example, if you are in Eastern Europe and you see any of BMW, Audi or Mercedes, you pretty much expect an imbecile who will overtake on a blind bend, isn't wearing a seatbelt, and is yapping on the phone while doing all of that.
If you move from Eastern Europe to, let's say, Kent, you can replace the Mercedes in the above list with a Lexus. Seatbelts are now present, but the "competent driver" is still yapping on the phone without a handsfree.
So the moment I see any one of these (with appropriate adjustments of the list for the location) I prepare myself and ensure that I have room to manoeuvre and avoid an accident.
"Every sane driver would KNOW that van wasn't going to stop at the red light."
Whilst I expect there's a chance of someone running a red for a couple of seconds(*) into the green, the fact that the green had been there for 5-8 seconds means most people would have been T-boned by bozo the van driver. I'm betting the GooWagon was second or third back from the line when the light turned green, and virtually no one back from the leading vehicle in the train expects an opposing vehicle to enter an intersection like this once traffic's flowing.
(*) Most US intersections have the green come on as soon as the opposing side goes red. In most other countries there's a pause between the red and green - in the UK and some other countries that pause has the "red+orange" phase in it, which gives an extra second for red-runners to clear the intersection.
"Most US intersections have the green come on as soon as the opposing side goes red."
Actually, most intersections insert a second or two of all-red before changing the other side to green. Those that change the instant the other side goes red are rare, and probably tend to have more T-bones: cars with bad brakes arrive at the intersection in the judgment-call zone (right as the lines turn solid, right as the light turns yellow), decide it's better to rush through than to try to stop, and probably end up over the line and nailed for running the red anyway.
As ane fule kno, hands free doesn't help.
People yapping on the phone don't prang the car because they've got one hand off the steering wheel. They prang the car because their mind isn't on the road. Hands-free phones don't fix that.
Every study on the subject shows this. But of course you can't sell so much electronic gear with this message, so good luck getting it recognised in law.
"For example, if you are in Eastern Europe and you see any of BMW, Audi or Mercedes you pretty much expect an imbecile which will overtake on a blind bend, does not have any seatbelts on and is yapping on the phone while doing all of that."
I think that is a global truth, not just limited to Eastern Europe; even in the US that rings completely true.
Usually find Lexus drivers are pretty poor too.
"Would a competent human in the Lexus have seen the van jumping the red and been able to take action to avoid the collision?"
Possibly, but so would the AI. They're supposed to be capable of taking emergency evasive action if needed.
Coping with human stupidity is one of the things that makes automated vehicle design hard, as in "There are rules, but other cars don't always obey them" hard
So if all cars were autonomous, this accident would not happen.
Unless we fit the cars with side radar to look for approaching collisions...but what would be the best course of action? The person jumping the red light probably wasn't paying attention anyway....
The "dodgem" feature might be fun in a robot derby though.... like a full scale robot wars!
P.
"So if all cars were autonomous, this accident would not happen."
At first, this sounds like a reasonable assumption.
But let's follow the assumption one more step...
Once all cars are autonomous, and somehow therefore "accidents would not happen", then there's no need for speed limits. You can be carried to work, on city streets, at 450 km/h. Why not?
Are you still sure about the assumption that computers and software are suddenly infallible?
Isn't it odd that this amazing assumed milestone in the world of IT, which up to this point has been quite error prone, is strangely associated only with self-driving cars?
"So if all cars were autonomous, this accident would not happen."
At first, this sounds like a reasonable assumption.
But let's follow the assumption one more step...
Why take that step?
The requirement isn't that they should be perfect, but that they should be better than humanity. Frankly that is a scarily low threshold to beat.
You also have to account for other road users - and neither pedestrians, cyclists nor equestrians would deal well with cars travelling at silly speeds.
In fact the speed limit could be withdrawn for autonomous vehicles - because they could be programmed to stop in the distance they can see to be clear, even accounting for entrances/visual obstructions. This would be a naturally self-limiting speed (most of the time well under the current speed limit, but significantly over it at 3am).
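The "stop in the distance you can see to be clear" rule reduces to simple kinematics. A rough sketch of the idea - the deceleration and reaction-time figures here are my own illustrative numbers, not anything from an actual vehicle controller:

```python
import math

def max_safe_speed(sight_distance_m, decel_mps2=6.0, reaction_s=0.2):
    """Highest speed (m/s) from which the car can stop within the
    visible clear distance, allowing for a sensing/processing delay.
    Solves v*t + v^2/(2a) = d for v, taking the positive root."""
    a, t = decel_mps2, reaction_s
    # Quadratic in v: v^2/(2a) + v*t - d = 0
    return a * (-t + math.sqrt(t * t + 2.0 * sight_distance_m / a))

for d in (10.0, 50.0):
    v = max_safe_speed(d)
    print(f"{d:5.1f} m visible -> {v:4.1f} m/s ({v * 3.6:5.1f} km/h)")
```

With these numbers, 50 m of visibly clear road allows roughly 23 m/s (about 84 km/h), while a 10 m sightline near a blind entrance caps the car at roughly 10 m/s (about 35 km/h) - which is the self-limiting behaviour described above.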
My worry is that the autonomous car will lack driving "sense".
You see someone waiting at a crossing - the lights might change soon.
A dog running along the pavement - might jump into the road.
A child with a ball - obvious trouble.
An uncertain cyclist - watch out for wobbles.
A white van man - you know they are on the phone, you know they will be distracted.
Suddenly hit by adverse light conditions so that you can't tell whether it's a white sky or the side of a lorry - slow down!
It's not just a question of driving correctly and monitoring other vehicles, it's about being aware of what's going on around you and anticipating problems.
"A.I. is hard."
The word 'hard' is a hilarious understatement. That's the subtle joke. That's why it's so funny.
My addendum is "...Especially in the real world."
I think it adds significant meaning. It's a subtle slap on the face of the AI industry, reminding them that their few hard-won successes (e.g. Watson) would not know how to open a door or cross a street.
Watson is a reclusive idiot. Not even allowed to 'drive slow on the driveway every Saturday'.
"A.I. is hard. Especially in the real world."
It's worth chiseling this into the lintel over the entrances of these R&D labs.
Remind the boys from Volvo not to stand in front.
> My worry is that the autonomous car will lack driving "sense".
Most new drivers lack "driving sense" and many never acquire it.
The thing about AI drivers is that once it's programmed in, ALL models using that algorithm have it.
An AI driver is looking in all directions all the time and isn't distracted by the legs on that girl across the road (A young driver spent too long looking at my wife's legs one day and hit a bus stop sign. This really does happen)
"The requirement isn't that they should be perfect, but that they should be better than humanity."
Except that's a trendy-sounding piece of utter bullshit. Autonomous cars will be simultaneously better AND worse than human drivers; assuming imperfect software, faults will not wait for that uber-specific set of circumstances that only Superman would have been able to avoid - no, when a fault strikes the car will just derp in some way, quite possibly in a situation that any idiot human could have easily avoided. And that's not going to change any time soon as long as any and all software we write is basically made up of bugs held together by bugs, as it is today.
John Robinson asked "Why take that step?"
Pssst! It's a 'reductio ad absurdum' argument.
It's intended to reveal that the incoming statement (phil dude: "So if all cars were autonomous, this accident would not happen.", perhaps even he was being ironic?) is not a reasonable claim.
If self-driving cars are perfect (= accidents "would not happen"), then it follows that speed limits can come off.
This small step reveals more clearly that the initial statement is nonsense.
That's why we take the small extra step, into the more-obviously absurd.
The other follow-on point is actually more interesting. Why is it that only the proponents of self-driving cars are under this delusion that such complex computer software will suddenly become perfect and bug free "in about five years" (quoting everyone)? It's an indefensibly naive assumption.
And apparently common testing ground for the little self-drivers.
http://www.theregister.co.uk/2016/02/29/alphabet_av_backs_into_bendybus_in_california/
Another accident Google's car had on El Camino Real. Of course, neither accident seems particularly attributable to the self-driving vehicle. And, as always, if they learn from this accident, it means every car will benefit from that "knowledge."
Oh, I thought you were describing Google's next innovation.
In a crash which their car's automation determines is the other party's fault, the Google Crash Spike [tm] deploys towards the incoming car's windscreen. This impales the offending driver, thus making driving safer for everyone, by both removing them from the roads, and the genepool.
A Google statement given to ABC 7 News said: “Human error plays a role in 94 percent of these crashes, which is why we're developing fully self-driving technology to make our roads safer.”
".. Besides, if we screw up, we own the search engine most people use. The report will never be seen again."
Any half competent driver - especially those who've experienced this or had near misses - will ALWAYS look left and right again, green light or not. Some people treat traffic lights as merely advisory, and it's going to be fun trying to accommodate that in a driverless car.
Unless, of course, they build these things from the perspective they can get non-automated driving banned, and I think we're at least a decade away from that - if ever.
"Any half competent driver - especially those who've experienced this or had near misses - will ALWAYS look left and right again, green light or not. Some people treat traffic lights as merely advisory, and it's going to be fun trying to accommodate that in a driverless car."
Let's say some vehicles in front of you (maybe 10 or more) go through the green light. Do you ALWAYS check left and right even when it appears to be safe? I know I do most of the time (97% of the time), but occasionally I don't check when from a distance I can see that traffic was already stopped.
The Google car automatically contacted all required services in order of priority. First was a no-win-no-fee lawyer who takes out many online adverts. Next came an auto repair place that pays Google for a priority search listing. Then a funeral home. Ambulance services, having no commercial value to Google was eventually sent a pop-up ad for medical supplies with a short message appended detailing the need to collect an accident victim.
"Human error plays a role in 94 percent of these crashes"
Oh come ON - no-one's rolled out the famous line from 2001??
Dave: How would you account for this discrepancy between you and the twin 9000?
HAL: Well, I don’t think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
Frank: Listen HAL. There has never been any instance at all of a computer error occurring in the 9000 series, has there?
HAL: None whatsoever, Frank. The 9000 series has a perfect operational record.
Frank: Well of course I know all the wonderful achievements of the 9000 series, but, uh, are you certain there has never been any case of even the most insignificant computer error?
HAL: None whatsoever, Frank. Quite honestly, I wouldn’t worry myself about that.
Is one of those Lexus models that can drive a bit by itself. She has had THREE accidents this year (some not her fault). So, if Google wants a sucker participant to further improve car driving (who will sound the horn at any opportunity) I can easily volunteer her. I just got the 6-month insurance bill at over $1200. Bummer!
With the present state of self-driving it'll be the equivalent of getting her a Ford Pinto with Firestone tyres.
Hang on, I thought that autonomous vehicles had lidar and stuff monitoring the environment all around and calculating velocity and direction? The Googlexus apparently braked and was hit square in the right-hand side doors (front wing undamaged), so at what point did it realise it was going to be hit? Why did it brake rather than, say, swerve left whilst accelerating (which would minimise impact speed)? Why did its reaction result in the passenger compartment taking the impact? I thought the whole point of autonomous vehicles was their awesome situational awareness and lightning reflexes. The Googlexus had time to respond and managed to get the worst-case collision.
I read in another report a quote from Google that said: "The Google car hit the brakes automatically on seeing the other car crossing the red light, followed by the human behind the wheel doing the same, but it wasn't enough to prevent the collision."
Given the car was hit broadside and the braking occurred prior to impact, it does make me wonder if not braking, or even accelerating, may have prevented the impact. Instead, the braking seems to have left the vehicle in the right (or wrong) place to get struck, instead of being, say, 2-3 metres further ahead with the offending vehicle passing behind.
I suspect a switched-on driver might have made such a 'counter-intuitive' move. Accelerating (maybe beyond the speed limit) as a collision-avoidance tactic isn't, I'm guessing, in the car in question's repertoire.
"I suspect a switched-on driver might have made such a 'counter-intuitive' move. Accelerating (maybe beyond the speed limit) as a collision-avoidance tactic isn't, I'm guessing, in the car in question's repertoire."
Speeding up may not have been an option. Suppose there was another car ahead of the Google car, meaning it was blocked from dashing forward?
It's an intersection, one of the few places where cars naturally tend to bunch up because they're speeding up or slowing down. Besides, to avoid the crossing vehicle would probably require more than a car length of acceleration room, and there's little hope of avoiding the accident if it (or any car, for that matter) was "boxed in".
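Whether braking or accelerating clears the crossing vehicle's path comes down to a simple geometry check: where will the car's occupied length be when the other vehicle sweeps across the lane? A toy sketch of that reasoning - every distance, speed and acceleration here is invented for illustration, and this is nothing like Google's actual planner:

```python
def distance_travelled(v0, a, t):
    """Distance covered after t seconds from initial speed v0 (m/s)
    under constant acceleration a (m/s^2). Speed is clamped at zero
    so hard braking doesn't make the car roll backwards."""
    if a < 0 and t > -v0 / a:   # the car has already come to a stop
        t = -v0 / a
    return v0 * t + 0.5 * a * t * t

def struck(gap_m, v0, a, t_cross, car_len=4.5, strike_width=2.0):
    """True if any part of the car (length car_len) still occupies the
    crossing vehicle's path - a strip starting gap_m ahead of the car's
    front bumper, strike_width deep - at the moment t_cross it arrives."""
    front = distance_travelled(v0, a, t_cross)
    rear = front - car_len
    return rear < gap_m + strike_width and front > gap_m

# Car doing 10 m/s; crossing vehicle sweeps a strip 6 m ahead in 1.2 s.
print("braking hit:", struck(6.0, 10.0, -6.0, 1.2))  # True: still in the strip
print("accel   hit:", struck(6.0, 10.0, 2.0, 1.2))   # False: fully past it
```

With these made-up numbers, hard braking leaves the car sitting in the strike zone while accelerating carries it clear - which is exactly the "counter-intuitive move" point, though as noted above it only works if the car isn't boxed in ahead.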
I was a few metres away at the time - it was quite a crash. The junction is on a road that leads directly off the freeway, and I'm told it's quite common for cars to speed across that junction at near-freeway speeds. So I'm not surprised that an accident happened there; maybe if the G car had stepped off the lights smartly, the van might have gone behind it.
The human system works because we correct each others' errors. I frequently adjust for other drivers' moves that are dangerous mistakes, and I know others have done the same for me - obviously more times than I can know.
I wonder how effectively the robot does this. Will we need to go to robot-only roads to reach a real safety improvement?