How many billions of dollars are being spent chasing this?
Wouldn't it be much more efficient to simply teach people how to drive properly in the first place?
Self-driving cars won’t learn to drive well if they only copy human behaviour, according to Waymo. Sure, people behind the wheel are more susceptible to things like road rage or the urge to run red lights but they’re pretty good drivers overall. When researchers from Waymo, the self-driving arm under Google’s parent company …
"In shock news, AI doesn't adequately map a solution space without being given sufficient training data representative of that solution space."
And this is why I'm not particularly worried about AI taking over anytime soon. A driver might not be trained in every situation but can generalise and extrapolate (ok, some not so much). A neural net only knows what it's been taught and can only extrapolate to a small extent. A better example is image recognition - a net has to be shown thousands, perhaps millions of pictures of dogs at all different angles before it can recognise a dog with reasonable accuracy. A baby only has to be shown a dog once or twice and it knows "dog". Until ANNs can do this, they're little more than useful but dumb statistical recognisers.
millions of pictures of dogs at all different angles before it can recognise a dog with reasonable accuracy. A baby only has to be shown a dog once or twice and it knows "dog"
Unfortunately I only seem to get quizzed about buses, store fronts, traffic lights, "crosswalks" and fire hydrants whenever I have to prove I'm not a robot. I assume they'll get around to "people", "dogs" and "babies" at some stage - until then, stay off the streets!
A baby only has to be shown a dog once or twice and it knows "dog".
At what age? It takes months for babies to be able to do anything non-instinctive. There do seem to be some hardwired behaviours, such as identifying and watching faces, but cognitive processing takes years. Oh, and there are lots of examples of how easy it is to fool human cognitive processing precisely because it depends upon some of the same shortcuts we see in some machine learning.
> At what age?
About 2.
My 2 year old saw a deer for the first time ever, while we were all looking the other way, and immediately said "goat!"
Which, considering she'd only ever seen goats once, about 3 months before, was a pretty good extrapolation, and one that no computer model I'm aware of could currently match.
So I guess "baby" is an exaggeration, but "infant" or "toddler" would be more accurate.
I just typed 'sheep' into my google photos app. I have never asked it to search my photos for that before. Not only did it find pictures of sheep, some were in the far background, others were cartoon characters on a mug.
Pretty good for a device that has never been asked about one before.
Ahh... but they will have machine learning so will know what a sheep looks like from millions of captioned photos!
Yes, exactly - the good thing about the rise of the machines is that they can instantly learn from each other and gain new knowledge at the same time. They are simultaneously learning from thousands or millions of hours of experience every day. Every person has to re-learn everything from birth. Every person has to go through a driving test, get instructed, read the highway code and learn from experience, getting very different end results in the process. When they go abroad they need to learn new rules (and driving styles) as they go.
A machine can do it instantly, new road regulations could (in theory) come into force with a few days notice - if it was a simple rule, rather than relying on a big publicity campaign.
Assuming limited failures, once learnt they will never be influenced by tiredness, music, a bad day, stress, etc.
While all this "cognitive processing takes years" sounds good in theory, you have to remember, your child is learning from day 1.
First day they are learning how to see. Like, literally what is up, down, left, right of the visual feed, and how to map it to movements of the eyes and/or rest of the body. It may mean that other learning and processing appears to kick in later, but in reality it's all being built on and processed from the beginning.
It is just that a child only gets the linguistic ability and the reward/risk feedback later on to actually do something about most of the more complex stuff. The training data for a self-driving car - road signs, road markings - is child's play as a data set compared to what a toddler or newborn baby learns.
"It takes months for babies to be able to do anything non-instinctive."
Don't think that nothing's happening during that time. For one thing it's correlating what it can see with what it can touch and coming to understand the concept of solid objects. At that point it's achieved something that AI doesn't do. It might be one reason why the AI crashes into things as reported in TFA; it doesn't know that the car in front is solid because it doesn't understand solid (or anything else for that matter).
'shock news' heh.
from article: "Neural networks are notoriously data hungry; it takes them millions of demonstrations in order to learn a specific task."
Well, in theory, 'once learned' the concepts can be copied. But I suspect that using raw neural network learning is grossly inefficient.
Some things are intuitively obvious, like staying in the lane, stopping at a stop sign, and so on. Being able to recognize what a "lane" is and what a "stop sign" is should be solvable as separate problems.
But of course, there do not appear to be enough details as to how they're really going about this.
I see this, instead, as an opportunity to just "hard code" some basic rules in there, to avoid having to run a million simulations that come up with the same "conclusion" in the AI [and it'll probably RUN faster on the hardware]. So "Nice Try" to the AI people, who are probably being like the proverbial hammer seeing everything as a nail...
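To make the "hard code the basics" idea concrete, here's a minimal sketch (all names and thresholds invented for illustration) of a hybrid controller where a learned perception stage feeds a few hand-written, non-negotiable rules, with everything else falling through to the learned policy:

```python
# Hypothetical hybrid controller: a neural network handles perception,
# but a handful of basic rules are hard-coded rather than re-learned
# from a million simulations.

def decide(perception):
    """perception: dict produced by a (hypothetical) learned vision stage."""
    # Hard-coded rules are checked first.
    if perception.get("stop_sign_distance_m", float("inf")) < 20:
        return "brake"
    if not perception.get("in_lane", True):
        return "steer_to_lane_centre"
    # Everything else defers to whatever the learned policy suggests.
    return perception.get("learned_action", "cruise")

print(decide({"stop_sign_distance_m": 12}))            # brake
print(decide({"in_lane": False}))                      # steer_to_lane_centre
print(decide({"learned_action": "change_lane_left"}))  # change_lane_left
```

The point of the split is exactly the one made above: recognising a stop sign is a learning problem, but what to do at one needn't be.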
It's arguable that for at least the last 70 years most development around cars has been about improving safety and avoiding the problems caused by the meatware drivers. The costs, both financial and in lives, have been staggering.
While I don't think that autonomous vehicles will be suitable for everything, I do think that they will learn faster from their mistakes than each new generation of meatware.
I'd invite them to come up and test their shiny neural nets here in Canada, in winter. Come on, guys, you want to replace human drivers? Show us your programming skills! Oh, and while you're at it, consider developing an algorithm for snow shovelling and windshield ice scraping, the most popular winter sports around here.
Thinking about winter weather...
Being blown off an icy road by strong crosswinds, especially for high-profile vehicles, might be a nice "anomaly" to add to their list.
Then there are 'hydroplane' conditions when raining, which might require you to NOT make any sudden adjustments, even if you're outside of a lane. Or let's say you end up spinning anyway and need to recover from it.
It appears to me they're still working on 'fair weather' problems like a child running in front of the car, or someone drifting into your lane.
What would a self-driving car do when the traffic light is red and a police officer makes you a quick sign to go ahead overriding the traffic light ? Or the other way around, telling you to hold while the traffic light turns green ?
How would a self-driving car guess if a pedestrian has the intention to cross the street while he/she is still on the sidewalk ? What if the pedestrian on the sidewalk is in reality waiting for the bus ?
Just asking.
I got a telling off from the police for running a red (at 5MPH, in the middle of nowhere with no other traffic about, and I was already stationary) because there was a police car sat behind me with blues and twos roaring. I said to him something along the lines of "I thought you had something important to be getting to so I moved out of the way". To which the copper got red faced and frothing before jumping back in his car and speeding off.
It was an odd encounter
"they'd just decide among themselves, taking into account area policies and emergency requirements, who gets to cross the junction next, and what, if anything, other vehicles need to do in order to make that happen."
Ahh, I can guess what the BMW and Audi algorithm will be for that one!
"In the UK, at least from a legal perspective, the red light overrules the desire of the emergency services to get past you. There have been cases of people getting nicked for running a red to allow this."
That's because in this case UK law is unfortunately an ass. Getting out of the way of an ambulance could literally mean the difference between life and death for a patient.
It's not about the emergency services. Around here the law says if there's simultaneously a working traffic light and a traffic agent actively directing traffic, the latter wins no matter what the lights say (and this happens all the time, whenever cops have nothing better to do I suppose). So yeah, it's quite on point asking whether an AI driver would recognize the often relatively subtle gestures those guys use to signal "your turn, get moving..."
"Just asking."
Self-driving cars are just the next money-making opportunity for the Silicon Valley sociopaths, both in selling the tech and in running services with wages saved due to no drivers. They don't care if it works 100%, they just care if they can make money out of it. Almost no one in the general public is asking for this tech, and it's almost certainly a long way from being ready, but that won't stop the sociopaths from persuading governments to license it and people to buy it.
It doesn't need to work 100%
As soon as it's at the same level as your average human, it becomes safer for 50%* of drivers to have it drive them.
We also don't have any data for the number of accidents avoided by self driving cars at this point due to the things that self driving cars are much better than humans at which, generally speaking, is the majority of situations. The point of this article is that the situations you need a human for are rare enough that we just don't have sufficient data - something telling in itself.
*50% defined by ability, not by how good people think they are
"As soon as it's at the same level as your average human, it becomes safer for 50%* of drivers to have it drive them."
So by "average" you mean median?
Your population from which you're taking your average includes a lot of young, inexperienced drivers. They pull the average down. With experience they'll get beyond average. What you're saying is that an autonomous car is good enough if it's at the same level as a driver with some experience but less good than a driver with a few years of safe driving behind them. No thanks.
Your population from which you're taking your average includes a lot of young, inexperienced drivers. They pull the average down. With experience they'll get beyond average.
But they'll be replaced by equally inexperienced drivers, so the average is more or less constant. As for getting better… I think that is dependent upon the routine: we get skilled at the journeys we most regularly make.
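The mean-versus-median quibble above matters more than it looks. A toy illustration (the ability numbers are entirely invented): if a tail of inexperienced drivers skews the distribution, then far fewer than 50% of drivers sit below the *mean*, so "as good as the average driver" is a weaker bar than "better than half of drivers":

```python
# Toy skewed "driver ability" scores: a small tail of poor/inexperienced
# drivers pulls the mean well below the median.
from statistics import mean, median

abilities = [30, 35, 40, 70, 75, 80, 82, 85, 88, 90]

m = mean(abilities)      # 67.5
med = median(abilities)  # 77.5
share_below_mean = sum(a < m for a in abilities) / len(abilities)  # 0.3

print(m, med, share_below_mean)
```

Here a car matching the mean would only out-drive 30% of the population, not 50% - which is presumably why the footnote defines the cut-off by ability.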
I was always told that statistically you're more likely to have an accident closer to home or somewhere you are familiar with. Certainly works for the 2 accidents I've had and most of the near misses in 40-odd years driving in the UK and Europe. Definitely works if you've been a long way from home and drive tired. Familiarity breeds carelessness. PP
What would a self-driving car do when the traffic light is red and a police officer makes you a quick sign to go ahead overriding the traffic light ?
I just saw this not happening and nearly leading to an accident as a result. In many countries the emergency services are allowed to run a red light, but they do not have the right of way when doing so. More importantly, the new vehicles do not use such rule-based approaches but can be trained fairly easily using examples. I think the point of the research is that training for everything becomes exponentially more difficult, so different approaches are required.
There is no reason why a self-driving car can't understand the hand signals of a police officer, and then it's a simple case of giving hand signals priority over other rules like red/green lights. Waymo reported a while back that its cars can interpret the hand signals of cyclists better than humans can. Spotting patterns better than humans is generally something they should do well.
If it really was a problem then police could easily adapt to using something to signal to self driving cars - a fluorescent or light up band on their arm like used already in some countries.
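The "simple case of priority" mentioned above really can be that simple once the signals are recognised. A sketch (precedence order and names invented for illustration, loosely following the convention that a directing officer overrides lights):

```python
# Hypothetical signal-priority resolution: when instruction sources
# conflict, the highest-precedence source wins outright.

PRECEDENCE = [
    "officer_hand_signal",  # a person directing traffic beats everything
    "temporary_signal",     # e.g. roadworks lights
    "traffic_light",
    "road_markings",
]

def resolve(signals):
    """signals: dict mapping source -> instruction, e.g. {'traffic_light': 'stop'}."""
    for source in PRECEDENCE:
        if source in signals:
            return signals[source]
    return "proceed_with_caution"

# Red light, but an officer waves you through: the officer wins.
print(resolve({"traffic_light": "stop", "officer_hand_signal": "go"}))  # go
```

The hard part, as the thread notes, is the perception (recognising an often-subtle gesture as an instruction at all), not the arbitration.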
Also, humans are not very good at spotting pedestrians running out in front of vehicles; self-driving cars are much better. They have full attention at all times when working, 360-degree visibility, and can spot and track movement and intent much quicker than human reflexes allow. A human doesn't know whether someone waiting at the side of the road is going to suddenly jump into the road or not, but the same cues a human uses a machine could also use, just a lot quicker and all of the time.
I think there's much more difficult problems for self driving cars to solve than those, however.
Am I the only one thinking this self-driving malarkey has too many variables? The only way it's going to work, in my humble opinion, is with some sort of track in or at the side of the road, with meatbags responsible if they get in the way. Add to that self-contained, not connected to the "net", every car run by computer, and you might have a winner. I think we have more chance of teleportation being invented first.
I'll just paste here (part of) a comment I posted a few days ago on a similar topic :
The only way to make safe driverless vehicles would be to put them on special lanes, perhaps specifically designed to avoid sharp angles; possibly with a system to keep them on trajectory at all times, like, some manner of metal railing? We could even mitigate the risk of collisions by having a bunch of them physically attached to each other. Oh, and then we could cut costs by devoting the propulsion function to a specialized unit. I think I'm on to something there, I'd better patent the idea before the Internet steals it!
> Self-driving cars won’t learn to drive well if they only copy human behaviour, according to Waymo
I hope it didn't take a PhD for someone there to figure that out. Meatsacks too often drive without reference to prevailing conditions, without anticipating what other meatsacks might be about to do, without a good night's sleep, with screaming kids in the back, paying attention to the radio/GPS/SMS/air conditioning knobs rather than the task at hand, with their seating position and mirrors just wrong, with boredom and wandering minds, without indicating, at inconsistent speeds, in the wrong lanes, towing too much for the rating of the vehicle, without maintaining their vehicles properly, often trained by other incompetent meatsacks who propagate the same bad habits.
As good as a human driver most definitely should not be considered the high-water mark.