Pedals and wheel in that Google robo-car or it's off the road – Cali DMV

Google's driverless car that eschews a steering wheel and pedals has hit a legal road bump in California. The state's Department of Motor Vehicles has issued new regulations, due to kick in on September 16, that insist all robot cars come equipped with a steering wheel and pedals in case the vehicle's computer literally …

  1. Charles 9

    Kind of raises an intriguing question. Given that each type of driver (human and computer) has some form of failure mode (inattentive human, glitching computer), which should be the default more-reliable case in the event of conflicting input?

    1. Jon 37

      There's been a similar debate in aircraft, with no clear winner.

      A modern plane is flown by a computer which takes instructions from the pilot ("Fly By Wire"). The computer tries to decide if the pilot is doing something dangerous - e.g. trying to do a turn that's so tight that the wings might fall off. Airbus computers simply won't let you do it - the computer overrides the pilots. Boeing computers will seriously protest (sounding lots of alarms etc) but they will let the pilot override them if the pilot pushes hard on the controls.

      A quick Google for "boeing airbus computer pilot" turned up these, but there are many more pages on the subject:

      http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=389x5846986

      http://en.wikipedia.org/wiki/Flight_envelope_protection

    2. Vector

      All I have to say is Egg freckles

    3. Charles Manning

      which should be the default more-reliable case in the event of conflicting input?

      One of the biggest headaches with designing any of these systems is handling the handover from computer control to the "backup" meatsack.

      When everything is going well, the computer will generally be able to do a better job of driving and will even handle extreme control situations better than people do. This is why, for example, some fighter jets that are far too unstable for a human to control fly just fine when the computer is working.

      Unfortunately the Plan B for almost all automated systems (autopilots, ...) is to disengage and hand back control to the meatsack.

      This introduces three major dilemmas:

      1) Loss of situational awareness: the meatsack has not been involved in the control loop and is not sufficiently aware to take over. In the case of an autonomous car, the driver has probably been reading a book, LOLling on Twitter or whatever. Suddenly the control system decides everything is too hard and dumps control in the lap of a driver who is not sufficiently informed to take effective control. By the time the driver works out what is going on, it is too late.

      2) Exceeding human capability: computers can cope with control situations that people cannot; most specifically, they can operate faster and with greater precision. If the computer is giving up, then the chances are a person is incapable of taking effective control. Bad things happen. This happened in an Air China crash some years ago, where a thrust controller had kept tweaking things until it was forced to give up. As a result, the meatsack had no chance of recovering the plane and it crashed.

      Thus, the control system has to be set up to give up when the meatsack still has a chance of coping - largely negating the benefits of the computer.

      But is that the right decision? The computer is likely more skilled and probably more likely to recover from the situation. But if the computer tries for longer and then really does need to hand over, the pilot is in an even worse position.

      Damned if you do, damned if you don't. Not at all easy to come up with a good decision.

      3) Confusion of the control surface: as soon as there is more than one controller (two people, or a computer and a person), there is the opportunity for some control to fall through the cracks.

      A classic example of this is the driver using cruise control for the first time. There has been more than one occurrence of rear-ending the car in front. Why? Well, in the driver's mind they have handed over speed control to the cruise control. Part of speed control is braking when required. Unfortunately the cruise control does not brake when required. Drivers have literally watched the crash happen over a period of seconds dumbfounded that the computer did not slow down.

      Of course it all makes sense in the clear light of day, but under the stress of the event, the driver's brain shuts down some of its thinking ability. (See http://en.wikipedia.org/wiki/Incident_pit for an explanation why).

      In short, mixing human and computer control is a bloody nightmare.

      The only people that will definitely benefit are the lawyers who outnumber engineers 5:1 in California.

      1. Rule of Thumb

        Re: which should be the default more-reliable case in the event of conflicting input?

        Let's not forget that cars have a fairly comprehensive backup plan: slow down or stop. I've seen meatsacks deploy this repeatedly on my commute. The main problem with that plan is that the vehicles behind you might be operated by stupid meatsacks. The sooner cars drive themselves, the better as far as I'm concerned.

        1. Ole Juul

          Re: which should be the default more-reliable case in the event of conflicting input?

          I do have some faith in Google technology, but can they actually make it safe outside of a city environment?

          I'm sure that Google can figure out how to get cars to work together on the road reliably. However, when a car is going down the road, does it stop when there are people at the side? Probably not. Does it stop if those people are toddlers? I am curious how Google intends to handle that one.

          Also, in this rural area we have different road hazards. Rocks keep rolling down the mountainsides and there is debris on most highways. An automated car can probably navigate that, perhaps even more quickly than a human.

          Now what happens when there are lots of deer? Some people get bent out of shape if the deer get hurt. Some get bent out of shape because they get dents in their car, which can happen at quite low speeds. And some just get bent out of shape because a deer through the windshield happened to kill or injure them.

          If deer are on the road, probably no problem. The car will go up close to them and then wait. If it doesn't get really close however, the wait could be quite long.

          If deer are on the side and there is a group of them, one is more than likely to get confused and run right in front of the car at the last moment. Does the automated car consider this? Humans do, and will usually stop to see what the animals are going to do, employing knowledge such as the fact that deer tend to escape to the uphill side, or that young ones tend to run to their mothers. Without some sort of strategy, one could be sitting in one's car waiting for quite a while, and frequently. I'm afraid that human control is essential in a rural situation. If Google can do it, fine, but I'm doubtful.

        2. Anonymous Coward
          Anonymous Coward

          Re: which should be the default more-reliable case in the event of conflicting input?

          The sooner cars drive themselves, the better as far as I'm concerned.

          Disagree. The comparison with airplanes is apt:

          1 - their rules and compliance took literally DECADES to get right, under regulations that are already very tight and globally applied. Given Google's lack of enthusiasm to comply with any rules that it doesn't like or that get in the way of making money, I generally have zero confidence that sanity and safety would win out over profit.

          2 - there are considerably fewer aircraft in each other's vicinity than there are cars. You can diss human drivers all you want, but even the stupid ones are generally capable of handling a lot more variables than a computer. Granted, driving is not *that* onerous a task to automate, but the evidence that an automated car is safer is as yet not available, because there are simply none on the public road yet.

          Comparing a safe test trajectory with the real public road is like assuming that driving in a quiet rural setting equates to trying to get across the Place de l'Étoile (sorry, Place Charles de Gaulle) in Paris just as rush hour is building up. You're playing with people's lives here, so you'd better do this right, and the requirement for a per-car insurance pool is an underhanded way to force companies to pay as much attention to safety and fallback as possible.

          Those who state that the human has no situational awareness seem to start from the position that control is suddenly handed over to an unprepared passenger, but you forget that this may be a deliberate act or a need. A good example would be a hybrid environment where automation is permitted on certain routes like motorways, but manual control is demanded in places like inner cities. Another option is that the car guidance is not allowed to have knowledge of certain facilities (like military locations), which must thus be driven to under manual control (although the reverse may be true too - if the car knows, we no longer need road signs).

          I would suggest that robo-cars should have been actively on the public road for at least 5 years to a decade before you can start taking the human controls out - I find this headlong rushing ahead rather questionable.

          1. John Brown (no body) Silver badge

            Re: which should be the default more-reliable case in the event of conflicting input?

            "I find this headlong rushing ahead rather questionable."

            Me too. It seems to be a disease in the electronics/tech/computer world that bigger/faster/more powerful must always be better and every tech company must keep advancing as quickly as possible, usually at the expense of reliability or completeness of product.

            Unlike with computers, laptops, tablets and most especially mobile phones, people are simply not going to "upgrade" to the latest model of car every year or two. Maybe in 10 or 20 years time, if society changes or is nudged in the direction of not privately owning cars, then maybe people will just accept whichever car turns up at the doorstep when requested.

          2. Anonymous Coward
            Anonymous Coward

            Re: which should be the default more-reliable case in the event of conflicting input?

            Given Google's lack of enthusiasm to comply with any rules that it doesn't like or that get in the way of making money I generally have zero confidence that sanity and safety would win out over profit

            Especially in the light of Google apparently deciding for us which data the DMV and the public should have access to. They haven't even gone live yet and they are already controlling what data third parties get to analyse. Ab-so-lu-te-ly no way - it should definitely not be for Google to decide which data it should hand over - if the DMV doesn't have the expertise to analyse data it should get it from outside, not accept any filtering from Google.

            Of course, it can also REFUSE the application. That is, of course, a very cost-effective way to deal with the matter. Given the above, there may be some doubt that the DMV is really aware of all the facts on which to base a decision, so until it is certain that it knows it all, it should default to safe.

            I guess that is too straightforward a conclusion to stand a chance at all - there are $$$ involved...

          3. Alan Brown Silver badge

            Re: which should be the default more-reliable case in the event of conflicting input?

            "I would suggest that robo-cars should have been actively on the public road for at least 5 years to a decade before you can start taking the human controls out - I find this headlong rushing ahead rather questionable."

            Automation levels on cars have been ramping up slowly for years. My 2003-model car with adaptive cruise control (and braking) is perfectly capable of low-speed autobraking in software, but all such features (including braking applications of more than 25%) were disabled below 25mph on the basis that they weren't tested enough. Fast-forward 10 years and autobraking is rapidly becoming a standard feature - as are 360-degree cameras for parking, which can be repurposed into crash-analysis devices with only minor extra work: a multichannel DVR that also picks up all the CAN data that used to reside only in the airbag computer. (Cars with airbags have had integral black boxes for 20 years - something the insurance industry has been very quiet about.)

      2. Mike Dimmick

        Re: which should be the default more-reliable case in the event of conflicting input?

        @Charles Manning: Air France 447 is a case in which all three of your points occurred. And that occurred with experienced pilots who had been taking shifts in order to ensure they were fresh and alert. They still failed to recognise the situation they were in, applied inappropriate control inputs and crashed the plane.

        Frankly, I think auto-pilot cars are dangerous, as the meatsacks will be tempted not to pay any attention. I also haven't seen any sign that the control software will be designed to proper safety standards, nor any provision for independent, redundant control systems as used in aircraft. Cars may be slower and closer to the ground, but there are many more of them, packed much closer together.

      3. Alan Brown Silver badge

        Re: which should be the default more-reliable case in the event of conflicting input?

        "Drivers have literally watched the crash happen over a period of seconds dumbfounded that the computer did not slow down."

        Having spent most of the last decade driving a car with lidar-assisted adaptive cruise control, I was a little disconcerted when the new vehicle I was trying out _didn't_ speed-match with surrounding traffic - but switching off the cruise and doing it the old-fashioned way worked just fine.

        Cruise controls in the '70s to '90s were pretty dumb affairs and I found them more of a nuisance than anything else outside "open road, straight lines, no other traffic" situations. ACC changed all that and I'd never go back to a dumb mode again (amongst other things, ACC will lower the speed setting automatically if the steering wheel is turned beyond about 30 degrees, and sets the following distance to 2-3 seconds with a bit of fuzz to avoid annoyance when behind an on/off driver).
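
        The ACC behaviour described above could be sketched roughly like this (a toy illustration only - the 30-degree and 2-3 second figures come from the comment, not from any manufacturer's actual control software):

```python
# Toy sketch of the adaptive cruise control behaviour described above.
# All thresholds are illustrative, taken from the comment, and are NOT
# any real manufacturer's values.

def acc_target_speed(set_speed_kmh, lead_gap_s, steering_angle_deg,
                     current_speed_kmh):
    """Return the speed (km/h) the ACC should aim for this instant."""
    target = set_speed_kmh

    # Back the speed off automatically if the wheel is turned hard,
    # as the commenter describes (beyond ~30 degrees).
    if abs(steering_angle_deg) > 30:
        target = min(target, current_speed_kmh * 0.9)

    # Hold a 2-3 second gap to the car ahead, with a bit of "fuzz" so
    # an on/off driver in front doesn't cause constant hunting.
    desired_gap_s, fuzz_s = 2.5, 0.5
    if lead_gap_s < desired_gap_s - fuzz_s:
        target = min(target, current_speed_kmh * 0.95)  # ease off gently

    return target
```

        A real system would run something like this once per control tick, and would of course also command braking, filter sensor noise and rate-limit the output.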

        A lot of the problem with "modern technology" is that implementations are half-arsed - and, as the article says, if something is going seriously wrong, handing back to the meatsack will probably make things a hell of a lot worse (which is why, f'instance, electronic stability programs don't ever automatically disengage - and some are never completely disengaged, no matter what the driver might think).

        As far as liability goes: $5 million of public liability is pretty easy to get. Most householder/business insurance policies come with similar values for between $10-35/year. Insurance companies work from actuarial tables, and so far the automated cars have a far better safety record than even above-average humans. (Being punted from behind whilst stopped isn't a problem with the automated system.)

        As for planes: the stated reasons for crashes aren't always the actual cause. One crash I'm aware of in Myanmar was blamed on the Chinese aircraft having poor brakes - nothing to do with the pilot in command being on his first landing at the airport, landing downwind and touching down around 3,000 feet beyond the threshold of a 5,000-foot runway. The pilot in charge should have taken over and gone around long before it got to that point.

        The French mid-Atlantic crash was arguably blameable on the pilots forgetting one of the fundamental rules of flying - only one set of hands is ever in charge at a time. (The software failure was one of those "pilots would never do this, so we won't program for it" modes - it shouldn't have averaged opposing inputs; it should simply have given the right-hand seat full control.)

        The author is right that mixing human and computer control is a nightmare but a large number of failings come down to the mindset of "this will never happen" and not working out a way of recovering from it (Does Google have an algorithm for "car has been struck by lightning"? if not, why not? It does happen - but given appropriate sensors it should be able to work out that a strike is imminent and lock itself down in preparation for the event.)

  2. Anonymous Coward
    Anonymous Coward

    I hope the insurance requirement applies to all

    I would love to hear the screams of rage from all self-insured companies.

    How much do you think bus, train and plane tickets would have to rise to cover this?

  3. Jess--

    And in tomorrows news...

    Google buys an insurance company

  4. Oldfogey

    Not such a big deal

    I just looked at my car insurance, and I find I'm covered for £10mil for third-party damage. Why so high? Well, some years ago a car had a blowout on the motorway and went off the road into the hangar of a local airport, burning out a whole hangar full of private jets.

    That insurance costs me £99 pa

    1. ThomH

      Re: Not such a big deal

      One of the more surprising things is that car (/auto) insurance seems to be a lot cheaper here in the US _despite_ also needing to include the medical bills of anyone you hit, given the lack of nationalised healthcare. I doubt that $5m of insurance will cost more than a few hundred dollars a month, especially if these cars really are much less likely to have accidents.

      1. John Brown (no body) Silver badge

        Re: Not such a big deal

        "I doubt that $5m of insurance will cost more than a few hundred dollars a month"

        He was talking about $16 million of cover for $160 per YEAR. (Very rough £-->$ conversion; ICBA to look it up and it's nothing which normally concerns me. I have a feeling it's more like £1-->$1.70ish.)

        But that sounds suspiciously cheap even by UK standards, even for third-party-only insurance (the bare minimum). I think most people pay in the order of £1000 per year for fully comp - though I wouldn't know: I've driven company cars only for the last 20+ years, so I don't pay for things like car tax or insurance :-)

        1. rosswilson

          Re: Not such a big deal

          "I think most people pay in the order of £1000 per year for fully comp"

          I only pay around £350 for my fully comp. From talking to friends what I pay is quite typical.

    2. Charles Manning

      Re: Not such a big deal

      Your insurance premium is based on the risk (or at least the perceived risk) of a payout.

      If you tell the insurance company you want a GBP 10M policy that covers you being hit by an asteroid, you'll probably pay less than GBP 50 pa.

      Then ask for the same cover for base jumping and rock climbing.

      Same deal here. The insurance people expect a larger chance of a payout, not just because of the chance of a crash, but because they know the lawyers would be onto this as fast as anything and the courts would award a huge payout.

  5. Old Handle
    Meh

    The manual control requirement is perfectly reasonable, but the insurance requirement is kind of ridiculous. Human-driven cars only need $35,000 worth of liability insurance. And you do have the option of putting up a bond for that amount instead (not that many people actually do). So I don't understand why the requirements need to be so much stricter. Surely it's already been proven that self-driving cars are not 143 times more dangerous!

    1. Henry Wertz 1 Gold badge

      liability insurance

      "The manual control requirement is perfectly reasonable, but the insurance requirement is kind of ridiculous. Human-driven cars only need $35,000 worth of liability insurance."

      For now, I think it's due to the fear of a faulty design just locking up and plowing through... well... more than $35,000 worth of stuff.

      These designs are experimental, after all. What are the failure modes? What should be done in case of a catastrophic fault (for example, if the computer locks up or crashes)? I seriously doubt it'd cause $5 million in damage, or even close to it. But the "big fear" is a faulty design where it just locks at whatever steering angle and accelerator position it was at; I do think they'll take safety precautions that make this unlikely. But does the car just suddenly come to a dead stop? That can be dangerous too if it's in the middle of the road or going around a curve. Does a secondary system try to pull it over to the shoulder? These are things that'll have to be worked out.

      I expect that requiring $5 million of coverage from a commercial provider may be a way to get an insurance company to look at these vehicles as it would before insuring any new make/model of car, and to see if there are ways the insurer suggests to make the car safer that the engineers didn't think of.
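
      The failure-mode questions above (dead stop vs. pulling over) amount to choosing a minimal-risk manoeuvre. A sketch of one such policy - purely hypothetical, not any vendor's actual design:

```python
# Hypothetical fallback policy for a catastrophic computer fault,
# illustrating the options discussed above (dead stop vs. shoulder).
# Not any real vendor's design.

def fallback_action(fault_detected, shoulder_available, in_curve):
    """Choose a minimal-risk manoeuvre once a fault is detected."""
    if not fault_detected:
        return "continue"
    # A sudden dead stop mid-lane or mid-curve is itself dangerous,
    # so prefer the shoulder when a secondary system can still steer.
    if shoulder_available and not in_curve:
        return "pull_to_shoulder"
    # Least-bad remaining option: slow smoothly with hazards on.
    return "controlled_stop_with_hazards"
```

      The hard part, of course, is not this decision table but detecting the fault reliably and keeping a secondary system alive to execute the manoeuvre.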

    2. Nuke
      WTF?

      @ Henry Wertz 1

      Wrote :- "Human-driven cars only need $35,000 worth of liability insurance.

      I'm interested in where you got that figure. What country do you live in? I am in the UK and thought it was several million. I'd be interested in putting up a $35,000 (in GBP equivalent) bond rather than paying my ~300 GBP annual insurance premium. It's a better return than many bank savings accounts.

  6. Liam2

    And then some state like Nevada passes much more friendly legislation that convinces Google to do all their testing there.

    1. 142
      Pint

      Nail on the head. That's exactly what'll happen.

  7. Dave, Portsmouth

    Driving test?

    Surely if the autopilot can pass a standard human driving test, it's 95% of the way there? We don't test human drivers on their ability to cope with stressful situations, deer in the road, etc. - and many don't cope, hence so many road accidents! I think self-driving cars are definitely a big answer for our roads in the future - both for safety and to reduce congestion - but I can foresee them getting bogged down in regulation and public fear. They shouldn't need to be perfect - the alternative isn't perfect either!

    1. ratfox

      Re: Driving test?

      The reason tests for computers are more strict is that computers cannot be expected to have common sense.

      E.g. if the whole road has disappeared due to a landslide, the human is expected to stop; no need to test the situation. But you need to check what the computer will do.

      1. Charles 9

        Re: Driving test?

        I don't believe in common sense. Or rather, I think it's rather not so common because it seems to differ from place to place. In any event, this is something for the programmers and testers to deal with. In essence, they have to BUILD a machine common sense. Train the computer to note that if it cannot locate the road some distance ahead, it should come to a stop before then. If animals (including humans) are on the side of the road, perhaps it should set itself up to take an evasive manoeuvre if necessary: slow down, edge away from them, etc. We generally learned these things ourselves; we just don't remember actually learning them, probably because it was through observation. Similarly, we learned to recognise what various things are. We just need to develop analogues for the driving computers: ways to identify the various things they detect, and the various procedures to use in these situations.
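
        Those rules could be sketched as something like the following (an invented illustration - the braking figure and speed factors are assumptions, not anything Google has published):

```python
# Sketch of "built" machine common sense as described above: stop if
# the road can't be seen far enough ahead; slow and edge away from
# anything alive at the roadside. All numbers are invented for
# illustration and are not from any real self-driving system.

def plan_step(road_visible_m, speed_ms, roadside_hazard):
    """Return (target_speed_ms, lateral_offset_m) for one control step."""
    # Distance needed to stop at ~5 m/s^2 braking, plus a 50% margin.
    stopping_m = speed_ms ** 2 / (2 * 5.0) * 1.5

    if road_visible_m < stopping_m:
        return 0.0, 0.0             # can't see far enough: stop first
    if roadside_hazard:
        return speed_ms * 0.6, 0.5  # slow down and edge away
    return speed_ms, 0.0            # nothing unusual: carry on
```

        The real difficulty, as the comment says, is the perception side - reliably identifying "road", "toddler" or "deer" in the first place - not the rules applied afterwards.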

        1. Anonymous Coward
          Anonymous Coward

          Re: Driving test?

          I don't believe in common sense. Or rather, I think it's rather not so common

          Upvote for that bit alone..

      2. Alan Brown Silver badge

        Re: Driving test?

        "E.g if the whole road has disappeared due to a landslide, the human is expected to stop; no need to test the situation."

        I know (and have seen) humans who would carry on regardless, overconfident in their ability to drive over the rubble in their 4wd.

        They invariably end up having to be rescued.

  8. D Moss Esq

    From the archives, 1999

    http://www.snopes.com/humor/jokes/autos.asp

    QUOTE

    At a computer expo (COMDEX), Bill Gates reportedly compared the computer industry with the auto industry and stated: "If GM had kept up with the technology like the computer industry has, we would all be driving $25.00 cars that got 1,000 miles to the gallon."

    In response to Bill's comments, General Motors issued a press release (by Mr. Welch himself) stating:

    If GM had developed technology like Microsoft, we would all be driving cars with the following characteristics:

    1. For no reason at all, your car would crash twice a day. [Rather like my Android smartphone]

    2. Every time they repainted the lines on the road, you would have to buy a new car.

    3. Occasionally, executing a manoeuver such as a left-turn would cause your car to shut down and refuse to restart, and you would have to reinstall the engine. [Rather like my Android smartphone]

    4. When your car died on the freeway for no reason, you would just accept this, restart and drive on. [Rather like my Android smartphone]

    5. Only one person at a time could use the car, unless you bought 'Car95' or 'CarNT', and then added more seats.

    6. Apple would make a car powered by the sun, reliable, five times as fast, and twice as easy to drive, but would run on only five per cent of the roads.

    7. Oil, water temperature and alternator warning lights would be replaced by a single 'general car default' warning light.

    8. New seats would force every-one to have the same size butt.

    9. The airbag would say 'Are you sure?' before going off.

    10. Occasionally, for no reason, your car would lock you out and refuse to let you in until you simultaneously lifted the door handle, turned the key, and grabbed the radio antenna. [Rather like my Android smartphone]

    11. GM would require all car buyers to also purchase a deluxe set of road maps from Rand-McNally (a subsidiary of GM), even though they neither need them nor want them. Trying to delete this option would immediately cause the car's performance to diminish by 50 per cent or more. Moreover, GM would become a target for investigation by the Justice Department.

    12. Every time GM introduced a new model, car buyers would have to learn how to drive all over again because none of the controls would operate in the same manner as the old car.

    13. You would press the 'start' button to shut off the engine.

    UNQUOTE

    1. Anonymous Coward
      Anonymous Coward

      Re: From the archives, 1999

      Can't imagine why you got downvotes for a joke. Fandroids, maybe?

      1. hplasm
        Windows

        Re: From the archives, 1999

        As if anyone would run an autonomous car on Windows, never mind Google.

    2. Dig

      Re: From the archives, 1999

      You haven't got a Nexus 4 by any chance? Mine's constantly freezing with a blank screen, requiring me to turn it off then back on.

      Thankfully a long press of the off button always seems to work; this must be either hardware-controlled or have a separate controller.

      1. D Moss Esq

        Re: From the archives, 1999

        No, a Sony Xperia. Restarting it involves taking the back cover off and jamming a 2" No.8 into the reset hole.

      2. Alan Brown Silver badge

        Re: From the archives, 1999

        "You haven't got a Nexus 4 by any chance? Mine's constantly freezing with a blank screen, requiring me to turn it off then back on."

        The usual reasons for Android phones playing up are filesystem corruption or a full filesystem.

        Ext4 is tough, but the default mount options leave it vulnerable (there's a speed/reliability tradeoff and phone makers are focussing on the former). If you can convert to f2fs, things are generally a LOT more reliable (be prepared for advanced geekery to get there).

    3. Anonymous Coward
      Anonymous Coward

      Strangely, that is EXACTLY what happened..

      13. You would press the 'start' button to shut off the engine.

      ANY keyless car already does this. As a matter of fact, there have already been instances where it was proven impossible to kill the engine in an emergency - and now we want to take ALL controls out of the driver's hands?

      I will only ever sit in an automated car where there is a big fat handle labelled ABORT that does something MECHANICAL to kill the damn thing and open the door. I don't care if it's behind glass and subject to a gazillion-dollar fine if abused, but it should work, and it should always work, whether the car is standing still or moving at 70 mph - because the sensors may get it wrong too.

      If you trust computers and automation that much you haven't been working in IT long enough :)

      1. Nuke
        Meh

        Re: Strangely, that is EXACTLY what happened..

        Wrote :- "I will only ever sit in an automated car where there is a big fat handle labelled ABORT that does something MECHANICAL to kill the damn thing and open the door.

        Same here. My experience with most computer-controlled things (operating systems for a start, then word processors as another example) is that I very soon find I have a scenario that the programmer never thought of, or assumed would never be required. Banks' automated help lines are another example ("Press 1 to increase your overdraft, press 2 to find out your balance, press 3 to set up a Direct Debit, etc.") - I am never calling about anything they have thought of.

        So we are to believe that those who program automated cars have thought of every situation that might arise on the road - orders of magnitude more situations than could arise with a bank account?

      2. normcf

        Re: Strangely, that is EXACTLY what happened..

        Perhaps an ejector seat, but don't try it in a tunnel.

  9. phil dude
    Megaphone

    amazing fud...

    It is amazing, for a tech-aware population, just how much FUD is being subscribed to.

    Ironically, FOSS is one place where all robocars could save themselves a world of hurt, because once something is found to be reliable, it can be included everywhere. We can dream...

    Everyone who thinks that robocars are going to be *worse* than humans is just plain ignoring the fact that human drivers cause carnage. What seems to be the problem is that humans' perception of statistics always tends towards "it will not happen to me" and "I'm a good attentive driver; everyone else is terrible".

    Actuaries actually spend their lives calculating this with real numbers. It is as close to empirical data as possible, in light of the potentially infinite ways in which "things can go wrong". If it is genuinely risky, they will tell us, I am sure...

    One can imagine that the major risk of driving is the other drivers. That is at least one problem that gets solved by robocars.

    One of the very public things Google has made clear is that racking up stress-free miles is part of its strategy to work out the bugs. I think we can agree it is non-trivial tech?

    The elephant in the room is not that we will FAIL to get the cars working safely. We will succeed, and then you will have restrictions placed on where the car will take you, set by whichever <insert power structure> doesn't profit from it. E.g. taxi drivers (competition), the police (tickets), eco-nuts (how much), local government (loss of parking revenue), the 0.1% (forget gated communities; I'll bet certain cars will not be allowed within 50 miles).

    I cannot wait for them to become common. Mainly because too many people in society are unable or have lost their ability to drive, and this would help redress the balance.

    Like everyone else, I am keen to see how this turns out...

    P.

    1. Alan Brown Silver badge

      Re: amazing fud...

      If robocars become as ubiquitous as hyped, the likely scenario is that

      1: Engaging manual mode will require a license which is significantly harder to obtain than current requirements, and you'll have to prove competence regularly.

      2: Manual mode will set a beacon, causing robocars to take extreme care around manually controlled vehicles.

      3: Engaging manual mode on certain roads without a damned good reason will be illegal.

      Plenty of SF has talked about cars autopiloting on expressways and only going over to manual on "surface streets" - motorways, etc. are likely to become the first "robocars only" zones.

  10. Anonymous Coward
    Anonymous Coward

    Next

    Could you use a driverless car if you were intoxicated?

    Or would a requirement for a steering wheel and brake make that illegal?

    How about age - could a kid too young to operate said wheel and brake be sent off to Grandma's house in one?

  11. david 12 Silver badge

    Parts are in place

    A true driverless car requires 3 things:

    A mapping system, like google maps, so that it can plot routes.

    A control system, like Siri, Skyvi, Cortana, or Google Now, to take routing commands

    A lane-keeping / cruise control system like those on expensive cars

    --all at an affordable price. If you've used any of those, you know that at present it's "almost there", and looks like being that way for some time yet. This is still a few years off competing with Ford/Toyota/BMW.
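That three-part breakdown can be sketched as a composition of interfaces. A toy illustration only - every class and method name here is invented for the sketch, and none of it is a real vendor API:

```python
# Hypothetical sketch: a driverless car as three cooperating subsystems.
from dataclasses import dataclass


@dataclass
class Route:
    waypoints: list  # e.g. (lat, lon) pairs produced by the mapping system


class MappingSystem:
    """Plots routes - the stand-in for something like Google Maps."""
    def plot_route(self, origin, destination):
        # A real system would return many intermediate waypoints.
        return Route(waypoints=[origin, destination])


class VoiceControl:
    """Turns a spoken command into a destination - the Siri/Cortana role."""
    def parse_command(self, utterance):
        # Naive parsing: "take me to X" -> "X"
        return utterance.replace("take me to ", "").strip()


class LaneKeeper:
    """Follows a route, staying in lane - the cruise-control/lane-assist role."""
    def follow(self, route):
        return f"driving via {len(route.waypoints)} waypoints"


class DriverlessCar:
    """Glue layer: voice command -> route -> lane-keeping."""
    def __init__(self):
        self.maps = MappingSystem()
        self.voice = VoiceControl()
        self.lanes = LaneKeeper()

    def go(self, current_location, utterance):
        destination = self.voice.parse_command(utterance)
        route = self.maps.plot_route(current_location, destination)
        return self.lanes.follow(route)


car = DriverlessCar()
print(car.go("home", "take me to the station"))  # -> driving via 2 waypoints
```

The point of the sketch is the comment's own argument: each subsystem exists today in some form, but getting all three integrated, reliable, and affordable in one vehicle is the hard part.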

  12. Adam Inistrator

    turing test?

    Can an inspector tell whether it is driven by a human or not after being in the car for an hour with a blind between driver and passenger - or something like that?

    Also, autoautos should be bright orange initially. Over 10 years we will adapt as we learn.

  13. IWVC

    Legal issues and insurance

    Been discussing driverless cars down the pub with former colleagues, all of whom worked as engineers and/or administrators at a government department responsible for vehicle and road safety. One of us has actually experienced being driven in a fully autonomous car in Germany. He was in the normal driving position (on the left) without any controls. However, the car was originally built for the UK market and had a driver with full controls on the right side....

    That aside, he was favourably impressed and believes the technology is viable. The rest of us are more cynical and wonder how driverless cars comply with the law. If one has an accident or contravenes regulations, who is liable - the owner, someone in the vehicle nominated as "driver", or the programmer who wrote the software controlling the vehicle? (In the UK, many "fixed penalty" fines issued by autonomous systems such as speed cameras are directed at the vehicle owner, and it is the responsibility of the owner to prove that someone else was driving at the time. However, note that, as a certain ex-government minister discovered, lying to dodge a fine is fraught with danger.)

    In Europe the Geneva and Vienna conventions on road traffic require vehicles to have drivers in control so the California regulation changes are not new.

    On insurance liabilities, $5 million is not that great. In the 1970s, Ford in the US cynically concluded that, as the likely cost of legal claims against the company for deaths and injuries caused by their Pinto model bursting into flames after a rear-end impact was substantially less than the cost of redesigning the car to cure the known fault, they would pay the claims and not redesign it. This logic went somewhat wrong when a court, once made aware of this company policy, awarded $125 million in damages to the badly burnt survivor of an impact. Although this was later reduced to, I think, $3.5 million, that still has to be considered in terms of 1970s values and is much greater than $5 million now.
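A back-of-envelope check supports the comparison. The inflation multiplier below is an assumption (roughly the US CPI change from 1978 to the mid-2010s), not an official figure:

```python
# Rough conversion of the reduced Pinto award into today's money.
award_1978 = 3.5e6       # the reduced damages award, in 1978 dollars
cpi_multiplier = 4.8     # assumed 1978 -> mid-2010s inflation factor (approximate)

award_today = award_1978 * cpi_multiplier
print(f"${award_today / 1e6:.1f} million")  # roughly $16.8 million

# Even the *reduced* 1970s award dwarfs the $5m insurance requirement.
assert award_today > 5e6
```

So a single serious injury claim from that era, inflation-adjusted, would already exceed the $5 million bond three times over.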

    In the UK you can legally drive without insurance provided you deposit £500,000 with the government. I'm not sure what happens if a claim for more than that amount is made against someone using this provision.

  14. crediblywitless

    Google will presumably be looking at buying a motor insurance company, then.

    1. IWVC

      Re: Technically viable and Google insurance company

      They may have to, as current UK companies don't seem to want the business:

      http://t.cars.uk.msn.com/features/can-you-get-insured-on-a-driverless-car#image=1

      More background on driverless cars here:

      http://www.technologyreview.com/featuredstory/520431/driverless-cars-are-further-away-than-you-think/

  15. Pascal Monett Silver badge

    Re: "the technology is viable"

    Of course it is viable. We will have driverless cars in the future, I have no doubt of that.

    Some people here are comparing this technology with their own experience in IT. I'll wager that none of those people have had a hand in aircraft AI development. As I work in IT, I can understand that: generally speaking, nobody wants the average programmer anywhere near an application that is supposed to control a vehicle that contains people, or goes near people. Given the generally poor level of exception handling and the very limited foresight of most programs, it would indeed be suicide to leave such a task to the average developer.

    I am confident that the automotive industry has an enormous amount of data and experience in car behaviour, and I am convinced that Google is not learning everything from scratch. Google must have experienced automotive consultants on this project, and I am certain that Google has a comprehensive list of use cases to test with its autoAI.

    Not that I think that Google is a saintly organisation that is doing all this out of the pureness of its heart. We all know now that Google is an advocate of DRM, on top of being the most common spook in our lives. No, if I think that Google is doing its best to make a truly safe, automated car, it is because it would be commercial suicide if it put a half-assed solution on the road that started running over kids.

    Not even the fortune of a Brin would avoid prison for that. It would be the ruin of several very wealthy billionaires, and I don't think they would like the idea of being ruined and in jail.

    I look forward to a future where Google knows where everyone goes before they get there, in order to better target the relevant ads to us - because it's "what the customer wants" (the customer, in this case, being the companies that want eyeballs on their ads, of course).
