"the security is looking 'OK'"
Just don't try buying / selling the car secondhand:
https://www.theregister.co.uk/2018/08/09/connected_car_legal/
Car hacking wizards Charlie Miller and Chris Valasek have turned their attention to autonomous vehicles – and reckon the security is surprisingly good. The duo, who work for General Motors’ robo-automaker offshoot Cruise, told this year's Black Hat USA conference on Thursday that while self-driving vehicles are much less hackable …
Since when is "OK" an acceptable level of security on a potential murder weapon? I would like something with better security than "OK", please. (Preferably no remote connectivity at all to begin with; it's a massive attack surface that is completely and utterly unnecessary.)
So if there is the equivalent of a small server farm in an "autonomous" taxi's boot, where will the passengers' luggage go on the pick-up from the train station, airport, etc.?
.. That's without even addressing issues such as how the "infirm" (be it through age, illness, disability, whatever) get their luggage into a cab without human help if they are physically incapable of lifting a 10 kg suitcase.
.. And where will a wheelchair go, and how does the customer get in from the wheelchair (some wheelchairs fold down; most do not)?
.. As someone who has to "ferry around" disabled relatives a lot, I'm all too aware of how much space wheelchairs & other kit take up, and of the human assistance often needed for disabled people to get in / out of a car (even simple things, like finding the seat belt impossible to fasten).
.. Obviously a digression from the tech, but it's just another tech "advance" that treats the disabled as invisible.
.. Ironically, a disabled relative would be an ideal customer for a fully autonomous car, if it were disability friendly, as they currently rely on me (or other family members) to take them places, or on disability-friendly taxis if we are unavailable.
Erm, different markets there. People who need help with wheelchairs or luggage are orthogonal to the question of who drives them.
My infirmity is my eyesight. I think I'd be reasonably safe (though not legal – despite holding a full, clean licence) driving in good conditions, but lethal in the dark and wet. No problem lugging a heavy load. The fact that self-driving doesn't solve every problem doesn't mean it's not a potentially excellent solution for some disabilities.
People with physical disabilities already have problems coping with normal cars. There are models that can be specially modified for them, but a wheelchair-bound person will never drive a Ferrari.
When autonomous cars have the self-driving and security parts down pat, it will be time to worry about models that can be adapted for the disabled.
Can you please remove the word "kit" from your jargon library? I realize it's a plea to the entire country, but I have an assclown here in the States who has adopted it as part of his vocabulary, and it just makes it obvious that he doesn't have an original thought since he's just quoting everything from El Reg. At least if we can get him to stop using the word, it will seem like he's done the actual research to back up his ridiculous position.
It's a prototype. It will be a lot smaller once it gets into production. My new car has 500 GFLOPS of processing power in its main unit. I don't even know where it is physically located in the car.
That level of compute power would have put it top of the supercomputer league table till 1997, and would have needed a large room to house it.
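That claim roughly checks out against published TOP500 figures. A back-of-the-envelope comparison (the supercomputer numbers below are my own assumptions drawn from the public TOP500 archives, not from the comment):

```python
# Rough sanity check of "top of the supercomputer league table till 1997".
car_unit = 500.0    # GFLOPS claimed for the car's main unit
cp_pacs = 368.2     # GFLOPS: CP-PACS, #1 on the TOP500 in late 1996
asci_red = 1068.0   # GFLOPS: ASCI Red, first machine past 1 TFLOPS in 1997

print(car_unit > cp_pacs)   # beats the 1996 leader
print(car_unit > asci_red)  # but not the 1997 one
```

So the car's main unit would indeed have led the list going into 1997, and been overtaken when ASCI Red arrived.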
"The other serious weak point is external communications. Autonomous vehicles are going to be updating their code, neural network models, and other datasets daily"
This seems to me a very bad idea. The idea that one can hack autonomous vehicle software together using whatever demented development "technology" is currently popular, then fix any problems in production is almost certainly going to lead to injuries, deaths, and enormous manufacturer financial liabilities. Good for lawyers. Not so good for those who share the road with these things.
The phrase "Blue Screen of Death" has a different meaning when the software in question can actually kill people. I think a more measured approach with few and VERY carefully controlled updates will prove to be necessary.
When the US had a manned space flight program, the Shuttle, the code was locked down far in advance and subjected to all sorts of testing to make sure that it wouldn't do anything unexpected in off-nominal conditions. The lockdown was around 18 months in advance, so if you wanted changes, you were SOL unless you wanted to get bumped to the next available flight.
Updating software daily would be fraught with problems. As time goes by there are more and more hardware revs to deal with, and testing needs to be done on all of them to ensure that nothing breaks – like the brakes.
The same goes for map sets. Roads change daily – not "a" particular road, but plenty in a given region – which means that a car fitted out with sensors to re-map streets and highways will be very busy 24/7 in addition to adding new data. Getting cm-accurate data on just London will be a massive project, and construction there is constant and ever changing. A car could be "looking" for curbs and trees but find a load of construction barriers instead to confuse things. That would be a problem if you were using an autonomous taxi to make it back home after a late night with the lads, and frustrating if you had to wait 45 minutes for your ride to show up in the first place on a busy Friday night.
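For what it's worth, the "carefully controlled updates" idea can be enforced mechanically: the car refuses any payload that doesn't authenticate against a vendor-issued manifest. A minimal, hypothetical sketch (all names are made up; HMAC with a placeholder shared key stands in for the asymmetric signatures and staged rollouts a real deployment would need):

```python
# Hypothetical gated OTA update check: only apply an update whose
# (version, payload hash) pair authenticates against the vendor key.
import hashlib
import hmac

VENDOR_KEY = b"demo-key-not-for-production"  # placeholder, for illustration only

def manifest_mac(payload_hash: str, version: str) -> str:
    # Bind the version to the hash so neither can be swapped independently.
    msg = f"{version}:{payload_hash}".encode()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()

def should_apply(payload: bytes, version: str, mac: str) -> bool:
    digest = hashlib.sha256(payload).hexdigest()
    expected = manifest_mac(digest, version)
    return hmac.compare_digest(expected, mac)  # constant-time compare

update = b"new neural-net weights"
good_mac = manifest_mac(hashlib.sha256(update).hexdigest(), "2018.08.1")
print(should_apply(update, "2018.08.1", good_mac))       # genuine update: True
print(should_apply(b"tampered", "2018.08.1", good_mac))  # altered payload: False
```

None of which answers the harder question in the comment above: authentication proves the update came from the vendor, not that it was tested against your particular hardware rev.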
Which gets to a key point: even when talking to a bunch of professional hackers, the people working on these things aren't applying basic security thinking.
No thanks. No thanks until we go a month without an out-of-bounds bug being reported on El Reg. No thanks until we go a month without an article about how easy it is to spoof AI systems similar to the ones used in these cars. No thanks until we get some kind of demonstration (with a straight face) that the security in these systems does NOT completely break down when physical access is achieved. No thanks to IoT with one-ton objects that routinely travel at 60 mph.
At a societal level, I worry most about a system-wide breach that could have a million of these all turn into oncoming traffic at once. At a personal level, I don't want someone to decide that they are tired of what I have to say, so they bomb me with one.
No thanks.
"At a societal level, I worry most about a system-wide breach that could have a million of these all turn into oncoming traffic at once"
Or a way to command every car with a certain system to brake suddenly and brick itself. Imagine that during rush hour on the M25. The tailback would be epic, and every highway service vehicle would be working all night to get the cars removed.
"This means at the first sign that things are awry, the car can be remotely ordered to pull to the side of the road, shut down, and await pickup."
Okay, so all I have to do is make the car think something is "awry", and it will automatically pull over for the impending car-jacking. It is a design feature. Got it.
Coincidentally, I was having a social chat with someone from Audi just this week. No self-driving car from Audi for the time being. They have been doing tests and research with an A7 I think he said, but there are no plans to take this into production yet.
No electric Audis either. VWs all you want from 2020-ish, but Audi are going the hybrid(!) way apparently. Diesel-electric hybrids, to boot.
That's what I have been told anyway. I am not claiming he was right, though he should have a pretty good idea what's going on.
I'm a big EV advocate, but I have to admit that there are still some people who do need greater range or don't have easy access to charging on a regular basis. Switching VW to all-electric is good PR for that brand, given some issues they put themselves in the middle of. Audi can be their umbrella brand for ICEVs and hybrids until it makes sense to convert them to all-electric as well.
Nothing that uses or connects to the net, cloud, VPN, whatever is secure – at least not for long. Second, as Tesla, BMW, Volvo & others recently found out in IIHS real-world testing, self-driving cars are closer to being self-wrecking. And why hasn't there been any real-world testing during inclement weather? The systems may work just fine in the Sonoran Desert at high noon in August, but what about Buffalo in February or Florida in March?