irresponsible
One of the nicer words I use to describe Zuck himself.
Facebook founder Mark Zuckerberg has told Elon Musk off for scaring people about “AI”. Musk has responded by saying Zuckerberg's understanding is "limited". “I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is …
"If you're arguing against AI then you're arguing against safer cars that aren't going to have accidents" -- Zuckerberg
If that's his logic for favouring AI then he's clearly missing Musk's point entirely. It should be fairly obvious to him that Musk isn't arguing against the kind of AI that would give us safe self-driving vehicles.
Devaluation of the term 'AI' is what causes people to believe that a self-driving car would necessarily possess some "kind of AI".
The true general AI, smarter than humans, that Musk is warning about is so far in the future that neither he nor the large majority of those reading this will have cause to worry about it in their lifetimes. If ever.
If you can't disclose what scared you so badly, Elon, don't expect us to panic just because someone says we should. If you really have a point to make, find some more convincing way to make it, or get used to your Cassandra Syndrome.
Precisely. Especially the personal assistants, which after nearly two decades of advancement are just as frustrating as Microsoft Clippy. This is not AI, it's data whoring sh*t.
I personally doubt that the majority of middle class jobs will be replaced by AI, it's simply that the roles people have will change to reflect the increased capabilities of man & machine as a system. The personal computer revolution has completely re-defined what it's possible to achieve in the workplace, not simply replaced a bunch of typists and clerks like for like.
The world will get more complex, and we'll all need to do a more complex variety of tasks to deliver competitive outputs.
Well, there has been some progress. Two decades on, the AI (or whatever it is) can work out what words you are saying without first having to learn your voice, make a reasonable stab at working out what you mean, and give an educated answer. What they can't do is code a better version of themselves. The day an AI manages to do that and replaces itself with the improved version is the start of when the rate of progress "might" increase substantially.
This is not AI, it's data whoring sh*t.
True.
But your timeline is off. People called Lotus Agenda a PIM in the '80s.
Although it was written with input from people on the AI side at Stanford, I don't think they specifically called it AI.
Icon because I always thought there should have been an option to turn "Clippy" into "Gimpy."
I'm with Zuckerberg here. Currently we have no freaking idea what
* intelligence
* thought
* consciousness
are, and the algos we've come up with so far are very advanced algos for solving very narrow tasks. And they often suck at what they do: much-touted image recognition requires terabytes of data and fails at recognizing very unusual things toddlers have no trouble identifying right off the bat. Many computer scientists reckon that we'll soon have another AI research winter.
There's very little if any progress towards AGI. We may all sleep well for at least 200 more years.
I don't think the current worry about AI is sentient robots who see the human race as flawed and so must eradicate it, but more along the lines of people letting algorithms decide the fate of people's lives in fields like healthcare and insurance, especially with the ever-increasing research into human DNA and how people are trying to use that information to put a percentage on how likely you are to get certain types of cancer in your life.
That would really suck: unable to get life insurance even though you're fit as a fiddle, but the computer says no because it looked at your DNA.
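That "computer says no" failure mode boils down to a hard threshold applied to a predicted risk figure, with no human judgement or appeal path in the loop. A toy sketch (every name and number here is invented for illustration, not from any real underwriting system):

```python
# Toy "computer says no": cover is denied purely on a predicted
# lifetime cancer-risk percentage, regardless of current health.
# The threshold and risk figures are made up for illustration.

def underwrite(genetic_risk_pct: float, threshold: float = 20.0) -> str:
    """Approve or deny life cover on a single predicted risk score."""
    if genetic_risk_pct >= threshold:
        # The applicant may be fit as a fiddle today; the model
        # doesn't care, and there is no one to argue with.
        return "denied"
    return "approved"

print(underwrite(35.0))  # denied
print(underwrite(5.0))   # approved
```

The point isn't that the code is complicated; it's that it is this simple, and yet opaque to the person it affects.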
If we do get wiped out by robots it'll probably be due to apathy not a Terminator style war.
"I for one welcome our robot overlords, because I'm too lazy to fi.... Siri, turn the tv to the funny clips channel."
Yes. And that doesn't have to be anything like AI. An algorithm can be the matrix a judge uses to sentence a shoplifter to ten years for stealing a six-pack of diet Mountain Dew. An algorithm can be the interlocking policies an insurance company uses to deny a melanoma-screening test, a denial which could cost the patient her life. Etc.
"Algorithm" has come to be associated with computers, but it is just a set of problem-solving steps. An algorithm can be a paper flowchart, or a matrix, or a notebook of procedures, no silicon at all. It's giving the abstract procedure precedence over human evaluation and intuition that is risky.
I just went to the doc. Thought I had a duodenal ulcer. He had my records, of course, and his personal knowledge of me, and about 40 years of experience as a GP. He listened to me, palpated my guts, and said "Nope, I think it's a flare-up of irritable bowel syndrome." Gave me some pills which would work against IBS (but not against ulcers), and counseled me on diet and exercise. He was right. That night the stomach pain did not keep me awake until 4:00 AM; in fact, it never came at all. It was the right pill.
Point of the long story: human knowledge, logic, and intuition are very, very hard to beat. AI ain't even at the starting line yet.
Indeed.
In fact, insurance companies were among the big users of "decision tables," which let them code the rules into software while letting staff experts understand the rules they were going to use.
Very few people use DTs, probably because very few courses teach them. They are simple, allow non-IT specialists to understand and review code logic, and can be made Turing complete. Art Lew of the University of Hawaii probably did the most to develop them by improving optimizing techniques for them.
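A decision table is just an exhaustive mapping from combinations of condition outcomes to actions, which is exactly what makes it reviewable by non-programmers. A minimal sketch in Python (the rule names and conditions are invented for illustration, not from any real insurer's table):

```python
# Toy decision table: each rule maps a tuple of condition outcomes
# to an action. A domain expert can audit this table without
# reading any control flow. All names are illustrative.

RULES = [
    # (age_over_60, smoker, prior_claim) -> action
    ((False, False, False), "approve"),
    ((False, False, True),  "approve_with_surcharge"),
    ((False, True,  False), "approve_with_surcharge"),
    ((True,  False, False), "refer_to_underwriter"),
]
DEFAULT_ACTION = "refer_to_underwriter"   # any combination not listed

def decide(age_over_60: bool, smoker: bool, prior_claim: bool) -> str:
    """Look up the first rule matching the condition tuple."""
    key = (age_over_60, smoker, prior_claim)
    for conditions, action in RULES:
        if conditions == key:
            return action
    return DEFAULT_ACTION

print(decide(False, True, False))  # approve_with_surcharge
```

The same table could live on paper as a grid of Y/N columns; the software version just executes it. That separation of rules from mechanism is the whole appeal.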
> ... letting algorithms decide the fate of people's lives in the fields of healthcare & insurance...
And not just there. How about handing over the operation of the justice system to AI?
As long as people such as Zuckerberg keep touting AI (whether or not it's true AI), ill-informed people will want to use it in all sorts of roles it's unsuited for. Maybe we should replace those ill-informed decision-makers (but not with AI, I hope).
Let's see... Can be blinded by little lasers. Cannot detect anything spray painted with VantaBlack.
I wonder what happens when one of those cars tries to back up over parking lot tire rippers painted with VantaBlack? Maybe just paint a STOP sign or a few road markings. A little bit of vandalism could be very interesting.
Why can't they detect anything with VantaBlack? Are you saying self-driving cars will use visible light only? That would be stupid, given that we can make sensors using sonar, radar, and infrared that WOULD detect such a car. It is only idiots like Musk, rushing self-driving technology before it is ready, who are selling cars with perception so limited it can be fooled by tricks of the light.
Even if you do cover your car in it, unless you don't plan to see out of it you'll still have some glass, so it wouldn't be completely invisible at night even painted VantaBlack. Heck, even in rural areas where the Milky Way provides the only light, it is bright enough that you'd still be able to see the contrast of the darker car against the lighter areas reflecting the faint starlight.
I was in such a place last month, and was easily able to see the contrast between a gravel road and grass, and grass and surrounding dark green vegetation, despite having only the Milky Way - no Moon or artificial light of any kind. Obviously it is possible to make a low light optical sensor better than my eyes, even though it would be dumb to limit a car in such a way when that's not necessary.
Vantablack absorbs everything, even lasers.
In fact, Vantablack absorbs more than just visible light, and is equally effective across a whole range of the spectrum that is invisible to the human eye. It is used in applications ranging from space-borne scientific instrumentation to luxury goods, and its ability to deceive the eye opens up a whole range of possibilities in design.
It isn't about painting the car. It is about painting things that must be detected, which requires visible light during the daytime. It is also why the car can be blinded by a little laser. That has been reported on this site.
What would happen if a person wore VantaBlack?
It is invisible to radar, and so are nearly all plastics. Now you have something that is invisible to everything. Not exactly a good thing. As for sonar, it is not good for an Autocar at speed. That is why they aren't using it. They are using electromagnetic waves, not sound waves.
(Autocar == garbage)
"very unlikely it would absorb 100% of say those high powered green lasers idiots point at airplanes."
I think it most likely would suck up such a laser. I have one but I use it to point at stars.
"What would happen if a person wore VantaBlack?"
It would stop being very black in an expeditious fashion (see what I did there?). The VantaBlack FAQ explicitly raises the question of its suitability for garments and promptly answers "no" - the nature of the material (forest of tall thin nanotubes) makes it handle any direct physical contact with anything very poorly. You can shake it all you want but you can't brush it. Your bottom would start becoming non-black as soon as you sat down.
"very unlikely it would absorb 100%"
They claim no such thing. In fact, the second decimal digit of the percentage of light it absorbs in the most visible band is not a nine.
Oh, and forget about getting your hands on it - they don't sell to private buyers, they don't like to sell outside UK etc; basically they don't sell to anybody except maybe NASA (and the Man In Black, natch). Oh, and it's expensive as f##k for larger-than-a-sample surfaces...
Low-light optical sensors are the same as any digital camera. The larger the lens, the narrower the field of view, just like my telescope. Multiple sensors, like a fly's eye, cannot see in the dark; flies do not fly in the dark. Time exposures will work if the car travels at 0.00001 meters per second.
It is impossible for Vantablack to be "invisible" to radar. It may be transparent to radar, but there is plenty of metal in a car which is most assuredly NOT transparent to radar. Are you an investor in VantaBlack or something? Sounds like you have fallen for the marketing hype.
If it were really possible to make something invisible to radar, we'd be coating military planes in it, and they would no longer need stealth designs (which reduce but do not totally eliminate radar signature); you could paint a B-52 with it and make it invisible to radar, if your gullible faith were correct!
Vantablack is not available for sale to private individuals. If a car could be fooled by it, so could a human driver, who could be more easily fooled, as people lack the array of detection devices available to a mechanical object. I think it is an interesting but moot point. The car or person would operate in the same way: they may either detect the presence of the object in relation to its surroundings (why is there a person-shaped hole in front of me?) or not at all, in which case the probably tragic outcome would be the same.
Zuckerberg isn't exactly impartial in this discussion. The more he gets people used to "AI" and algorithms making decisions, the easier it gets for his empire to swallow more data while paying fewer people to work for him.
The point Musk is making is that algorithms are already at work making actual decisions that have real-world effects on people, like refusing loan applications or deciding whether they get parole. The fact that these algorithms are opaque, subject to hidden bias, and not understood well enough to be challenged by those affected by them is a disturbing dystopian "computer says no" scenario. And it's already slouching towards Bethlehem.
Musk makes a good point. Zuck's just scared it's going to hurt his bottom line and creepy data-fetish.
Nothing more.
Nothing less.
If he can get his sticky little mitts on more of your data than creepy Eric Schmidt that's a bonus for him.
He's as much a data fetishist as any member of the Home Office cabal, hence icon.
... and you will find they both have some vested interest to cover.
The question to ask is how much money would Musk lose if someone other than one of his companies developed a real AI? This could well be the root of all his fears. Zuck, on the other hand, appears to understand that a real AI is not going to change his business in a way that would cost him money.
No doubt Musk wouldn't hold back any of his researchers, because he'd tell himself he's smart enough to control it. He's one of those egotistical bastards who assume that anyone who disagrees with their viewpoints "just doesn't understand". We see them everywhere, from academia to business to politics. Yes, he's had a lot of success, but that doesn't automatically make him a genius, any more than Zuck or Jobs were automatically geniuses because of their success.
Why would you use sentient AIs to do mundane tasks, when you don't need them for it? That would be like hiring a Nobel Prize winner to mop your floor.
This is a non-issue, because we don't even have anything close to proper AI yet and we already have computers doing mundane tasks. The only thing preventing machines from replacing every janitor in the world isn't the need for more intelligence, but rather better/cheaper robotics.
If we do get sentient AI we'll still have all the non-sentient machines around to do those mundane tasks like mopping, driving, making crazy tweets about fake news, etc.
It seems like the dystopian duo are talking about two different things. Z seems to be talking about more capable robotics, while M seems to be talking about sentient AI. Z, however, is ignoring the effect of better robotics on employment; as the robotics get better, less staffing is needed. Hopefully, the increased productivity will spur enough economic growth to absorb the displaced workers. These robots are not truly sentient, though they may seem to be; they are only capable in a well-defined problem space. M seems to be worried about more sentient AI, which might decide humans are superfluous.
Why would you use sentient AIs to do mundane tasks, when you don't need them for it?
Because tasks we consider "mundane" are actually extraordinarily difficult.
Consider for example the job of cleaning a room. I don't mean a roomba running over the carpet - I mean proper cleaning, such as picking up fragile objects and dusting them. Identifying different surface types and applying (and removing) the correct cleaning substances, and so on.
"Brain the size of a planet and they ask me to pick up a piece of paper" may be very close to the truth.
... I sort of agree with Mr Musk.
Not because of AI - but because of AS. Artificial Stupidity.
So far (at least, as far as I am aware), systems aren't self-coding. So they're coded by humans. Generally they're coded by humans to take action without human intervention. Thing is, they're coded _by_ humans to do what the humans think should be done in a situation that hasn't happened yet, in circumstances that are not yet known. And lord knows, we humans don't exactly have a perfect track record of making those decisions when events _do_ happen and the circumstances _are_ to some degree known.
So humans code, and they code in line with their own prejudices and assumptions. Hence, AS. And results more potentially Musk-y than Zuck-y - though Sucky might well be the case... :-(.
The problem with automating any job is that once the automation reaches, say, 85% as good (in quality), then that will be considered sufficient and the remaining 15% will be defined away (oh, that was never part of this job / those previous failure statistics are flawed/inappropriate/unavailable).
"As the ex-Google machine learning expert Andrew Ng has sensibly pointed out, fearing a rise of killer robots is like worrying about overpopulation on Mars. You have to get there first."
I suspect that part of Musk's caution is based on the idea that we are unlikely to know exactly when we "get there". For all I know, we're already there, and the genie simply won't be put back in the bottle.
We have certainly already built systems that are inimical to human interests and extremely difficult to dismantle, insofar as they are deeply embedded in social, economic, and political structures and will require little short of a revolution to undo. Maybe Musk was doing the Hari Seldon-esque thing and simply playing out forces over a 20-50 year span, finding that these forces and systems conspire towards the inevitable development of an AI that is both uncontrollable and hostile to (some) human life.
In this matter, I'll trust Musk over Zuckerberg.
Totally agree. With everything from fast-food obesity and stupidity to smartphone ADHD, we are on a steep downhill water slide. Then there is the idea of shooting things automatically with no finger on the trigger. They say life goes on, but it also ends.