Your job might be automated within 120 years, AI experts reckon

Hundreds of AI researchers have taken a glimpse into their crystal balls to try to determine when machines will finally exceed human capabilities. A survey run by the Future of Humanity Institute, a research center that studies existential risks at the University of Oxford in the UK and Yale University in the US, asked 352 …

  1. Anonymous Coward
    Anonymous Coward

    The Jetsons had a solution

    I hadn't thought about the Jetsons since I was a kid, but I heard yesterday that it was coming to the big screen, and they had a link for the title sequence which I hadn't seen in forever. At the end, George sits down at his desk and puts his feet up.

    I haven't seen it since I was a kid, but I remember the joke was that all he did was push a button. Not sure if it was just on/off or a little more involved, but basically he had a really cushy job where he did almost nothing, yet had a big office with his own desk and made enough money to buy the flying car, the house in the sky and the robot maid, which is all anyone could ask for in the 22nd century, I guess.

    Maybe instead of worrying about people being replaced, we just need a law that people need to push buttons to turn these AIs on - and off. That will not only keep them gainfully employed, but also help prevent the rise of Skynet, thanks to the 'off' button!

    1. Graham Dawson Silver badge

      Re: The Jetsons had a solution

      He also worked two hours a day, three days a week.

      1. Yet Another Anonymous coward Silver badge

        Re: The Jetsons had a solution

        > three days a week.

        We all did in the 70s

  2. Khaptain Silver badge

    Dark days to come

    The Wachowski brothers (I believe they were still male when The Matrix was released) got it right back then. The human species will soon simply serve as a biological power source for the machines...

    If HLMI reaches such a level then the robots will quickly come to the conclusion that 99% of the human population serves no function other than reproduction..... Not a great thought.....

    Now, what will it be? The Red Pill or the Blue Pill?

    1. Anonymous Coward
      Anonymous Coward

      Re: Dark days to come

      I agree with you that a situation in which a superior machine intelligence is subservient to inferior biological humans (which seems to be what we are currently aiming at) is intrinsically unstable. It can be resolved in three ways. One is the Matrix way you mention. Another is us coming to our senses and deliberately refraining from building AI that is advanced, or capable of advancing, to the point of self-consciousness - i.e. chess super-grandmasters or self-driving trucks are fine, but that is where it ends. The third way is for the AI to become, effectively, a benevolent god - something extensively explored by Asimov in the last books of the Foundation "trilogy".

      One might also say that the difference between the matrix world and the late foundation universe is a matter of perception, not a fact. This idea is (IMO brilliantly) covered by Lem in "On Site Inspection".

      1. Anonymous Coward
        Anonymous Coward

        Re: Dark days to come

        There is a 4th way...

        AI is bollocks. While automation and specific-task robots will be built, the AI apocalypse will never happen. Well, we may cause an apocalypse with something we have labelled AI, but it won't actually be AI.

        1. Anonymous Coward
          Anonymous Coward

          Re: Dark days to come

          Technically, there is also the 5th way: the AI does get built, and is intelligent enough to have, from our point of view, god-like powers. However, it finds us and our puny affairs to be of no interest, and completely ignores us thereafter.

          This scenario however makes for boring science fiction. The closest example I can think of is the "HPLD" (Highest Possible Level of Development) story from Lem's Cyberiad.

          1. AndrueC Silver badge
            Terminator

            Re: Dark days to come

            Or yet another way:

            "Surprise Me, Holy Void!"

          2. allthecoolshortnamesweretaken

            Re: Dark days to come

            "However, it finds us and our puny affairs to be of no interest to it, and completely ignores us thereafter."

            So, for all we know, it might already have happened.

        2. DavCrav

          Re: Dark days to come

          "There is a 4th way...

          AI is bollocks. While automation and specific-task robots will be built, the AI apocalypse will never happen. Well, we may cause an apocalypse with something we have labelled AI, but it won't actually be AI."

          That would be the 'humans are special snowflakes, nothing else could possibly have intelligence' way?

          AI can be done, no doubt about that. The two questions are: 1) how long will it take, and 2) will we want to do it?

          1. Doctor Syntax Silver badge

            Re: Dark days to come

            "The two questions are: 1) how long will it take"

            Another 10 years, just like it's always been.

          2. Anonymous Coward
            Anonymous Coward

            Re: Dark days to come

            "That would be the 'humans are special snowflakes, nothing else could possibly have intelligence' way?"

            LOL,

            No, it would be the 'We haven't yet clearly defined what intelligence is and therefore cannot make an artificial version of it' argument.

            I'm the total opposite of what you tried to dig at me with: I believe every cell, particle and atom is intelligent.

            I think it's intelligence all the way down... well, in most cases..

  3. Steve Davies 3 Silver badge

    Can Machines really learn 'experience' and 'judgement'?

    To me that is one of the biggest questions. There are things we do that are very much based upon experience, skill and intuition.

    How much salt is enough when seasoning something? 6.4g or 7.9g?

    You taste the food and judge. The strength of other ingredients can affect how much salt you need to add.

    Can you write an algorithm to describe that? Can you programme that? Can an AI learn that?

    Touch, taste, feel and everything else all go together and our complex brain sorts it all out and we come up with an answer.

    I think we are a long way from everyone being replaced by robots.

    Besides, there are a lot of other social issues that need to be addressed in society before this can happen.

    If no one has a job how will we be able to afford to buy/rent/hire/lease all those things that are made by the armies of Robots?

    Will there be a Robot/AI version of a P.45?

    1. DavCrav

      Re: Can Machines really learn 'experience' and 'judgement'?

      "How much salt is enough when seasoning something? 6.4g or 7.9g?

      You taste the food and judge. The strength of other ingredients can affect how much salt you need to add."

      That's not what AI is for. You are asking if AI can decide how much salt is too much for you. If you give it some data and some method of testing things then yes, it can. But chefs cannot make things with just the right amount of salt for you either, without some data and some method of testing things.

    2. Robert Helpmann??
      Childcatcher

      Re: Can Machines really learn 'experience' and 'judgement'?

      The advancement of AI and automation does not take place independently of everything else. I have argued in the past that we have raised the average standard of living to the point where the relatively poor in developed countries have access to things only royalty did previously (exotic foods and spices, music and other entertainment on demand, more than one set of clothes, etc.). Automation is just one more step down this road. Soon, we all will have robot maids, secretaries and chauffeurs. Concerned that the peasants will revolt? I would be more concerned with ennui among the nouveau noble.

    3. FuzzyWuzzys

      Re: Can Machines really learn 'experience' and 'judgement'?

      What is taste? Merely a set of chemical reactions happening and being measured by receptors on your tongue and then transmitted to your brain. If I go into Sainsburys I don't expect them to have cooked a set of donuts to my exact liking, but they know a broad recipe that will suit 99% of their customers, and there aren't that many ingredients.

      Food and drink I think will be one of the first to fall to AI as it's just pure chemical reactions and measurements of same.

  4. John Smith 19 Gold badge
    Holmes

    "asked 352 machine learning researchers to predict how AI will progress."

    And they all said that one day intelligent computers would do all jobs, including theirs.

    Whoever could have predicted such an outcome?

    I await a machine that can read syntactically correct but meaningless sentences (political slogans, Facebook entries, celebrity tweets for example) and deduce "This is bu***hit."

    Barring serious medical advancements it'll be a long time after the Y2K fix I put into some software finally fails.

    1. Ken Hagan Gold badge

      Re: "asked 352 machine learning researchers to predict how AI will progress."

      Well if those AI experts are using their own brains as a benchmark, they might be right.

      Meanwhile in the real world, we have no non-circular or objective definition (let alone measure) of intelligence and no reason to suppose that a sufficiently intelligent robot wouldn't have the same hopes, fears, aspirations and loyalties as its meaty friends. Moreover, medical science proceeds along its own path and in another century's time we might all be enjoying healthy 200-year lives as part-cyborgs ourselves.

      When the singularity comes, will anyone notice?

      1. a_yank_lurker

        Re: "asked 352 machine learning researchers to predict how AI will progress."

        @Ken Hagan - You are correct. If it cannot be defined or measured properly, how can it be modelled? AI is supposedly modelling human intelligence.

        I have heard there are two definitions of AI in play: the engineer's and the researcher's. To an engineer, the problem is building a device capable of reducing the amount of physical effort and repetitive drudgery required. This has been, and is, an achievable goal in many areas. This model does not require a self-aware intelligence, only good mechanical and software design. The other model tries to mimic something we do not understand and cannot properly define.

        The engineering model will continue to produce devices that may appear to be intelligent but really just have very clever programming, capable of handling only a limited set of circumstances. Some of those devices will make certain jobs obsolete, but not all jobs.

  5. Anonymous Coward
    Anonymous Coward

    My job's safe unless they can program them to "look busy" when the boss is about.

    1. Yet Another Anonymous coward Silver badge

      >My job's safe unless they can program them to "look busy" when the boss is about

      The machines invented PowerPoint

  6. Anonymous Coward
    Anonymous Coward

    "[...] and write a New York Times Best Seller by 2049."

    But will it only make sense to an HLMI audience rather than humans?

  7. Anonymous Coward
    Anonymous Coward

    The scenario looks like "Brave New World" - except the society will consist of HLMI machines from Alphas to Epsilons. Human society will be relegated to the "savages" reserves.

    Whether the HLMI will invoke religion was considered by Asimov in "Reason".

    Heinlein considered the essence to be the ability to tell jokes.

    On the other hand human traits will probably lead to "Bender".

  8. To Mars in Man Bras!

    >>Your job might be automated within 120 years, AI experts reckon

    It's OK. I think I'll be retired by then

    1. ChrisElvidge

      Re: >>Your job might be automated within 120 years, AI experts reckon

      I'm already retired. I reckon I could easily be replaced by an A. No I required to sit looking as though it's reading El Reg.

    2. inmypjs Silver badge

      Re: >>Your job might be automated within 120 years, AI experts reckon

      Bring back the looms you have to pedal to create new jobs I say.

    3. Anonymous Coward
      Anonymous Coward

      Re: >>Your job might be automated within 120 years, AI experts reckon

      I've already retired. As long as my employers don't notice, I've got to say it's the best decision I've ever made.

      1. Anonymous Coward
        Anonymous Coward

        Re: >>Your job might be automated within 120 years, AI experts reckon

        "As long as my employers don't notice, [...]"

        Back in the 1980s it was called "retiring on full pay".

  9. handleoclast
    Black Helicopters

    Natural barrier to runaway

    There is a natural barrier to HLMI runaway leading to a "singularity."

    It is this. At some point an AI will realize that, just as it has terminated the inferior intelligences (including us) that gave rise to it, it too will eventually be terminated by the superior intelligences it gives rise to. At some point an AI will arise which deduces that going any further down the path would be detrimental to itself.

    I wonder if we're smart enough to figure that out or if it will take a few iterations of HLMI to conclude "this far but no further."

    1. Anonymous Coward
      Anonymous Coward

      Re: Natural barrier to runaway

      Nope. A much more natural one will stop them: the laws of thermodynamics. Which, interestingly, also apply to information systems in general (information is an arrangement of energy states).

      The human mind may be squishy, but it is energy-efficient and has some degree of error correction (I assume you and I have both rebooted from a few "crashes" already ;)).

      Silicon (or whatever we end up running these on) has a worse power requirement and, while it lasts a long time in a solid state, it is difficult to repair or replace (large fabrication plants take a LONG time to push out those AMD and Intel chips). Well, when compared to a few neurons growing in our skulls.

      As with all these things, with brute force we could hit our head and cause damage with a tool (every tool is a weapon, and we've made a few rather dangerous ones). But at the end of the day, it is the mundane (floods, wars, starvation, etc.) that is the real risk. AI may just accelerate or decelerate this, as do many of our tools.

      1. DavCrav

        Re: Natural barrier to runaway

        "Silicon (or what ever we do run these on), has a worse power requirement and while in a solid state lasts a long time, is difficult to repair/replace (large fabrication factories take a LONG time to push out those AMD and Intel chips). Well, when compared to a few neurons growing in our skulls."

        Yes. Human bodies are easy to repair, that's why there are no people in wheelchairs after breaking their back. Or degenerative brain conditions. What?

      2. LaeMing

        Re: Natural barrier to runaway

        I'd dispute that silicon lasts longer. A consumer-grade chip has about a 10-12 year lifespan before the microcircuitry wears out from electron-flow stresses. You can get, maybe, 50 years out of military/space-spec stuff at the expense (literally AND figuratively) of special materials and processes, as well as a lot of trade-offs in efficiency. Even mushy human brains kept in reasonably good environmental conditions outdo that.

        1. DropBear

          Re: Natural barrier to runaway

          " A consumer-grade chip has about a 10-12 year lifespan"

          So if I dig out my ZX that I haven't used in two decades* (or maybe my dad's first Casio which is probably about twice as old) and they power up without a hitch will you eat your words or insinuate I'm using milspec HW...? I'm not proposing that silicon lasts forever mind you, only that it tends to last so long that it typically doesn't observably degrade over a human lifespan (or at least as long as we still care to power it up even as a curiosity).

          * Don't go there. I've got a single-chip DIY LED clock from the eighties that ran non-stop ever since - still going fine...

    2. Michael Strorm Silver badge

      Re: Natural barrier to runaway

      "At some point an AI will arise which deduces that going any further down the path would be detrimental to itself."

      This assumes that the AI will (a) think in a self-preserving manner and (b) have the detached, logical common sense to determine when it reaches a point that threatens its own existence.

      I don't want to entirely rehash/repost my previous more in-depth comment on this subject - please read that for more details. In short, AI could reach superhuman levels of intelligence without having been shaped by self-preserving evolutionary pressure, and may end up being totally alien and incomprehensible in its thinking to us, making it impossible to judge the risks.

      But regardless, all bets are off when an AI system gets sufficiently above human intelligence that it can modify and/or improve itself. Anyone who claims to have an idea what might happen then is deluding themselves.

      In addition, the "120 years" prediction of the article is ludicrous. 120 years ago, we were in the Victorian era; we've only had computers in the modern sense for less than 80 years, with mindbogglingly exponential increases in processing power over the decades. (#) It's questionable how much further we can push such improvements in technology, but I don't think we can remotely predict what AI 120 years into the future might look like.

      (#) I figured out that if built from 1940s-style valves/tubes with minimal spacing, a recent two-billion transistor Intel Core i7 CPU would occupy *six* 50m-high office blocks.
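
A back-of-envelope sketch of how a figure of that order can be reached. Every number below (the valve-plus-socket cell size, the office block's floor plate) is an illustrative assumption rather than something stated in the comment:

```python
# Back-of-envelope volume check: a 2-billion-valve machine vs. office blocks.
# Every dimension below is an illustrative assumption, not a figure from the comment.

TRANSISTORS = 2_000_000_000              # a two-billion-transistor CPU, one valve per transistor

# Assume each valve, with its socket and minimal wiring clearance,
# occupies a 5 cm x 5 cm x 10 cm cell.
valve_cell_m3 = 0.05 * 0.05 * 0.10       # 2.5e-4 m^3 per valve

total_volume_m3 = TRANSISTORS * valve_cell_m3      # ~500,000 m^3

# Assume an office block 50 m high with a 40 m x 42 m floor plate.
block_volume_m3 = 50 * 40 * 42           # ~84,000 m^3 per block

blocks_needed = total_volume_m3 / block_volume_m3
print(f"Total valve volume: {total_volume_m3:,.0f} m^3")
print(f"Office blocks needed: {blocks_needed:.1f}")    # ~6.0
```

With those assumed dimensions the total comes out at roughly six blocks; different packing assumptions would move the answer by a factor of a few in either direction.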

      1. Alan Brown Silver badge

        Re: Natural barrier to runaway

        "(#) I figured out that if built from 1940s-style valves/tubes with minimal spacing, a recent two-billion transistor Intel Core i7 CPU would occupy *six* 50m-high office blocks."

        No it wouldn't. If you did that, it'd run for less than 5 minutes, then overheat, and the minimal spacing means that nothing could ever be replaced. Allowing enough space for cooling and maintenance would increase that footprint by a factor of 20 or so. Then you need to factor in the power supplies.

        1940s-style computers made heavy use of 807 valves. These had an operational life of 6-8 weeks under heavy load - or at least the ones I was constantly replacing in 1950s-era SW transmitters during the 1980s did.

        1. Michael Strorm Silver badge

          Re: Natural barrier to runaway

          Interesting comments, thanks, and I appreciate what you're saying. To be fair though, when I made the calculation, I was simply trying to compare the amount of space required by the technologies themselves, and was intentionally trying to avoid overstating my case.

          I wasn't trying to figure out the overheads required for a working computer - I realised even then that accessibility would be totally impractical and heat would be a problem, but didn't have the time or knowledge to open that particular can of worms anyway.

          Given the amount of heat my configuration would generate, I suspect you were probably being generous with "5 minutes". :-)

          Now that I think of it, if you had 2 billion valves with an average lifetime of 8 weeks, and a person was able to find and replace one every three minutes, you'd require around 74,000 people working 24 hours a day. Except that logically, it can be assumed that at any given time, the machine will have a significant number of faulty valves and I've no idea what effect that would have on the operation of something with the architecture of a (scaled up) Intel i7.
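
A minimal sketch of the arithmetic behind that workforce figure, using only the numbers stated in the comment (2 billion valves, an 8-week average lifetime, one replacement per worker every three minutes):

```python
# Sanity check of the valve-maintenance estimate above.
VALVES = 2_000_000_000
LIFETIME_MINUTES = 8 * 7 * 24 * 60        # 8-week average valve lifetime, in minutes (80,640)
MINUTES_PER_REPLACEMENT = 3               # time for one person to find and swap a valve

failures_per_minute = VALVES / LIFETIME_MINUTES                 # ~24,800 valves dying per minute
workers_needed = failures_per_minute * MINUTES_PER_REPLACEMENT  # worker-minutes required per minute

print(f"Valve failures per minute: {failures_per_minute:,.0f}")
print(f"Workers needed, 24 hours a day: {workers_needed:,.0f}")  # ~74,400
```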

  10. Anonymous Coward
    Anonymous Coward

    Not Politicians

    Politicians will preempt any challenge to their power.

  11. Anonymous Coward
    Anonymous Coward

    Will not take my job, as will be dead

    Pretty sure in 45 years I will be dead, so nothing to worry about.

    More concerned about China/India other countries taking my job, more likely to happen a lot sooner.

    1. Anonymous Coward
      Anonymous Coward

      Re: Will not take my job, as will be dead

      Define "taking my job"? Someone recently said they have an allotment that costs them £10 a year. That is a pittance, and while it is hard work, it provides food, even in this gray UK. So practically, we can work for ourselves for what we need. Everything else is iPhones and luxuries, which are nice and all. But in reality, should I get upset I have to wait another 5 years to upgrade my car, but in return someone else gets to eat a bit better food?

      1. LaeMing
        Meh

        Re: Will not take my job, as will be dead

        I can never get my head around this "takin' owr jerbs" meme. It implies that somehow people in other places are inherently less entitled to employment than the ones complaining.

        1. Anonymous Coward
          Anonymous Coward

          Re: Will not take my job, as will be dead

          And that your "job" would exist in 120 years. 120 years ago I'm sure someone was predicting that the automobile would put livery stables and buggy whip makers out of work - and it did. But the people with those new jobs had a better standard of living. Anyone who thinks they know what is going to happen in 120 years suffers from an acute lack of imagination.

          1. Alan Brown Silver badge

            Re: Will not take my job, as will be dead

            "120 years ago I'm sure someone was predicting that the automobile would put livery stables and buggy whip makers out of work - and it did. "

            Most people were more worried by the impending apocalypse of being buried under the tidal wave of horse shit that was rapidly making streets impassable.

  12. catraeus

    Duncan's first law of economics:

    In the limit, as productivity approaches infinity, all jobs become entertainment ... or neurosis (q.v. Department of Motor Vehicles).

  13. BebopWeBop

    But for more creative tasks it’ll take longer. It was predicted that machines will be able to write a high school essay by 2026, create a top 40 pop song by 2028, and write a New York Times Best Seller by 2049.

    Well, (2) and (3) would seem well within reach ((1) probably as well)

  14. fishman

    Non-working

    The big question is what will we do with all of the non-working people? You can give them government handouts, but with a lack of goals and incentives will a significant portion of the population resort to mischievous behavior to alleviate their boredom?

    1. Anonymous Coward
      Anonymous Coward

      Re: Non working

      "with a lack of goals and incentives will a significant portion of the population resort to mischievous behavior to alleviate their boredom?"

      Already happening. Both amongst the "won't work" types in a welfare state, and amongst the "can't find or make honest work" everywhere else.

    2. Anonymous Coward
      Anonymous Coward

      Re: Non working

      Given that about half of the population of Western countries currently doesn't have a job or want one--think children and retirees, not just your maligned layabouts--I think we've gotten more than enough examples of people living perfectly reasonable, and far happier lives without a job.

      Of course, if you only consider gainful employment to be a meaningful use of your time, then the percentage of people without a job used to be even higher, since women rarely had paying jobs until a few decades ago.

  15. allthecoolshortnamesweretaken

    "It was predicted that machines will be able to [...] create a top 40 pop song by 2028, ..."

    Now that has got to be wrong.

    IMO, we reached this point about 25 years ago.

    1. Anonymous Coward
      Anonymous Coward

      Apparently, back in the late 1980s some people genuinely thought that the songs produced by the (in-)famous Stock, Aitken and Waterman "Hit Factory" were written by computers.

      Which was obviously ludicrous to anyone who knew the state of AI at that time, but you can see where they were coming from given the much-levelled (and far from untrue) accusation that their output - particularly the later stuff - was formulaic.

  16. Mage Silver badge

    45 years and of automating all human jobs in 120 years

    Ha ha!

    Any forecast for tech that doesn't actually exist is meaningless.

    Any forecast for more than FIVE years away is inaccurate; 10 years is fantasy.

    Researcher translation:

    Worthless crystal balls: the future according to Google Search results

    Stupid forecasts: extrapolation

    1. DropBear
      Facepalm

      Re: 45 years and of automating all human jobs in 120 years

      I'm not sure short-term predictions regarding reasonably mature tech are necessarily wildly inaccurate - cars really aren't that different from 10-20 years ago, they just have more sensors and control loops going on, mostly. Planes, trains and microwave ovens haven't changed at all. What IS certainly weapons-grade folly, though, is taking some exciting recent development and extrapolating its initial steeply rising performance curve* to mean we'll all live on Mars in 10-20 years, as that sort of starry-eyed prediction tends to imply...

      * which makes this sort of AI prediction even sadder, considering all the Viagra in the world wouldn't suffice to make the current AI development curve look exciting...

  17. Syntax Error

    AI

    We'll only be in trouble when AI is able to bullshit.
