Relax, Amazon workers – OpenAI-trained robo hand isn't much use (well, not right now)

Human hands are surprisingly dexterous: they can knit clothes, stuff delivery packages with things, play the piano, and so on, albeit with practice. Yet if you're worried machines are going to take these pleasures away from us, rest assured we mortals can, for now, pick up these skills faster than robots can, judging from the …

  1. Destroy All Monsters Silver badge
    Windows

    Excellent (what's with the trans blue-hair style?)

    The researchers hope that this will eventually lead to progress in building robots that can cope with our volatile and mutable reality while helping humans with chores at home and at work.

    It's a hand! Animated by a neural network, which is a thing that does its processing at roughly insect-level capability.

    There is a whole (situated) robot, to which it must be attached, that doesn't yet exist. (Cue Miles Bennett Dyson in the Cyberdyne vault.)

    There is much to do.

    As Rodney Brooks writes:

    Consider AlphaGo, the program that beat 18-time world Go champion Lee Sedol in March of 2016. The program had no idea that it was playing a game, that people exist, or that there is two-dimensional territory in the real world; it didn't know that a real world exists. So AlphaGo was very different from Lee Sedol, who is a living, breathing human who takes care of his existence in the world.

    I remember seeing someone comment at the time that Lee Sedol was supported by a cup of coffee, and AlphaGo was supported by 200 human engineers. They got it processors in the cloud on which to run, managed software versions, fed AlphaGo the moves (Lee Sedol merely looked at the board with his own two eyes), played AlphaGo's desired moves on the board, rebooted everything when necessary, and generally enabled AlphaGo to play at all. That is not a Super Intelligence, it is a super basket case.

    We need more than this (unless, for example, you are doing task-oriented kill problems for the military):

    Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution, Judea Pearl, 2018-01-15

    Current machine learning systems operate, almost exclusively, in a statistical, or model-free mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference tasks. To demonstrate the essential role of such models, I will present a summary of seven tasks which are beyond reach of current machine learning systems and which have been accomplished using the tools of causal modeling.

    (Also get Judea Pearl's "The Book of Why", it's full of fun)
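
    For readers who haven't met the do-operator, a minimal illustration of the "interventions" point in that abstract, using a toy confounded model that is my own invention rather than anything from Pearl's paper: conditioning on X and intervening on X give different answers, and a purely statistical fit of P(Y|X) only gives you the former.

```python
import random

# Toy structural causal model with a confounder Z -> X and Z -> Y, plus X -> Y.
# Purely illustrative; not from Pearl's paper. It shows why observing X = 1
# (conditioning) and forcing X = 1 (intervening, Pearl's do-operator) differ.

def sample(do_x=None):
    z = random.random() < 0.5                                  # hidden confounder
    x = do_x if do_x is not None else (random.random() < (0.8 if z else 0.2))
    y = random.random() < (0.3 + 0.3 * x + 0.3 * z)            # Y depends on X and Z
    return x, y

N = 100_000

# Conditioning: among naturally occurring X = 1 cases, how often is Y = 1?
obs = [sample() for _ in range(N)]
p_y_given_x1 = sum(y for x, y in obs if x) / sum(x for x, y in obs)

# Intervening: force X = 1 for everyone, which breaks the Z -> X arrow.
intv = [sample(do_x=True) for _ in range(N)]
p_y_do_x1 = sum(y for _, y in intv) / N

print(round(p_y_given_x1, 2), round(p_y_do_x1, 2))  # roughly 0.84 vs 0.75
```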

    1. Anonymous Coward
      Anonymous Coward

      Re: Excellent (what's with the trans blue-hair style?)

      Hannah Fry, one of BBC Radio 4's regular mathematicians, made a very good point. What we've actually seen in recent years is a revolution in statistics on large datasets, which many people are describing as AI (her point was something like that, apologies if off the mark).

      What I think is going on is that there are a ton of researchers, software engineers and companies using the AI label to secure research grants, salaries, investment and customers. AI is an easier sell than "advanced statistics"...

      It doesn't bode well for the future. If you sell something as being based on statistics, people intuitively know that it is limited in some way; the answers and actions are probably correct, but not guaranteed.

      Sell the same thing as AI and people currently expect a perfection they're not going to get. They may even change road laws and start building transport infrastructure before realising it's a con.

      People are already disenchanted by the rubbish results returned by all search engines, the myth of self-driving cars, the uselessness of "expert" medical systems, etc.

      1. Anonymous Coward
        Anonymous Coward

        Re: Excellent (what's with the trans blue-hair style?)

        Well said.

        IMHO, current so-called AI is far from the definitions laid down by several generations of SF writers, some of whom were scientists.

        They are in many cases glorified 'DSS': Decision Support Systems.

        Real AI has to be capable of many layers of fuzzy logic, the sort of thing we do all the time without thinking. It has to be able to infer what might happen and handle events that are not in its rulebase. The...

        What if there is a bit of this, a tad of that, a touch of something else and a following wind?

        Could anyone define what 'a bit of this', 'a tad of that' and 'a touch of something else' mean to a computer? But we do it all the time, without thinking. Just think about how a chef creates a dish: if a spice is not as strong as the last batch, the chef will taste it and compensate automatically. Eventually a machine will be able to do that, but the little touches of flair will be missing.
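
        For what it's worth, fuzzy logic's answer is to give each of those vague phrases a membership function over an amount, and to let rules combine them. A minimal sketch, in which the membership ranges and the spice-compensation rule are entirely made up for illustration:

```python
# Minimal fuzzy-quantity sketch. The ranges below are invented for illustration,
# not taken from any real recipe or control system.

def triangular(x, lo, peak, hi):
    """Triangular membership function: 0 outside [lo, hi], rising to 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

# Degree to which a quantity (in grams, say) counts as each vague phrase.
def a_touch(grams): return triangular(grams, 0.0, 0.5, 1.0)
def a_bit(grams):   return triangular(grams, 0.5, 1.5, 3.0)
def a_tad(grams):   return triangular(grams, 2.0, 4.0, 6.0)

# A chef-style compensation: if this batch of spice is weaker than the last,
# scale the amount up in proportion (purely illustrative).
def compensate(nominal_grams, strength_ratio):
    return nominal_grams / max(strength_ratio, 1e-6)

print(a_bit(1.0), a_tad(1.0))   # 1.0 g is halfway to "a bit" and not at all "a tad"
print(compensate(1.5, 0.8))     # weaker spice -> use about 1.9 g instead of 1.5 g
```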

        1. deive

          Re: Excellent (what's with the trans blue-hair style?)

          Yeah, all of what people are calling A.I. is machine learning.

          This is interesting, new(ish) and quite different from the expert system stuff, but is a long way off A.I.

        2. onefang

          Re: Excellent (what's with the trans blue-hair style?)

          "The sort of thing we do all the time without thinking."

          We are thinking about these things, though, just at a lower level. A robot has to think hard about walking on two legs, something that a human seems to do "without thinking". Take away the human's brain and they'll suddenly forget how to walk. And how to breathe, ...

      2. Korev Silver badge
        Boffin

        Re: Excellent (what's with the trans blue-hair style?)

        I'm a big fan of Dr Fry; her programme Contagion was great. It even had a peer-reviewed paper accompanying it.

  2. John Smith 19 Gold badge
    Unhappy

    "we don’t have an entirely accurate model of the hand, "

    Because the real world is not precise.

    Consider however how long a human takes to learn these fine motor skills.

    It's called infancy.

    But any system that cannot operate unless everything is perfect is in big trouble.

    1. jmch Silver badge

      Re: "we don’t have an entirely accurate model of the hand, "

      "Consider however how long a human takes to learn these fine motor skills.

      It's called infancy."

      An infant can learn to manipulate that cube within 2-3 years, to a far higher proficiency than the software reached after 100 years of simulated training.

  3. DrBobK
    Headmaster

    Artificial Behaviour

    This stuff is far more interesting than 'AI' - in inverted commas because, as the comments above say, 'AI' isn't simulating intelligence. Hannah Fry was right: current AI is statistics. In the '80s some statisticians pointed out that back-propagation of error in multilayer perceptrons (the de rigueur AI of the time) was just an implementation of a statistical procedure called projection pursuit regression (super-duper regression).

    Aiming for more realistic goals than intelligence, such as learning to behave efficiently in a complex environment (as the hand people were doing), is to me an intellectually much more interesting goal. Reinforcement learning with temporal discounting (pioneered by Rich Sutton and Andy Barto in the '80s, to much less fanfare than the back-prop business) certainly looks like the way to achieve it, and likely the way that we and animals do it, especially when combined with a system that learns to optimise the representation of the environment from which variables are fed into the learning system. (I think I wrote something saying this in the '80s, after which I gave up biologically relevant neurocomputation and did real human neuropsychology, working with patients instead.)
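
    For readers who haven't met it, a minimal illustration of that Sutton & Barto-style idea: a tabular TD(0) value update with a temporal discount factor. The toy chain environment and all the constants are invented for illustration and have nothing to do with the OpenAI hand.

```python
import random

# Tabular TD(0) with temporal discounting on a toy 5-state random-walk chain.
GAMMA = 0.9       # discount factor: future reward is worth less than immediate reward
ALPHA = 0.1       # learning rate
N_STATES = 5      # states 0..4; reaching state 4 pays +1 and ends the episode

V = [0.0] * N_STATES   # estimated value of each state

def step(state):
    """Move left or right at random; reward only on reaching the final state."""
    nxt = max(0, min(N_STATES - 1, state + random.choice([-1, 1])))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(5000):
    s, done = 0, False
    while not done:
        s2, r, done = step(s)
        target = r + (0.0 if done else GAMMA * V[s2])
        V[s] += ALPHA * (target - V[s])    # TD(0): nudge V[s] toward the discounted target
        s = s2

print([round(v, 2) for v in V])   # values of states 0..3 rise toward the goal; the terminal state stays 0
```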

  4. jmch Silver badge

    80 seconds?

    "by success, they mean "the number of consecutive successful rotations until the object is either dropped, a goal has not been achieved within 80 seconds, or until 50 rotations are achieved.""

    An adult could match a pattern in probably under 2 seconds, so allowing the robot 80 seconds per rotation seems pretty excessive. How many 'real-world' consecutive 'successes' would it manage if you gave it a more realistic limit, say 5 seconds?
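
    For a sense of how much that limit matters, here is roughly how the consecutive-success count could be recomputed under a tighter per-rotation budget. The per-rotation timings below are invented; the article doesn't give them.

```python
# Count consecutive successful rotations under a per-rotation time limit,
# stopping at a drop or at 50 rotations, per the stated success criterion.
# The example timings are made up purely to show the effect of the limit.

def consecutive_successes(rotation_times, dropped_at=None, time_limit=80.0, cap=50):
    count = 0
    for i, t in enumerate(rotation_times):
        if dropped_at is not None and i >= dropped_at:
            break                  # object was dropped
        if t > time_limit:
            break                  # this rotation took too long
        count += 1
        if count >= cap:
            break                  # 50 rotations reached
    return count

trial = [3.1, 4.8, 12.0, 2.4, 65.0, 7.2]                 # hypothetical seconds per rotation
print(consecutive_successes(trial, time_limit=80.0))     # 6 under the 80 s budget
print(consecutive_successes(trial, time_limit=5.0))      # 2 under a 5 s budget
```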

  5. Korev Silver badge
    Boffin

    In other words, in the simulation it did fine – but with the effects of gravity, imperfections in the mechanisms, and other real world effects, the software turned into a butterfingers. Indeed, during testing, the robotic hand broke down dozens of times.

    Variables

    The machine-learning software was trained in a range of simulated environments where some of the variables such as surface friction, the size of the object, lighting conditions, hand poses, textures, and even the strength of gravity were changed randomly. The idea was to at least attempt to prepare the model for the unpredictable universe in which we live.

    Surely gravity is a constant force? Or am I missing something obvious?

    1. e_is_real_i_isnt

      Gravity is a local constant, but acceleration, being the sum of gravity and other motion, is not, as anyone who has sipped from an open cup while riding in a car over a potholed road can attest.

    2. DropBear
      Trollface

      "Surely gravity is a constant force?"

      Maybe they were angling for a NASA / ISS-Robonaut grant...?

    3. jmch Silver badge

      "Surely gravity is a constant force? Or am I missing something obvious?"

      I'm sure a human could manipulate that cube in 0.8 or 1.2 g after a couple of minutes of adaptation. I think they trained with different gravities to make the model more robust.
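
      That randomisation is easy to picture in code. A rough sketch of per-episode domain randomisation, with parameter ranges that are my own invention rather than OpenAI's actual values:

```python
import random

# Domain-randomisation sketch: each training episode gets its own perturbed physics
# and rendering. The ranges below are invented for illustration only.

def sample_sim_params():
    return {
        "gravity_m_s2":    random.uniform(9.0, 10.6),    # nominal 9.81, jittered
        "cube_size_m":     random.uniform(0.045, 0.060),
        "friction_coeff":  random.uniform(0.5, 1.2),
        "light_intensity": random.uniform(0.6, 1.4),     # affects the vision model only
    }

def run_training_episode(params):
    # Placeholder for "configure the simulator with params and roll out the policy".
    print("training with", params)

for episode in range(3):
    run_training_episode(sample_sim_params())
```

      A policy that only ever sees 9.81 m/s² and one friction value can overfit to those exact numbers; jittering them every episode is what pushes it toward something that survives the messier real hand.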

  6. Flakk
    Joke

    PPO thus uses reinforcement learning, and Dactyl learned the best strategies to manipulate the cube by chasing points as it completed tasks – with a five-point bonus for success and a 20 point penalty for failure.

    Point penalty, pfah. They should be teaching these robot hands properly, with swift raps across their knuckles with rulers when they fail!
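
    Ruler raps aside, the scoring described in the article amounts to a simple shaped reward. A toy restatement of the +5/-20 scheme; the "close enough" angle threshold is an assumption, not from the article:

```python
# Toy reward function restating the article's scheme: +5 for reaching the target
# orientation, -20 for dropping the cube. The angle tolerance is hypothetical.

SUCCESS_BONUS = 5.0
DROP_PENALTY = -20.0
ANGLE_TOLERANCE_RAD = 0.4    # hypothetical "close enough" threshold

def reward(angle_to_target_rad, dropped):
    if dropped:
        return DROP_PENALTY
    if angle_to_target_rad < ANGLE_TOLERANCE_RAD:
        return SUCCESS_BONUS
    return 0.0               # no points until the requested rotation is completed

print(reward(0.2, dropped=False))   # 5.0: cube reached the requested orientation
print(reward(1.1, dropped=False))   # 0.0: still rotating
print(reward(0.9, dropped=True))    # -20.0: butterfingers
```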

  7. Mike 16

    In other news

    Perhaps we should consider other paths to cheaper, more pliant workers. Or not:

    Robot Orangutan Vs Wild Orangutan Sawing Duel

    https://www.youtube.com/watch?v=YFR4a9vcri4
