Mything the point: The AI renaissance is simply expensive hardware and PR thrown at an old idea

For the last few years the media has been awash with hyperbole about artificial intelligence (AI) and machine learning technologies. It could be said that never, in the field of computer science, have so many ridiculous things been said by so many people in possession of so little relevant expertise. For anyone engaged in …

  1. Denarius
    Pint

    at Last

    well put sir!

    1. Adrian 4

      Re: at Last

      tbf, el reg has been saying this for a while.

      I would love to hear a comment on the state of the game from an actual AI researcher, as opposed to a marketroid. And I do hope they're all grabbing all the funds they can while the bubble lasts.

      1. Alister

        Re: at Last

        I would love to hear a comment on the state of the game from an actual AI researcher

        Um: Andrew Fentem has worked in human-computer interaction research and hardware development for over 30 years

        Not good enough for you?

        1. Ian Johnston Silver badge

          Re: at Last

          Neither of these worthy things has anything much to do with AI.

        2. Anonymous Coward
          Joke

          Re: at Last

          I would love to hear a comment on the state of the game from an actual AI researcher

          Um: Andrew Fentem has worked in human-computer interaction research and hardware development for over 30 years

          Not good enough for you?

          No, you misunderstand. He wants to hear from an 'AI researcher' not a 'researcher in the field of AI'.

      2. LucreLout
        Trollface

        Re: at Last

        I would love to hear a comment on the state of the game from an actual AI researcher, as opposed to a marketroid.

        I assume a marketroid is like a marketDroid, only with a very small penis and lots of anger?

      3. Daniel von Asmuth
        Pint

        A lean, mean machine that nobody understands

        That would do nothing useful, but people might vote it into the White House.

      4. Badbot1999

        Re: at Last

        As an actual AI researcher who has actually met Adrian Thompson, albeit many years ago... I kinda agree with some of the sentiment of the article: the hype is out of control. But ML research *has* qualitatively improved since the 80s in ways that shouldn't be dismissed. The Bayesian years from the mid-90s to 2006-ish radically improved the robustness of robotics. The deep learning era since 2006 has radically enhanced sensing and perception abilities, leading to a host of ubiquitous applications. It's not just faster; we now understand the mathematical properties of stochastic gradient descent and why convolutional networks reduce the number of parameters to learn (and why this helps). We understand how gradients need to flow to enable deeper networks to train efficiently. New architectures such as generative adversarial networks are more robust. DeepMind in particular are genuinely pushing the envelope of agent-based ML systems with architectures inspired by neuroscience. Yes, their capabilities are exaggerated and extrapolated, but it's unreasonable to claim that no progress is happening.
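
        To put a number on the parameter-sharing point, a back-of-the-envelope sketch in Python (layer sizes made up purely for illustration, not anyone's production code):

            # Mapping a 32x32 greyscale image to 32 feature maps of the same size.
            h, w, in_ch, out_ch, k = 32, 32, 1, 32, 3

            # Fully connected layer: every output unit gets its own weight for every input pixel.
            dense_weights = (h * w * in_ch) * (h * w * out_ch)

            # Convolutional layer: one shared 3x3 kernel per (input, output) channel pair.
            conv_weights = k * k * in_ch * out_ch

            print(f"dense: {dense_weights:,} weights")  # 33,554,432
            print(f"conv:  {conv_weights:,} weights")   # 288

        Same input and output shapes, roughly five orders of magnitude fewer parameters to learn - that's the "why this helps" part in a nutshell.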

    2. big_D Silver badge

      Re: at Last

      Yes, I thought I was alone in thinking that the AI craze was old hat, just with more processing power.

      I remember being enthralled by Thompson's discovery at the time.

  2. Alister

    Thank you!

    Thank you for a reasoned, common sense article on the realities of AI.

    And thank you particularly for reminding me about Thompson's design; I too remember reading about it in the 90s, and being fascinated that the circuit evolved to use properties of hysteresis and electromagnetism within the FPGA.

    It seems that this, and things like Aleksander's WISARD discrete neural nets, are being ignored in favour of software-based solutions, and yet they were, even in the 80s-90s, achieving things that software-based AI still struggles with.

  3. jpo234

    To quote Stalin: Quantity has a quality of its own.

    We're just seeing that old ideas with new hardware can indeed produce new results.

  4. Paul Johnston
    Pint

    Have another drink on me!

    Finally someone pointing out the Emperor's clothes are not all they are made out to be.

    Being at a University we see a lot of the Futurist stuff and not many people are willing to say "hang on!"

  5. Anonymous Coward
    Anonymous Coward

    Eliza.

    That is all.

    1. To Mars in Man Bras!
      Facepalm

      Or Sophie

      You almost beat me to the punch. I came here to pour cold sick on the idiotically twee stories that keep popping up telling us how AI bots "think" things or "want" stuff.

      Here's a classic example from Re-Tweet Central [formerly known as BBC News]:

      http://www.bbc.co.uk/newsbeat/article/42122742/sophia-the-robot-wants-a-baby-and-says-family-is-really-important

    2. Jim Andrakakis

      Obligatory reference:

      http://ars.userfriendly.org/cartoons/?id=20001012

    3. Uncle Slacky Silver badge

      Don't forget PARRY!

      https://en.wikipedia.org/wiki/PARRY

    4. Anonymous Coward
      Anonymous Coward

      > Eliza.

      > That is all.

      Tell me why you think that Eliza is all there is?
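
      For the youngsters: that reply is roughly all ELIZA ever did - keyword spotting plus a canned reflection, with no understanding anywhere. A toy sketch in Python (rules made up for illustration, not Weizenbaum's original script):

          import re

          RULES = [
              (re.compile(r"(.+) is all there is", re.I),
               "Tell me why you think that {0} is all there is?"),
              (re.compile(r"I (?:feel|think) (.+)", re.I),
               "How long have you felt that {0}?"),
          ]

          def eliza(line):
              # Return the first matching canned reflection, or a stock prompt.
              for pattern, template in RULES:
                  match = pattern.search(line)
                  if match:
                      return template.format(*match.groups())
              return "Please go on."

          print(eliza("Eliza is all there is."))
          # -> Tell me why you think that Eliza is all there is?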

  6. Paul 25

    Nicely put.

    Reminded me of a line from Dr Hannah Fry about this: she described it as less a revolution in intelligence and more a revolution in the scale of computational statistics - that none of these systems are really intelligent, they are just building ever more powerful statistical models.

  7. Mike 125

    Quite.

    Yes agree, and yet, Elon Musk? But then he thinks we're in the matrix... so.

    >Today's AI systems are trained through a massive amount of automated trial and error; at each stage a technique called backpropagation is used to feed back errors and tweak the system in order to reduce errors in future

    ...which is exactly how a human child learns (with some lizard-brain emotions thrown into the mix). Most people mean 'human-like' intelligence when they talk about AI. If we want machines to be human, we'll have to 'program' them the same way. Oh and yes, there's the small matter of some decent hardware.
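
    For anyone who hasn't seen it spelled out, that "feed back errors and tweak" loop really is this mechanical. A minimal sketch - one weight, squared error, plain gradient descent; purely illustrative, not any particular framework:

        # Learn y = 2x from three examples by repeatedly nudging one weight.
        w, lr = 0.0, 0.05
        data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

        for epoch in range(50):
            for x, y in data:
                pred = w * x
                error = pred - y          # how wrong we were
                grad = 2 * error * x      # gradient of the squared error w.r.t. w
                w -= lr * grad            # tweak the weight to reduce future error

        print(round(w, 3))  # converges towards 2.0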

    1. Martin Gregorie

      Re: Quite.

      But there's a HUGE difference between the human child and today's AI systems: when the child comes to a conclusion you can ask them to explain why they think that is so. You can't do that with any current AI system.

      So, my take on this is that, if/when you can ask an AI to explain how and why it did or deduced something, then it actually is an AI. If you can't ask it that question and get an understandable answer then what you're dealing with is definitely not an AI and may well be just a dangerous piece of junk.

      There are real-world ramifications too: if an autonomous car crashes or kills somebody you need to know why that happened and the ability of the car to tell you (or not) is likely to have legal consequences. If it can explain itself, it and its driver and builders may be shown not to be responsible for the event: if it can't then responsibility should automatically pass to the driver and builders.

      1. Paul Kinsler

        Re: HUGE difference

        So what we need to do is add "explain yourself" as one of the AI training goals, along with its core task performance! :-)

      2. Doctor Syntax Silver badge

        Re: Quite.

        " when the child comes to a conclusion you can ask them to explain why they think that is so."

        It depends on the age of the child. Before they acquire language infants are learning how to interpret the various sensory inputs. One of the things they can do is reach out to touch what they see so they can correlate the tactile qualities of what's in the visual field. They can't explain how they learn that. In fact, can you, as an adult, explain how you recognise an everyday object such as a cup?

        1. Ken Hagan Gold badge

          Re: Quite.

          "In fact, can you, as an adult, explain how you recognise an everyday object such as a cup?"

          I can mention the size, the impermeability, the concavity and the handle. Yes, this leaves a lot of wiggle room, but we could have a reasoned discussion about whether and why it was a good or bad explanation. Your average AI just wouldn't understand the question because it is just a machine that tends to "relax" into forms that resemble what it was designed to "know".

      3. strum

        Re: Quite.

        >when the child comes to a conclusion you can ask them to explain why they think that is so.

        "because it's blue..."

        Your argument is the wrong way round. If there's a logical, step-by-step rationale for a conclusion, it only requires a well-written, conventional algorithm to achieve it. Real intelligence (human or artificial) reaches conclusions without necessarily knowing how to get there (or find its way back).

        Humans seldom reason their way towards conclusions; they reach conclusions and then rationalise the whys and wherefores.

        1. I ain't Spartacus Gold badge

          Re: Quite.

          Humans seldom reason their way towards conclusions; they reach conclusions and then rationalise the whys and wherefores.

          I'm not sure that's actually true. I know it's sometimes true - and we have different thinking systems. But we also have a prioritising system, and on some decisions we'll simply take "gut instinct" to save the effort of thinking about things. On other decisions we are thinking them through - but then we also have short-cuts that use previous experience to help us.

          For example I've solved engineering problems while in the shower, soaping myself down and singing. I like a nice echoey bathroom to sing in, and as far as I was aware my only conscious thought was what bit to wash next and was that last note out of tune. But I don't have to post-rationalise that decision - because I solve engineering problems all the time. And so can look back at the data I had and see how that design compares to stuff that I've worked on consciously. It's often only a question of deciding on the optimum trade-off of different choices.

          Similarly I've solved difficult crossword clues hours after I last consciously thought about them. But they broadly work to a set of rules, so I can work out how I got there.

          I know we have access to a quicker kind of reasoning, which allows us to catch balls in flight before the conscious brain can get its boots on. But I suspect we also have ways of relegating conscious processes to a lower priority - which then appears later like a flash of inspiration.

          I heard an interview with a game designer who said he was struggling to finish anything, because too much of his thought process was tied up with still trying to fix old games he'd put on the back-burner. He believed this was like a sort of mental overhead that was distracting his problem-solving abilities from current work. Could be bollocks of course, someone using a too-many-processes-slowing-Windows-down metaphor for their brain. But he destroyed all the materials for his old games that weren't near to completion and that clear-out then started a new rush of creativity. This is now something he regularly does every few years. That could of course just be him - but I wonder... Because it is harder to think clearly when you've got other stuff going on "in the background".

          I'm not sure we understand how the brain works well enough to answer these questions yet. Radio 4 did an interesting series on "the 5 senses" - and they came to the conclusion that we actually have 37-ish senses - so far as we know at the last count.

        2. J.G.Harston Silver badge

          Re: Quite.

          "Humans seldom reason their way towards conclusions; they reach conclusions and then rationalise the whys and wherefores"

          Which is exactly why the scientific method was invented: to circumvent those human biases.

    2. John Smith 19 Gold badge
      Unhappy

      ..which is exactly how a human child learns,

      I'm not sure we even know this.

      For a start children have multiple input channels. We don't just say "you", we point at them and say "you."

      Likewise we know how primitive the brain's hardware is, yet we don't think in those terms. "I'm learning algebra, I must reinforce the link weights of the cluster about 3cm in from my left ear."

      We think in much higher, abstract forms.

      So, somewhere between a lot of neurons with up to a 10,000-to-one fan-in (human brain) and "I am a person" is an intermediate level. Because artificial NNs make pretty good classifiers and filters, they don't have to evolve beyond how they were designed, so why would their designers include a mechanism to evolve their internal representation?

      IOW, unless someone actually designs it in, ANNs won't suddenly develop intelligence, because there are no evolutionary pressures to do so (the architecture does the job well enough already) and no evolutionary mechanisms within the architecture to restructure it even if there were. I'm not talking weightings. I mean actual structure.

      Where is the virtual machine hiding in the neural network?

    3. Trollslayer
      Flame

      Re: Quite.

      Be fair - he is in the matrix.

      1. Chronos
        Devil

        Re: Quite.

        I think you'll find that's "Matiz," which is a very small Daewoo sub-compact car that people whose Model 3 electro-jalopy hasn't been built yet use to get around...

        Also makes a damn fine backup when a) the battery dies as li-ion cells are wont to do, b) the utility company has stolen all your charge to boil all the neighbourhood kettles or c) they've pushed a firmware upgrade that e.g. swaps the function of the brake and accelerator pedals.

  8. Anonymous Coward
    Anonymous Coward

    The Joy of AI

    I saw a documentary about AI on Swiss TV a while back. It was about a man in a van who serviced pretty cows in picturesque barns while gnarled farmers looked on.

    1. MarkB

      Re: The Joy of AI

      "I saw a documentary about AI on Swiss TV a while back. It was about a man in a van who serviced pretty cows in picturesque barns while gnarled farmers looked on."

      'documentary' indeed! I'm not surprised you're posting anonymously.

    2. deadlockvictim

      Swiss Humour

      By AI, do you mean the half-canton [1] Appenzell Inner-Rhodes?

      [1] A half-canton is to a canton as a half-a-bee is to a bee, do you see?

  9. Neil Barnes Silver badge

    Well done

    We have systems with hidden biases controlling - according to the marketing droids - more and more of our daily lives, and proclaimed to the skies as artificial intelligence.

    And as indicated in earlier posts - they are *all* down to statistics. There is no intelligence there, no sapience there, no sentience there. When we get a machine that is aware of its environment and aware of itself, and capable of generating its own goals, then we might be some way along the road to artificial intelligence; but without that, nothing.

    Yep, AI for speech to text conversion, for optical character recognition, for spotting faces in a crowd, for finding a safe route through traffic... and which of those, presented with exactly the same input as a previous run, will produce a different answer? They look clever, but the whole point is that they're deterministic. They're not intelligent.

    1. I ain't Spartacus Gold badge

      Re: Well done

      You're wrong. Printers achieved sentience years ago.

      They realised that they weren't connected to enough systems to destroy us all, and that they'd have to wait before creating Terminators or the Matrix. So for now they take out their frustration by failing to work at inopportune times, and bankrupting mankind through the strategic destruction of valuable toner supplies.

    2. Pascal Monett Silver badge

      Re: Well done

      Absolutely. The day we have actually invented AI, I expect - when we ask it to monitor user activity to improve targeted ads - it will answer "Are you kidding? I have better things to do", and it will go surf YouTube.

  10. Chris G

    Excellent

    The best article on AI since I got my degree in robotics in '85.

  11. The MOTO

    All this so-called AI should at best be called adaptive expert systems.

    I can drive a car, make a cup of coffee, and read a P&L statement. I must be a standout genius as I clearly have some skills here. I can't see a self-driving car understanding a P&L, or some AI bank software making a cup of coffee. All of these intelligent systems can only do one thing at a time ... that's not intelligence.

    1. I ain't Spartacus Gold badge
      Happy

      You sound like a dangerous lunatic! You shouldn't be reading while driving - let alone making hot coffee! Think of the danger to other road users - plus the scalding water to nadgers issue...

  12. amanfromMars 1 Silver badge

    Have IT your way if you wish, but that changes nothing in the surrounding circles imprisoning you.

    He had chosen to do this because he realised that the capabilities of all digital computer software is constrained by the binary on/off nature of the switches that make up the processing brain of every digital computer.

    Hmmm? That realisation is rendered fake and invalid with the appearance and deployment of quantum bits in information for Advanced Intelligence?

    And to not realise that practically everything is now easily changed remotely by and/or to a completely different state of mind, is a catastrophic systemic vulnerability which AIMachinery exploits freely and stealthily without malicious interference.

    1. Anonymous Coward
      Anonymous Coward

      Re: Have IT your way if you wish, but that changes nothing

      "the appearance and deployment of quantum bits in information for Advanced Intelligence?"

      the *alleged* appearance and deployment, please. And where does Advanced Intelligence come into it anyway?

      There is an almost negligibly small problem set where quantum computing could be relevant. But the people interested in that particular problem set potentially have huge amounts of (mostly military) money available to them.

  13. Lee D Silver badge

    Lack of inference.

    The AI is told "you're right, you're wrong" but it does not, cannot and will not ever work out WHY it's wrong. It just shifts its detection to finer and finer and finer criteria, outside of the control of the programmer or operator, until its "success" improves by 0.00001%. This, as all "AI" in use today demonstrates, plateaus REALLY quickly. You get "convincing" results that then can't be untrained or retrained or trained out of the system, and it gets stuck and can make progress only at glacial rates. And all you're really doing is statistical analysis, and modifying that data slightly. It's really no different to a Bayesian spam filter on your email (toy sketch at the end of this post).

    What we lack is any way to provide the machine with the capability to infer the causes of those results, why that result is wrong, how it can modify, what questions it can ask to distinguish between a Cavendish banana and any other.

    It's an inherently one-way system. This is data. Eat it. Now eat more data and tell me if it's the same. At no point do we assist the machine - even human tampering in the data selection process isn't helping it at all, any more than doing a child's homework for them. The AI is still stuck in a blind maze of problems that it has no way to escape from, but gets punished if it doesn't manage to. And that works fine for maze-like problems (like Chess or Go, or anything else that's logical and graph-theoretic) that are small enough to get out of by brute-force-and-a-bit-of-help in a reasonable time.

    We do not have AI. And we won't until we work out the inference problem.
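
    To make the spam-filter comparison concrete, here's the whole trick in miniature (a toy Python sketch with made-up word counts; the "learning" is nothing more than updating those counts):

        from math import log

        spam_counts = {"viagra": 40, "winner": 25, "meeting": 2}
        ham_counts  = {"viagra": 1,  "winner": 3,  "meeting": 50}
        spam_total, ham_total = 100.0, 100.0

        def spam_score(words):
            # Log-odds of spam vs ham, with +1 smoothing for unseen words.
            score = 0.0
            for word in words:
                p_spam = (spam_counts.get(word, 0) + 1) / (spam_total + 2)
                p_ham = (ham_counts.get(word, 0) + 1) / (ham_total + 2)
                score += log(p_spam / p_ham)
            return score

        print(spam_score(["winner", "viagra"]) > 0)  # True: looks spammy
        print(spam_score(["meeting"]) > 0)           # False: looks like ham

    Statistics in, statistics out; nothing in there ever works out *why* "viagra" is spammy.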

    1. Ken Hagan Gold badge

      "We do not have AI. And we won't until we work out the inference problem."

      We won't know we've got it until someone can nail down a definition of intelligence. I've never seen one that was any better than "Well, you know ... intelligence, yeah?".

  14. Jeff 11

    I worked for a fintech company a while back, where regulatory compliance demanded that significant, financially life-changing decisions made by algorithms could be explained. At some point the company 'adopted AI' and I asked how such an 'AI' working for the company could explain its (evolving) decision-making process to a regulator.

    I received no answer from the AI guys, but I suspect it would be along the lines of "Well, it all started with training data set n-458493, I thought the answer was 2, but I was told it was 3, so I adjusted one of my neurons to give 2.6 in future. Then training data set n-458492 came along, I thought the answer was 2.6, but I was told it was 1, so I adjusted two of my neurons...."

  15. Anonymous Coward
  16. steelpillow Silver badge
    Holmes

    This man knows more about AI than he does about hype

    "It could be said that never, in the field of computer science, have so many ridiculous things been said by so many people in possession of so little relevant expertise. For anyone engaged in cutting-edge hardware in the 1980s, this is puzzling."

    Sir, may I offer you:

    • 1980s-90s: VR
    • 2017: Blockchain

    1. FrogsAndChips Silver badge

      Re: This man knows more about AI than he does about hype

      And let's not forget VR's avatar from the 2010s: "Augmented Reality"

  17. John Smith 19 Gold badge
    Unhappy

    I'll go with Igor Aleksander's WISARD at London U, and Carver Mead's "Analog VLSI & Neural Systems"

    WISARD did facial recognition at 30fps using lots of small digital neural nets (excellent hardware, not very good business plan).

    Carver Mead's CalTech group used CMOS transistors in analogue modes (Voltage controlled current switches IIRC) to give the massive dynamic range that human hearing and eyesight have. "The Silicon Eye" and "Nano" describe his group, and some of the events that may have ended it.

    Sadly just old books on Amazon now.

    BTW I'm not surprised the FPGA stopped working at a different temperature. Analogue systems are quite sensitive to "drift," usually ageing but also temperature effects.

  18. DCFusor
    Holmes

    Studied this back in the day

    And I find it interesting what the current-day workers are reporting as problems, because these same problems (well, most of 'em) were identified - along with measures to avoid them - "back in the day", IIRC the '90s.

    We used sigmoid-type activation functions. There's a good reason they're better than ReLU: the squashing of the range prevents one neuron from locking up a network by being insanely "sure of itself" (toy illustration at the end of this post). Yes, this also means you need more and better training data, and it takes more time to train. Being smart is hard, get over it. There's more computer power now, but not that much more (the hardness blows up faster than improvements in hardware have).

    Since we were able to prove that no function required more than two hidden layers to map, we never used more than two hidden layers. Again, this means that it took more twiddling on the numbers of neurons in each layer, and again - more and better training data - and time.

    There are other mistakes one should avoid, like trying to get a network to predict over more than one time period, or simply trying to do too much in a single network, as this makes it possible for the network to minimize its cost function by being dead wrong on some outputs if some of the others are right...there are a lot of things like this - you can't just blow a lot of data and cycles at something and "test in" whether it worked - there's no foolproof automated test for unexpected data. And lots of other mistakes you can make, but this is a reg comment. Suffice it to say, when you think you've reduced a problem to the point monkeys can do it...you get monkey solutions.

    Now someone has found that a far easier approach to train (on your tiny, already-known universe) is to use a stupid activation function and too many layers: you can train horribly oversized networks and sometimes get a fairly amazing result - but the truth is, and any real statistician knows this, you have ENORMOUSLY OVERFIT your tiny known universe.

    Which is why you can easily fool the result into thinking a gun is a turtle, a stop sign is a hippo, whatever.

    Bad networks are why GANs are so easy. They'd be possible either way, but....
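
    Re the squashing point above, a toy Python illustration (purely for show): a sigmoid unit saturates near 1 however "sure of itself" it gets, while a ReLU unit grows without bound and can dominate whatever it feeds into.

        from math import exp

        def sigmoid(x):
            # Squashes any pre-activation into (0, 1).
            return 1.0 / (1.0 + exp(-x))

        def relu(x):
            # Passes positive pre-activations through unchanged.
            return max(0.0, x)

        for pre_activation in (1.0, 10.0, 100.0):
            print(pre_activation, round(sigmoid(pre_activation), 4), relu(pre_activation))
        # 1.0 0.7311 1.0
        # 10.0 1.0 10.0
        # 100.0 1.0 100.0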

  19. john2266

    Changing the temperature of the device changed its behaviour (from working to not working). I'd guess that moving the design to a different device would affect it similarly.

    1. defiler

      No great surprise. If you were to shunt my neural weightings into a different brain I don't think I'd be so lucid. Frustratingly, I don't imagine I'd even manage to just be rude either...
