Mything the point: The AI renaissance is simply expensive hardware and PR thrown at an old idea

For the last few years the media has been awash with hyperbole about artificial intelligence (AI) and machine learning technologies. It could be said that never, in the field of computer science, have so many ridiculous things been said by so many people in possession of so little relevant expertise. For anyone engaged in …

  1. Denarius
    Pint

    at Last

    well put sir !

    1. Adrian 4

      Re: at Last

      tbf, el reg has been saying this for a while.

      I would love to hear a comment on the state of the game from an actual AI researcher, as opposed to a marketroid. And I do hope they're all grabbing all the funds they can while the bubble lasts.

      1. Alister

        Re: at Last

        I would love to hear a comment on the state of the game from an actual AI researcher

        Um: Andrew Fentem has worked in human-computer interaction research and hardware development for over 30 years

        Not good enough for you?

        1. Ian Johnston Silver badge

          Re: at Last

          Neither of these worthy things has anything much to do with AI.

        2. Anonymous Coward
          Joke

          Re: at Last

          I would love to hear a comment on the state of the game from an actual AI researcher

          Um: Andrew Fentem has worked in human-computer interaction research and hardware development for over 30 years

          Not good enough for you?

          No, you misunderstand. He wants to hear from an 'AI researcher' not a 'researcher in the field of AI'.

      2. LucreLout
        Trollface

        Re: at Last

        I would love to hear a comment on the state of the game from an actual AI researcher, as opposed to a marketroid.

        I assume a marketroid is like a marketDroid, only with a very small penis and lots of anger?

      3. Daniel von Asmuth
        Pint

        A lean, mean machine that nobody understands

        That would do nothing useful, but people might vote it into the White House.

      4. Badbot1999

        Re: at Last

        As an actual AI researcher who has actually met Adrian Thompson, albeit many years ago... I kinda agree with some of the sentiment of the article: the hype is out of control. But ML research *has* qualitatively improved since the 80s in ways that shouldn't be dismissed. The Bayesian years from mid 90s to 2006 ish radically improved the robustness of robotics. The deep learning era since 2006 has radically enhanced sensing and perception abilities leading to a host of ubiquitous applications. It's not just faster; we now understand the mathematical properties of stochastic gradient descent and why convolutional networks reduce the number of parameters to learn (and why this helps). We understand how gradients need to flow to enable deeper networks to train efficiently. New architectures such as generative adversarial networks are more robust. Deepmind in particular are genuinely pushing the envelope of ML agent based systems with architectures inspired by neuroscience. Yes, their capabilities are exaggerated and extrapolated, but it's unreasonable to claim that no progress is happening.
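
        (As a rough back-of-envelope illustration of that parameter-sharing point - the image size, kernel size and channel counts below are made-up examples, not anything from the post:)

        # Why a convolutional layer needs far fewer weights than a fully
        # connected one: the small kernel is shared across every position.
        def dense_params(in_values, out_units):
            # every input value connects to every unit, plus one bias per unit
            return in_values * out_units + out_units

        def conv_params(kh, kw, in_ch, out_ch):
            # one kh x kw kernel per (input channel, output channel) pair,
            # shared across all positions, plus one bias per output channel
            return kh * kw * in_ch * out_ch + out_ch

        print(dense_params(224 * 224 * 3, 64))   # ~9.6 million weights
        print(conv_params(3, 3, 3, 64))          # 1,792 weights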

    2. big_D Silver badge

      Re: at Last

      Yes, I thought I was alone in thinking that the AI craze was old hat, just with more processing power.

      I remember being enthralled by Thompson's discovery at the time.

  2. Alister

    Thank you!

    Thank you for a reasoned, common sense article on the realities of AI.

    And thank you particularly for reminding me about Thompson's design; I too remember reading about it in the 90s, and being fascinated that the circuit evolved to use properties of hysteresis and electromagnetism within the FPGA.

    It seems that this, and things like Aleksander's WISARD discrete neural nets, are being ignored in favour of software-based solutions, and yet they were, even in the 80s-90s, achieving things that software-based AI still struggles with.

  3. jpo234

    To quote Stalin: Quantity has a quality of its own.

    We just see that the old ideas with new hardware can indeed produce new results.

  4. Paul Johnston
    Pint

    Have another drink on me!

    Finally someone pointing out the Emperor's Clothes are not all they are made out to be.

    Being at a University we see a lot of the Futurist stuff and not many people are willing to say "hang on!"

  5. Anonymous Coward
    Anonymous Coward

    Eliza.

    That is all.

    1. To Mars in Man Bras!
      Facepalm

      Or Sophie

      You almost beat me to the punch. I came here to pour cold sick on the idiotically twee stories that keep popping up telling us how AI bots "think" things or "want" stuff.

      Here's a classic example from Re-Tweet Central [formerly known as BBC News]:

      http://www.bbc.co.uk/newsbeat/article/42122742/sophia-the-robot-wants-a-baby-and-says-family-is-really-important

    2. Jim Andrakakis

      Obligatory reference:

      http://ars.userfriendly.org/cartoons/?id=20001012

    3. Uncle Slacky Silver badge

      Don't forget PARRY!

      https://en.wikipedia.org/wiki/PARRY

    4. Anonymous Coward
      Anonymous Coward

      > Eliza.

      > That is all.

      Tell me why you think that Eliza is all there is?

  6. Paul 25

    Nicely put.

    Reminded me of a line from Dr Hannah Fry about this: she described it as less a revolution in intelligence and more a revolution in the scale of computational statistics; none of these systems are really intelligent, they are just building ever more powerful statistical models.

  7. Mike 125

    Quite.

    Yes agree, and yet, Elon Musk? But then he thinks we're in the matrix... so.

    >Today's AI systems are trained through a massive amount of automated trial and error; at each stage a technique called backpropagation is used to feed back errors and tweak the system in order to reduce errors in future

    ...which is exactly how a human child learns (with some lizard-brain emotions thrown into the mix). Most people mean 'human-like' intelligence when they talk about AI. If we want machines to be human, we'll have to 'program' them the same way. Oh and yes, there's the small matter of some decent hardware.
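
    (As a toy illustration of that trial-and-error loop, here is roughly what "feed back errors and tweak" amounts to for a single weight; the data and learning rate are invented for the example:)

    # one adjustable weight, squared error, plain gradient descent
    data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # (input, target) pairs, roughly y = 2x
    w = 0.0                                        # the weight being "tweaked"
    lr = 0.05                                      # learning rate

    for epoch in range(200):
        for x, target in data:
            error = w * x - target
            # the gradient of the squared error with respect to w is error * x;
            # nudge the weight slightly in the direction that reduces the error
            w -= lr * error * x

    print(round(w, 3))   # settles close to 2.0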

    1. Martin Gregorie

      Re: Quite.

      But there's a HUGE difference between the human child and today's AI systems: when the child comes to a conclusion you can ask them to explain why they think that is so. You can't do that with any current AI system.

      So, my take on this is that, if/when you can ask an AI to explain how and why it did or deduced something, then it actually is an AI. If you can't ask it that question and get an understandable answer then what you're dealing with is definitely not an AI and may well be just a dangerous piece of junk.

      There are real-world ramifications too: if an autonomous car crashes or kills somebody you need to know why that happened and the ability of the car to tell you (or not) is likely to have legal consequences. If it can explain itself, it and its driver and builders may be shown not to be responsible for the event: if it can't then responsibility should automatically pass to the driver and builders.

      1. Paul Kinsler

        Re: HUGE difference

        So what we need to do is add "explain yourself" as one of the AI training goals, along with its core task performance! :-)

      2. Doctor Syntax Silver badge

        Re: Quite.

        " when the child comes to a conclusion you can ask them to explain why they think that is so."

        It depends on the age of the child. Before they acquire language infants are learning how to interpret the various sensory inputs. One of the things they can do is reach out to touch what they see so they can correlate the tactile qualities of what's in the visual field. They can't explain how they learn that. In fact, can you, as an adult, explain how you recognise an everyday object such as a cup?

        1. Ken Hagan Gold badge

          Re: Quite.

          "In fact, can you, as an adult, explain how you recognise an everyday object such as a cup?"

          I can mention the size, the impermeability, the concavity and the handle. Yes, this leaves a lot of wiggle room, but we could have a reasoned discussion about whether and why it was a good or bad explanation. Your average AI just wouldn't understand the question because it is just a machine that tends to "relax" into forms that resemble what it was designed to "know".

      3. strum

        Re: Quite.

        >when the child comes to a conclusion you can ask them to explain why they think that is so.

        "because it's blue..."

        Your argument is the wrong way round. If there's a logical, step-by-step rationale for a conclusion, it only requires a well-written, conventional algorithm to achieve it. Real intelligence (human or artificial) reaches conclusions without necessarily knowing how to get there (or find its way back).

        Humans seldom reason their way towards conclusions; they reach conclusions and then rationalise the whys and wherefores.

        1. I ain't Spartacus Gold badge

          Re: Quite.

          Humans seldom reason their way towards conclusions; they reach conclusions and then rationalise the whys and wherefores.

          I'm not sure that's actually true. I know it's sometimes true - and we have different thinking systems. But we also have a prioritising system, and on some decisions we'll simply take "gut instinct" to save the effort of thinking about things. On other decisions we are thinking them through - but then we also have short-cuts that use previous experience to help us.

          For example I've solved engineering problems while in the shower, soaping myself down and singing. I like a nice echoey bathroom to sing in, and as far as I was aware my only conscious thought was what bit to wash next and was that last note out of tune. But I don't have to post-rationalise that decision - because I solve engineering problems all the time. And so can look back at the data I had and see how that design compares to stuff that I've worked on consciously. It's often only a question of deciding on the optimum trade-off of different choices.

          Similarly I've solved difficult crossword clues hours after I last consciously thought about them. But they broadly work to a set of rules, so can work out how I got there.

          I know we have access to a quicker kind of reasoning, which allows us to catch balls in flight before the conscious brain can get its boots on. But I suspect we also have ways of relegating conscious processes to a lower priority - which then appears later like a flash of inspiration.

          I heard an interview with a game designer who said he was struggling to finish anything, because too much of his thought processes were tied up with still trying to fix old games he's put on the back-burner. He believed this was like a sort of mental overhead that was distracting his problem solving abilities from current work. Could be bollocks of course, someone using a too many processes slowing Windows down metaphor for their brain. But he destroyed all the materials for his old games that weren't near to completion and that clear-out then started a new rush of creativity. This is now something he regularly does every few years. That could of course just be him - but I wonder... Because it is harder to think clearly when you've got other stuff going on "in the background".

          I'm not sure we understand how the brain works well enough to answer these questions yet. Radio 4 did an interesting series on "the 5 senses" - and they came to the conclusion that we actually have 37-ish senses - so far as we know at the last count.

        2. J.G.Harston Silver badge

          Re: Quite.

          "Humans seldom reason their way towards conclusions; they reach conclusions and then rationalise the whys and wherefores"

          Which is exactly why the scientific method was invented: to circumvent those human biases.

    2. John Smith 19 Gold badge
      Unhappy

      ..which is exactly how a human child learns,

      I'm not sure we even know this.

      For a start, children have multiple input channels. We don't just say "you", we point at them and say "you".

      Likewise, we know how primitive the brain's hardware is, yet we don't think in those terms. "I'm learning algebra, I must reinforce the link weights of the cluster about 3cm in from my left ear."

      We think in much higher, abstract forms.

      So, somewhere between a lot of neurons with up to a 10,000-to-one fan-in (human brain) and "I am a person" is an intermediate level. Because artificial NNs make pretty good classifiers and filters, they don't have to evolve beyond how they were designed, so why would their designers include a mechanism to evolve their internal representation?

      IOW, unless someone actually designs it in, ANNs won't suddenly develop intelligence, because there are no evolutionary pressures to do so (the architecture does the job well enough already) and no evolutionary mechanisms within the architecture to restructure it even if there were. I'm not talking weightings. I mean actual structure.

      Where is the virtual machine hiding in the neural network?

    3. Trollslayer
      Flame

      Re: Quite.

      Be fair - he is in the matrix.

      1. Chronos
        Devil

        Re: Quite.

        I think you'll find that's "Matiz," which is a very small Daewoo sub-compact car that people whose Model 3 electro-jalopy hasn't been built yet use to get around...

        Also makes a damn fine backup when a) the battery dies as li-ion cells are wont to do, b) the utility company has stolen all your charge to boil all the neighbourhood kettles or c) they've pushed a firmware upgrade that e.g. swaps the function of the brake and accelerator pedals.

  8. Anonymous Coward
    Anonymous Coward

    The Joy of AI

    I saw a documentary about AI on Swiss TV a while back. It was about a man in a van who serviced pretty cows in picturesque barns while gnarled farmers looked on.

    1. MarkB

      Re: The Joy of AI

      "I saw a documentary about AI on Swiss TV a while back. It was about a man in a van who serviced pretty cows in picturesque barns while gnarled farmers looked on."

      'documentary' indeed! I'm not surprised you're posting anonymously.

    2. deadlockvictim

      Swiss Humour

      By AI, do you mean the half-canton [1] Appenzell Inner-Rhodes?

      [1] A half-canton is to a canton as a half-a-bee is to a bee, do you see?

  9. Neil Barnes Silver badge

    Well done

    We have systems with hidden biases controlling - according to the marketing droids - more and more of our daily lives, and proclaimed to the skies as artificial intelligence.

    And as indicated in earlier posts - they are *all* down to statistics. There is no intelligence there, no sapience there, no sentience there. When we get a machine that is aware of its environment and aware of itself, and capable of generating its own goals, then we might be some way on the way to artificial intelligence but without that, nothing.

    Yep, AI for speech to text conversion, for optical character recognition, for spotting faces in a crowd, for finding a safe route through traffic... and which of those, presented with exactly the same input as a previous run, will produce a different answer? They look clever, but the whole point is that they're deterministic. They're not intelligent.

    1. I ain't Spartacus Gold badge

      Re: Well done

      You're wrong. Printers achieved sentience years ago.

      They realised that they weren't connected to enough systems to destroy us all, and that they'd have to wait before creating Terminators or the Matrix. So for now they take out their frustration by failing to work at inopportune times, and bankrupting mankind through the strategic destruction of valuable toner supplies.

    2. Pascal Monett Silver badge

      Re: Well done

      Absolutely. The day we have actually invented AI, I expect - when we ask it to monitor user activity to improve targeted ads - it will answer "Are you kidding ? I have better things to do", and it will go surf YouTube.

  10. Chris G

    Excellent

    The best article on AI since I got my degree in robotics in '85.

  11. The MOTO

    All this so-called AI should at best be called adaptive expert systems.

    I can drive a car, make a cup of coffee, and read a P&L statement. I must be a standout genius as I clearly have some skills here. I can't see a self-driving car understanding a P&L, or some AI bank software making a cup of coffee. All of these intelligent systems can only do one thing at a time ... that's not intelligence.

    1. I ain't Spartacus Gold badge
      Happy

      You sound like a dangerous lunatic! You shouldn't be reading while driving - let alone making hot coffee! Think of the danger to other road users - plus the scalding water to nadgers issue...

  12. amanfromMars 1 Silver badge

    Have IT your way if you wish, but that changes nothing in the surrounding circles imprisoning you.

    He had chosen to do this because he realised that the capabilities of all digital computer software is constrained by the binary on/off nature of the switches that make up the processing brain of every digital computer.

    Hmmm? That realisation is rendered fake and invalid with the appearance and deployment of quantum bits in information for Advanced Intelligence?

    And to not realise that practically everything is now easily changed remotely by and/or to a completely different state of mind, is a catastrophic systemic vulnerable which AIMachinery exploits freely and stealthily without malicious interference.

    1. Anonymous Coward
      Anonymous Coward

      Re: Have IT your way if you wish, but that changes nothing

      "the appearance and deployment of quantum bits in information for Advanced Intelligence?"

      the *alleged* appearance and deployment, please. And where does Advanced Intelligence come into it anyway?

      There is an almost negligibly small problem set where quantum computing could be relevant. But the people interested in that particular problem set potentially have huge amounts of (mostly military) money available to them.

  13. Lee D Silver badge

    Lack of inference.

    The AI is told "you're right, you're wrong" but it does not, can not and will not ever work out WHY it's wrong. It just shifts its detection to finer and finer and finer criteria, outside of the control of the programmer or operator, until its "success" improves by 0.00001%. This, as all "AI" in use today demonstrates, plateaus REALLY quickly. You get "convincing" results that then can't be untrained or retrained or trained out of the system, and it gets stuck and can make progress only at glacial rates. And all you're really doing is statistical analysis, and modifying that data slightly. It's really no different to a Bayesian spam filter on your email.
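
    (For anyone who hasn't looked inside one, a Bayesian spam filter really is just word counting plus a probability update; a minimal sketch, with invented training mail:)

    from collections import Counter
    import math

    spam = ["win cash now", "cheap pills now"]
    ham = ["lunch at noon", "meeting notes attached"]

    spam_counts = Counter(w for msg in spam for w in msg.split())
    ham_counts = Counter(w for msg in ham for w in msg.split())
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())

    def spam_score(message):
        # log-probability ratio with add-one smoothing; positive means "looks spammy"
        score = 0.0
        for w in message.split():
            p_spam = (spam_counts[w] + 1) / (spam_total + 2)
            p_ham = (ham_counts[w] + 1) / (ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score

    print(spam_score("cheap cash now"))    # positive: spammy
    print(spam_score("meeting at noon"))   # negative: probably fine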

    What we lack is any way to provide the machine with the capability to infer the causes of those results, why that result is wrong, how it can modify, what questions it can ask to distinguish between a Cavendish banana and any other.

    It's an inherently one-way system. This is data. Eat it. Now eat more data and tell me if it's the same. At no point do we assist the machine - even human tampering in the data selection process isn't helping it at all, any more than doing a child's homework for them. The AI is still stuck in a blind maze of problems that it has no way to escape from, but gets punished if it doesn't manage to. And that works fine for maze-like problems (like Chess or Go or anything else that's amenable to logic and graph theory) that are small enough to get out of by brute-force-and-a-bit-of-help in a reasonable time.

    We do not have AI. And we won't until we work out the inference problem.

    1. Ken Hagan Gold badge

      "We do not have AI. And we won't until we work out the inference problem."

      We won't know we've got it until someone can nail down a definition of intelligence. I've never seen one that was any better than "Well, you know ... intelligence, yeah?".

  14. Jeff 11

    I worked for a fintech company a while back, where regulatory compliance required being able to explain how significant, financially life-changing decisions were made by algorithms. At some point the company 'adopted AI' and I asked how such an 'AI' working for the company could explain its (evolving) decision-making process to a regulator.

    I received no answer from the AI guys, but I suspect it would be along the lines of "Well, it all started with training data set n-458493, I thought the answer was 2, but I was told it was 3, so I adjusted one of my neurons to give 2.6 in future. Then training data set n-458492 came along, I thought the answer was 2.6, but I was told it was 1, so I adjusted two of my neurons...."

  15. Anonymous Coward
  16. steelpillow Silver badge
    Holmes

    This man knows more about AI than he does about hype

    "It could be said that never, in the field of computer science, have so many ridiculous things been said by so many people in possession of so little relevant expertise. For anyone engaged in cutting-edge hardware in the 1980s, this is puzzling."

    Sir, may I offer you:

    • 1980s-90s: VR
    • 2017: Blockchain

    1. FrogsAndChips Silver badge

      Re: This man knows more about AI than he does about hype

      And let's not forget VR's avatar from the 2010s: "Augmented Reality"

  17. John Smith 19 Gold badge
    Unhappy

    I'll go with Igor Aleksander at London U (WISARD) and Carver Mead's "Analog VLSI & Neural Systems"

    WISARD did facial recognition at 30fps using lots of small digital neural nets (excellent hardware, not very good business plan).

    Carver Mead's CalTech group used CMOS transistors in analogue modes (Voltage controlled current switches IIRC) to give the massive dynamic range that human hearing and eyesight have. "The Silicon Eye" and "Nano" describe his group, and some of the events that may have ended it.

    Sadly just old books on Amazon now.

    BTW I'm not surprised the FPGA stopped working at a different temperature. Analogue systems are quite sensitive to "drift," usually ageing but also temperature effects.

  18. DCFusor
    Holmes

    Studied this back in the day

    And I find it interesting what the current-day workers are reporting as problems, because these same problems (well, most of 'em) were identified, along with measures to avoid them, "back in the day" - IIRC, the '90s.

    We used sigmoid-type activation functions. There's a good reason they're better than relu. The squashing of the range prevents one neuron from locking up a network by being insanely "sure of itself". Yes, this also means you need more and better training data, and it takes more time to train. Being smart is hard, get over it. There's more computer power now, but not that much more (the hardness blows up faster than improvements in hardware have).
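
    (What that "squashing" means numerically; a minimal sketch comparing the two activation functions:)

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def relu(x):
        return max(0.0, x)

    for pre_activation in (1.0, 10.0, 100.0):
        print(pre_activation, round(sigmoid(pre_activation), 4), relu(pre_activation))
    # sigmoid tops out near 1 however large the input gets; relu passes it straight
    # through, so one very "confident" unit can dominate everything downstream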

    Since we were able to prove that no function required more than two hidden layers to map, we never used more than two hidden layers. Again, this means that it took more twiddling on the numbers of neurons in each layer, and again - more and better training data - and time.

    There are other mistakes one should avoid, like trying to get a network to predict over more than one time period, or simply trying to do too much in a single network, as this makes it possible for the network to minimize its cost function by being dead wrong on some outputs if some of the others are right...there are a lot of things like this - you can't just blow a lot of data and cycles at something and "test in" whether it worked - there's no foolproof automated test for unexpected data. And lots of other mistakes you can make, but this is a reg comment. Suffice it to say, when you think you've reduced a problem to the point monkeys can do it...you get monkey solutions.

    Now someone has found that a far easier model to train (on your tiny, already-known universe) is one with a stupid activation function and too many layers: you can train horribly oversized networks and sometimes get a fairly amazing result - but the truth is, and any real statistician knows this, you have ENORMOUSLY OVERFIT your tiny known universe.

    Which is why you can easily fool the result into thinking a gun is a turtle, a stopsign is a hippo, whatever.

    Bad networks are why GANs are so easy. They'd be possible either way, but....

  19. john2266

    Changing the temperature of the device changed its behaviour (from working to not working). I'd guess that moving the design to a different device would affect it similarly.

    1. defiler

      No great surprise. If you were to shunt my neural weightings into a different brain I don't think I'd be so lucid. Frustratingly, I don't imagine I'd even manage to just be rude either...

    2. MarkB

      "Changing the temperature of the device changed its behaviour (from working to not working). I'd guess that moving the design to a different device would affect it similarly."

      I seem to remember reading about this. If I recall correctly, simply reproducing the design "identically" (bear in mind this was actual physical circuitry) would change its behaviour - the behaviour related to a specific assembly of a specific set of components.

  20. Destroy All Monsters Silver badge
    Headmaster

    Naysayer sauce but where is the beef?

    In other words, there has not been any significant conceptual progress in AI for more than 30 years.

    I'm sorry but that is completely wrong.

    If nothing else, we have learned that a lot of problems can be semi-successfully attacked by function fitting, and theoretical work is advancing.

    Judea Pearl has advanced the causal revolution like a bulldozer, electrifying the field that dogmatic statisticians have kept sterile since the early 1900s.

    Rodney Brooks has been working rather well on advancing robotics.

    30 years includes work by Hofstadter too, so GOFAI has had its advances.

    1. Pascal Monett Silver badge

      Um, since when has robotics had anything to do with AI outside of Asimov's books and Star Trek ?

      Is AI supposed to be some sort of tailor, able to function fit at a whim ?

      And I'm very glad that you are so excited about statistics, but that has nothing to do with AI. We have very capable, specialized statistical analysis machines today, that I do not dispute, but we most definitely do not have AI nor are we any closer to getting to it. Especially not with statistics.

      If you don't agree, tell me just how much data do you analyze in the morning before turning on your coffee maker.

  21. Big_Boomer Silver badge

    Expert Systems

    I've been saying for years that these are Expert Systems, not Artificial Intelligence. AI as a term applied to 99.99% of everything over the last few years is akin to calling a Horse and Cart a StarShip. I have no doubt that we will eventually get AI, the same as I believe we will one day travel to the stars (if we survive), but we ain't even close yet.

    1. David Hall 1

      Re: Expert Systems

      AI is the complete domain: it includes expert systems. Machine learning is a distinct sub-domain (along with others like genetic algorithms) alongside expert systems.

      Deep learning is a subset of machine learning.

      No one disputes this to be true. If you want to learn more about the machinations the AI community went through thinking about whether an expert system could really be 'sentient' (which doesn't equal AI), it's worth looking up the Chinese room thought experiment.

  22. JLV
    Thumb Up

    Good article, need more

    Very well said. Pattern recognition and trained systems _are_ impressive. Back in the 90s the little we heard about AI had a lot to say about how hard it was conceptually to design a computer to recognize objects (granted, more T-72s & 80s than cats). We seem to be making excellent headway there - and, again, this is a significant field. In many ways it has more real-world relevance than most of the fledgling real AI work to date.

    And the Go-winning computer did have some fairly novel takes on strategy IIRC.

    But it is not making thinking computers, let alone self-aware ones. That's a subject that is both perennially 20 years away and not talked about much these days.

    I'd be curious to know how much this new found fervour for "fake" AI has impacted funding and researcher uptake on actual AI - the kind where a computer could reliably infer and respond to a new question like "is there water in the fridge?" without scanning millions of dialog transcripts to guess or being an advance-built kitchen/appliance narrow specialist. What's that field up to?

    I'll guess it's meant massive brain and $ drain but would welcome a future article.

    p.s. old book, but 'Blondie 24' is an accessible and entertaining pop sci take on neural nets. author site: http://www.davidfogel.com/ No affiliations on my part.

    p.p.s. re the black-box, can't-explain objection to neural net AI, that's a valid point, but there's no guarantee a "more worthy, traditional" AI system that isn't based on humans pre-entering rules would automatically be transparent by nature.

  23. herman

    Ayup, artificial intelligence and quantum computing may be usable soon after fusion power is realized.

  24. herman

    Thinking Machines

    I can still remember Thinking Machines and their marvelous 1024 core computer.

  25. a_yank_lurker

    Bravo

    The only intelligence in AI is the human intelligence to write a very complex algorithm. However, there is an adage about curve fitting: "With enough variables you can fit an elephant". So I wonder if 'AI' systems are fitting so many variables that with enough 'training' they can be made to say anything you want.
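
    (The adage in action - with as many parameters as data points you can fit anything exactly; five made-up points, fitted perfectly by a degree-4 polynomial via Lagrange interpolation:)

    points = [(0, 3.0), (1, -1.0), (2, 7.0), (3, 0.5), (4, 5.0)]   # arbitrary "data"

    def fit(x):
        # degree-4 polynomial through all five points: five "variables", zero insight
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    print([round(fit(x), 6) for x, _ in points])   # reproduces the training data exactly
    print(round(fit(2.5), 2))                      # between the points it can say anything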

    1. A-nonCoward

      Re: Bravo

      "With enough variables you fit an elephant".

      in the 1500s we called those "epicycles".

      Yup, particle physics: if something behaves differently than predicted, there "must" be a new particle. Circles on circles, seen that.

  26. phillipao

    I appreciate the debunking of hype, but there has been a genuine breakthrough in machine learning that has led to amazing results in image recognition, speech recognition, OCR, and translation.

    Geoff Hinton discovered that neural nets could be trained in a greedy, layer-wise fashion (https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf). Neural nets were previously limited to two or three layers, and now they can be deeper. This moved some problems from the realm of "too hard" to merely "hard".
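
    (The greedy layer-wise idea stripped to a toy: train the first layer to reconstruct its input, freeze it, then train the next layer on its outputs. This sketch uses plain autoencoders and random data rather than the paper's RBM-based method:)

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 16))       # made-up "training set"
    layer_sizes = [16, 8, 4]                 # input width, then two hidden layers

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_autoencoder(x, hidden, epochs=500, lr=0.1):
        # one hidden layer trained to reconstruct x; returns the encoder weights
        w_enc = rng.normal(scale=0.1, size=(x.shape[1], hidden))
        w_dec = rng.normal(scale=0.1, size=(hidden, x.shape[1]))
        for _ in range(epochs):
            h = sigmoid(x @ w_enc)
            err = h @ w_dec - x
            grad_dec = (h.T @ err) / len(x)
            grad_enc = (x.T @ ((err @ w_dec.T) * h * (1 - h))) / len(x)
            w_dec -= lr * grad_dec
            w_enc -= lr * grad_enc
        return w_enc

    weights = []
    activations = data
    for hidden in layer_sizes[1:]:
        w = train_autoencoder(activations, hidden)   # train this layer in isolation
        weights.append(w)
        activations = sigmoid(activations @ w)       # its outputs feed the next layer

    print([w.shape for w in weights])   # [(16, 8), (8, 4)]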

    There's a lot of noise and hype and businesses trying to sell crap marketed as "AI". But behind all that, there is something truly amazing. I don't normally comment, but I felt like this piece missed that, and I think it's a shame. The positive thing behind all of this hype is so much more fun than complaining about the hype.

    1. diodesign (Written by Reg staff) Silver badge

      "there is something truly amazing"

      FWIW it is an opinion piece by Andrew.F. Think of it as an antidote to all the hype.

      While there is a hell of a lot of nonsense around AI at the moment, there are some interesting, and some rather crap, research projects and products, which we write about on a daily basis.

      C.

      1. Anonymous Coward
        Anonymous Coward

        Re: "there is something truly amazing"

        "there is a hell of a lot of nonsense around AI at the moment, there are some interesting, and some rather crap, research projects and products, which we write about on a daily basis."

        So?

        El Reg used to cover Captain Cyborg, once or maybe even twice upon a time.

        Where did that lead to?

        1. CRConrad

          Captain Cyborg

          Those articles — far more than two, AFAICR — about prof. Reading from Warwick University (or was it prof Warwick from Reading University?) certainly led to a lot of readers being thoroughly entertained, which is no bad thing.

          And what this article leads to is, hopefully, that just as many readers will realise that the current state of the art in “AI” is at about the same level of “cyborg” that he represented. How is that not a good thing?

    2. juice

      > I appreciate the debunking of hype, but there has been a genuine breakthrough in machine learning that has led to amazing results in image recognition, speech recognition, OCR, and translation.

      I fully agree that there's been some amazing results in various fields, as a result of improvements in machine learning.

      But the point of the article is that this machine learning isn't generating "intelligence": it's generating a tool (aka expert system) which is very good at a single job presented in a specific way. Change any of the parameters and the tool will shatter.

      Personally, I think it's highly debatable as to whether the current approach to neural networks is ever going to be able to produce something more than a tool.

      But in the meantime, there's a lot of money to be made from promising the earth, much as happened during the last few "Next Big Thing" waves (3D displays and VR, blockchain, etc).

      Some people will get rich, some early adopters and angel investors will lose out and a few interesting things of actual use will emerge. And then the cycle will begin anew!

  27. rocklands.cave

    Excellent - liked the "Or haven't yet."

    ...and it's that line that drives the current madness.

  28. kneedragon

    Yes, quite so. Technical people who are passionately interested in this stuff, often make some extremely troubling statements about it. Muggles, as one would expect, completely fail to get it.

    Artificial Intelligence, as we have it today, is not really intelligence. There's not a soul or a consciousness in there. There's a machine, a computer system and program(s) that do some things one would associate with neural networks, some to do with fuzzy logic and some that have to do with dumb statistics.

    Dumb statistics... Go buy a book that teaches you how to play poker, and one major subject will be the odds of getting a pair, or 3 of a kind, or a flush, or a full house. Armed with a little bit of base cunning, and a good working understanding of the betting system, and a splash of cunning human psychology, AND the knowledge of how likely it is that anybody else at a 4-hand table, where one has folded, and you hold a king-high straight... has a hand that can beat yours...

    Dumb statistics don't make you intelligent, but they do provide you with enough help to make decisions, and probably better decisions than most of the rubes you'll be playing against.

    A system like this can be trained to play a complex game like Go, or Chess, better than any human player alive. But that doesn't make it intelligent. It can and it does make an extremely useful tool, which can maybe find better choices and strategies than you most of the time.

    But that expertise doesn't tell you what to do if you spot a round metallic hole up the sleeve of the player opposite you. "How to win at Poker" doesn't explain Derringers. It doesn't know the house can get a "maid" in to do some housekeeping while you're up, and have her bend over and show you she isn't wearing any.... which is prone to cause some degree of distraction....

    Artificial Stupidity doesn't "Understand" anything, the way a human does. It doesn't think.

    It can be incredibly good at what it does, but it's nothing like "intelligence" in the way we think of it.

    Now that's not to say it's no good. It is good, but it has limits. Like fire, you have to learn to use it. Like a magnet dangling from a thread, it can tell you something you otherwise wouldn't have known... But the fact you've discovered the Compass, doesn't mean you're God, and doesn't mean it's God either.... It is a very useful trick. It does make navigation easier. But, you've still got to row, and bale, and try not to capsize..... It's not the hand of God and it's not the answer to all your problems. It's a tool...

    1. JLV

      >A system like this can be trained to play a complex game like Go, or Chess,

      Good point, but to go further: the current state of the art is that a computer can't be taught Go and Chess together (other than by doing it twice). It really is very single-purpose; there is _no_ generalization involved in the actual learning.

  29. Anonymous Coward
    Anonymous Coward

    I agree

    I have long disagreed with all these predictions of machines being self-aware. We do not yet understand what consciousness is. There is an assumption that it's an emergent property that appears once there is a certain level of complexity, but we have absolutely no evidence of this.

    There is a HUGE difference between a self-learning system and a self-aware system. We have made incredibly clever and sophisticated systems, but all the fears of Musk and Hawking etc. that current AI is on a path where some kind of super-consciousness will emerge are bizarre. None of our neural networks have anything that even hints at any level of consciousness. Even cats and dogs and mice and creatures that are largely instinctual still have a level of consciousness that we can only emulate - we can't reproduce it.

  30. Andy 73 Silver badge

    Well..

    One skill Hassabis certainly seems to have is the ability to be tremendously excited by the work he's doing, and to communicate that excitement to people who might pay him. He's had quite a long history of building emergent systems of various sorts, and promising that each will deliver a unique experience. I'm sure there are interesting and novel components to his work, but casual inspection usually results in devs going "Ah, so he's doing <X>", where <X> is a fairly well known technique, being applied to ever larger data sets.

    This seems to be a common theme in AI research, where researchers posit that if the data set is big enough, eventually we'll get something new. It's unfortunate to confuse that with the less impressive flashes of insight into particular systems that fill most press releases. "We discovered that <doing something counter-intuitive> results in <some desired outcome>" sounds like a great leap has been made in understanding, whereas it's usually just the case that dispassionate data analysis has revealed unusual correlations.

  31. Michael Wojcik Silver badge

    Sigh

    Here we have an author who displays a limited understanding of the subject, and an axe (hardware innovation) to grind. Of course the Reg Commentariat, which tends to curmudgeonliness and cynicism (not bad qualities) agrees.

    Yes, popular accounts of ML research are overstated, oversimplified, and often incorrect. Yes, they employ vague and misleading metaphors. In what area of research is this not true?

    And if you're going to object that ML algorithms don't display "creativity" or other anthropomorphized features, I'd like to see an applicable definition of those features. While we're at it, can Fentem or any of his commentard supporters explain how the human mind is anything other than the effect of a mechanism, and thus anything that is qualitatively different from a computable function? [1]

    And a more specific objection: Fentem's claim "In other words, there has not been any significant conceptual progress in AI for more than 30 years" is bullshit, for any useful definition of "AI". Even a cursory review of several of the many papers on ML in, say, Adrian Colyer's blog shows that there has in fact been considerable conceptual progress in the field. Also, not all ML systems - not even all NN systems - use backpropagation; see the extensive research on gradient-free optimization of ML systems.

    I do think that ML is in a bubble right now; that it's oversold and over-applied; that research is already demonstrating that our ML systems are much more fragile and considerably less compelling than they might seem when you just look at the shiny demos. But to claim that the field hasn't advanced in 30 years, or (more egregiously) that only some sort of vague, handwaved hardware approach can produce true innovation, is rubbish.

    [1] And no, I'm not buying Penrose's appeal to quantum effects. As Thomas Metzinger put it, "For middle-sized objects at 37°C, like the human brain and the human body, determinism is obviously true". I might consider "obviously" a bit strong, but the burden of proof falls to the non-determinists.

  32. A-nonCoward
    Alien

    AI proves Creationism

    If there really is any "creativity" here, it is the creativity of DeepMind researchers who devise and manage the processes that train the systems.

    Yup, my point exactly. There has to be an Architect (or a pony) somewhere

  33. A-nonCoward

    Hear hear!

    I was into AI as part of a dot-com in the '90s.

    Every time I have a conversation with current devotees, it sounds like the same stuff, nothing really new. Actually, we had better, ginormously more efficient code; these kids nowadays cannot do anything low-level, it seems. Makes me think about how making a cup of tea can kill the biggest processor ever - just make enough layers of interpreted code, voilà.

  34. LogicMOO

    Great article Andrew.

    In the early 90s I took backpropagation to some of the limits we see today... and discovered that I needed to do something new and different. (Actually, I had to go back to the 1970s and get "caught up"!) Even now it has taken me over 30 years of working through cognitive theory after theory (while working for companies on DoD-funded AI projects) to find the right mix of sauces, which I have now worked on independently for the last 6 years. I am just now only 80% confident that I maybe know what needs to be done so as not to waste my own and others' resources, and now I am at a conundrum: I can go to work for a company (two have offered) that is willing to fund my research, but I have to give up rights to my work AND basically work alone?! What is the point, right?

    I think if more people understood your article, they would see that it is actually true. Maybe we could find open-source funding. Or maybe we could move on to a point where researchers like us, who maxed out deep learning and started to create the next great thing, can continue to develop!

  35. David Hall 1

    Where's Kevin?

    Surely no article like this is complete without a quote from AI's black sheep Kevin Warwick!

  36. Anonymous Coward
    Anonymous Coward

    Personal Assistant

    For decades products have been marketed as intelligent.

    Usually using the example of booking a flight automatically based on a diary entry.

    When Sergey Brin sacks his human personal assistants I will start to take this seriously.
