ARM chip OG Steve Furber: Turing missed the mark on human intelligence

"Brains are massively parallel. We each have just under 100 billion neurons inside our heads, all running at the same time. And they are hugely connected, with 1015 synapses connecting the neurons together. The way forward in computing is parallelism. There is no other option." Professor Steve Furber, one of the designers of …

  1. Anonymous Coward
    Anonymous Coward

    Marvellous

    That's the sort of content that keeps me coming back to the Reg - great science, great tech, lucidly and amusingly presented. Thank you.

    1. Korev Silver badge
      Pint

      Re: Marvellous

      Me too

      One for the author ->

      1. Will Godfrey Silver badge
        Pint

        Re: Marvellous

        ... and a crate for Steve Furber's team.

        1. Adrian 4
          Pint

          Re: Marvellous

          Partially-Ordered Event-Triggered Systems

          And a Friday afternoon's worth for Andrew Brown.

    2. Stoke the atom furnaces

      Re: Marvellous

      ARM Chip OG (Original Gangsta).

      Who came up with such a brilliantly succinct headline?

  2. Voland's right hand Silver badge

    Two parameters involved

    1. Compute elements. That we have solved - our compute vastly exceeds the capacity of your average neuron. A single neuron is not even a 4004 - it is less - just a few logic gates.

    2. Network density. That is something we have actually failed to figure out. "Creative Wiring" does not cut it. While the I/O bandwidth of a brain is nothing to shout about, the interconnect (especially in some birds) is orders of magnitude above anything we have come up with.

    We will not get anywhere near solving the AI problem (not even to mouse brain levels) until we crack that one. Sure, we can use neural nets and other AI-style approaches to solve specific problems. Getting a mouse brain to function, however, is outside our abilities. Even if the mouse is not called Algernon.

    1. Anonymous Coward
      Coat

      Re: Two parameters involved

      If your mouse Algy has rhythm, he might be working on a different problem.

      1. defiler

        Re: Two parameters involved

        Everyone knows Rastamouse has rhythm.

  3. jake Silver badge

    So ...

    ... The folks currently making/spending massive bucks in AI/ML are either cluelessly following a trail to nowhere, or they are shysters?

    1. Anonymous Coward
      Anonymous Coward

      Re: So ...

      "The folks currently making/spending massive bucks in AI/ML are either cluelessly following a trail to nowhere, or they are shysters?"

      Neither. Given that we simply don't understand how biointelligence works, mainstream AI/ML development has focussed upon a statistics-based approach that we know does work, at least to a degree. This statistics-based approach, though, isn't capable of self-learning and requires exhaustive training just to be able to solve each specific type of problem; it is never going to spontaneously deduce the existence of rice pudding and income tax. As Steve Furber points out, biointelligence doesn't work like that; "you could take a two-year-old human, show them one cat, and they'll recognise cats for the rest of their life."

      I also think that he makes a very good point when he highlights the energy and efficiency considerations; this goes far beyond the simple matter of 'jelly' vs. silicon and points to a completely different paradigm.

      1. DropBear
        WTF?

        Re: So ...

        Actually, I have enormous issues with how he highlights the energy and efficiency considerations. The current units we (they) use to simulate neurons have nothing in common with actual neurons, which are more akin to a simple logic gate. A bazillion of "cores" takes megawatts to run because each of them is incomparably more complex (and more active) than a neuron. Do neurons go "ping" billions of times per second? No? Well then. On the other hand, a bazillion of logic gates takes only watts to run - it's called "one single core". It's infinitely less interconnected internally than an equivalent number of neurons would be of course (and it's not wired for parallel processing), which is why that single core is not much use for AI; but to compare efficiency numbers like this is not even wrong. It's just fucking meaningless.

        1. Primus Secundus Tertius

          Re: So ...

          @DropBear

          I doubt that a neuron is 'just a simple logic gate'. Even a bacterial cell, without a nucleus, embodies feedback mechanisms without which it would not survive. Cells with a nucleus, including neurons, are much more complicated than bacteria.

          I therefore believe that until we understand the whole evolutionary history of cells and brains we will not properly understand how the brain works. Current AI will, I expect, produce useful machines and some lessons, but not that full understanding.

        2. Anonymous Coward
          Anonymous Coward

          Re: So ...

          Meaningless? I've really got to disagree here. Like I said above, this goes far beyond comparisons of hardware and wetware. The key thing is the amount of 'work' achieved within the energy budget.

          Sure, you may be able to power a lot of simple logic gates on a low energy budget but you won't be able to make them do very much with existing paradigms.

          It's basic and fundamental physics; you can't change anything without energy being involved, whether it's flipping a logic gate or firing a synapse, and the bottom line is that our brains need a lot less energy to do the vast range of things that they do than the simplest silicon system.

        3. the spectacularly refined chap

          Re: So ...

          A bazillion of "cores" takes megawatts to run because each of them is incomparably more complex (and more active) than a neuron. Do neurons go "ping" billions of times per second? No? Well then.

          Well done on completely missing the very point he was making. Of course neurons don't operate at those kinds of speeds; the whole point is that throughput is achieved via parallelism instead of one big core running at unimaginable speed. His observation was that this isn't what happens, so why are we proceeding on that basis?

          Furber is far smarter in this area than you or I will ever be. You do not arrive at some profound insight by calling him a dick, taking a tiny line of argument he uses, and then developing that line in the same way he himself proceeds.

          1. handleoclast

            Re: So ...

            His observation was that this isn't what happens, so why are we proceeding on that basis?

            Fundamental theorem of computing: any problem that can be solved using multiple CPUs can be solved by a single CPU using context switching. It's just slower.

            So yes, you can emulate millions of neurons with a single CPU, it's just slower. Even when the CPU is clocked in the Gigahertz it's still a lot slower than a million neurons, because the CPU has a lot of extra complexity to allow it to perform general computing problems. The architecture isn't optimized for emulating neurons, it's optimized for being able to handle many different types of problem programmatically. The same CPU could run a game, or let you browse the internet, or compile some code, or...

            Long ago I realized that neurons have some similarities to how overflow-rate multipliers were used in the early days of CNC machines. Not identical, but maybe enough to point the way to an optimized neuron emulator. Or maybe I'm talking bollocks, there. Then again, the earliest hardware emulations of neural nets used even simpler circuitry and managed primitive optical recognition. Plessey, I seem to recall, had something to do with that research.

            Anyway, using a CPU is almost certainly the wrong way to emulate neurons in bulk and at speed. The architecture is completely wrong. But it's very configurable, which is what you want at this stage of the game. If you came up with a better hardware architecture it would require custom chips at great expense, and when you put enough of them together you'd probably find your architecture had problems. CPUs let you research the problem enough that one day you'll be able to figure out what you want well enough to go to a completely different hardware architecture with a degree of confidence.
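
            Something like this, roughly, as a minimal sketch - one CPU stepping a big array of simple leaky integrate-and-fire units each tick (the accumulate-until-overflow behaviour isn't a million miles from an overflow-rate multiplier). All the numbers below are purely illustrative and have nothing to do with how SpiNNaker actually does it:

            ```python
            # Minimal sketch: one CPU time-multiplexing many simple neurons per timestep.
            # Leaky integrate-and-fire units with purely illustrative constants.
            import numpy as np

            N = 1_000_000       # neurons emulated serially on one core
            DT = 0.001          # timestep, seconds
            TAU = 0.02          # membrane time constant, seconds
            V_THRESH = 1.0      # firing threshold
            V_RESET = 0.0       # potential after a spike

            rng = np.random.default_rng(0)
            v = np.zeros(N)     # membrane potentials

            def step(v, inputs):
                """Advance every neuron by one tick - serial work from the CPU's point of view."""
                v = v + DT * (-v / TAU) + inputs   # leak towards rest, then integrate input
                fired = v >= V_THRESH              # which neurons crossed threshold this tick
                v[fired] = V_RESET                 # reset the ones that spiked
                return v, fired

            for _ in range(100):                   # 100 ms of simulated time
                inputs = rng.normal(0.0, 0.05, N)  # stand-in for synaptic input
                v, fired = step(v, inputs)
            ```

            Doing that same update a million times per tick on one core is exactly the "slower but equivalent" trade described above; a purpose-built parallel architecture would do those million updates at once.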

          2. DropBear

            Re: So ...

            "Well done on completely mission the very point he was making."

            I must return the compliment - way to miss my point too. I was commenting on power requirements alone, and as long as boffins are using entire cores as "units" instead of something more or less equivalent to a logic gate - which, based on our current limited knowledge, is what a neuron is functionally most similar to, regardless of the inner complexity that keeps it alive - it's ludicrously pointless to even mention consumption side by side. No, we're not using parallel architecture now the way a brain does. We use stuff that does one single thing at a time, working very fast. Which is why it needs so much energy, especially if we go on to build huge clusters of them trying to mimic a brain. If we were using much more parallel but relatively SLOW stuff, akin to many-input gates, it would consume very little power even today, even if the resulting device appeared to process massive amounts of data quickly due to its parallel structure and the sheer number of "gates".

            TL;DR: neurons are, as far as I know, NOT ultra-fast oscillating units, which is the thing that makes electronics power-hungry. Any slow-switching electronics simulating whatever it is they actually do would similarly have LOW power consumption, unlike the myriad of super-fast cores we build our brain simulators out of today. Comparing those abominations to a brain's power consumption is still not even wrong.
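
            (For reference, the textbook first-order estimate of CMOS dynamic power is P_dyn ≈ α·C·V²·f - activity factor times switched capacitance times supply voltage squared times clock frequency. The toy numbers below are made up purely to show how the frequency term scales, and aren't measurements of any real chip:)

            ```python
            # First-order CMOS dynamic power, P = alpha * C * V^2 * f (textbook approximation).
            def dynamic_power(alpha, c_farads, vdd_volts, f_hz):
                return alpha * c_farads * vdd_volts**2 * f_hz

            fast_core = dynamic_power(0.1, 1e-9, 1.0, 3e9)   # clocked at 3 GHz: ~0.3 W
            slow_gate = dynamic_power(0.1, 1e-9, 1.0, 100)   # neuron-like ~100 Hz: ~1e-8 W
            # Same silicon, same voltage: dropping f by seven orders of magnitude drops the
            # dynamic term by the same factor (leakage then dominates in practice).
            ```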

      2. Anonymous Coward
        Anonymous Coward

        Re: Wheel vs Legs.

        Current computers and AI are like a car and wheels. A car goes from a-b with wheels. So it simulates a human, who goes from a-b with legs?

        Almost. It does the task, but in a different way. Thus current AI may have some of the aspects of a brain, or intelligence of a person, but not often and not completely.

        PS, as to energy use, some things can be changed without using much energy... it's just that we are not very good at it artificially just yet, whereas neurons manage a switch of potential energy very efficiently. :D

      3. Long John Brass
        Terminator

        Re: So ...

        "you could take a two-year-old human, show them one cat, and they'll recognise cats for the rest of their life."

        Hmmm ... Humans have an awful lot of wiring in place straight off the bat thanks to evolution. So it's not really a fair comparison to an AI that's starting from scratch. What's the embedded cost in watts over a billion years?

  4. J I

    Ancient history

    For those who like their computer history, it's perhaps worth mentioning that Acorn did actually do a prototype tablet device, the NewsPAD, around 1996 as part of an EU project, but it never got further than that:

    http://chrisacorns.computinghistory.org.uk/Computers/NC.html

    https://en.wikipedia.org/wiki/Acorn_Computers#NewsPad

    I tried one out at the time - it was a bit clunky, but it did point the way to what we have today.

    1. Chris Evans

      Acorn NEWSPad Re: Ancient history

      I recall hearing from Acorn that when Larry Ellison visited them about the NC reference design, they showed him the NEWSPad. He was impressed and wanted to take one back to the States; when they said they couldn't give him one he replied, "But I might buy the company!" They then explained that they only had two prototypes.

    2. Anonymous Coward
      Anonymous Coward

      Re: Ancient history

      So that's Apple up the creek without a paddle?

  5. Michael H.F. Wilkinson Silver badge
    Coat

    So basically, we need very many machines that go "ping"

    Sorry, couldn't resist. I'll get my coat. Mine's the one with the DVD of Monty Python's Meaning of Life in the pocket

  6. psyq

    Equivalent to the brain of...?

    "Put four chips on a board and you get 72 ARM cores, which equates to the brain of a pond snail. Put 48 chips on it and you get 864 cores, equivalent to the brain of a small insect."

    I am sorry, but no. Until we have a satisfactory model of neural computation, stating that XYZ ARM (or any other) cores are somehow equivalent to the brain of >any< living being is preposterous.

    Needless to say, at this moment we do not have such a model, so the actual required compute power is still unknown. Should we model networks, spikes, membrane dynamics, ionic channels, proteins, molecules...? What is the appropriate level of abstraction, if any? Nobody has yet found the answer, so no, a bunch of CPU cores is not equivalent to biological anything.

    1. matjaggard

      Re: Equivalent to the brain of...?

      It was specified first that this was ONLY about numbers of ARM cores vs numbers of Neurons.

  7. Milton

    Suspect assumptions

    All in favour of the science and I'm sure there will be much to learn from these massively parallel endeavours.

    That said, there are at least two glaringly suspect assumptions here:

    1. That because the human brain works with a lot happening in parallel, a computer must do so to the same level. This ignores the fact that silicon and the qubits that will eventually arrive on the scene have matchless power and many strengths that the squishy grey jelly simply does not. One reason the brain works with such parallelism is because it cannot clock at, say, 5 GHz. Jelly cannot do it. Silicon can. Insofar as the brain's parallelism is a compensation for its many other weaknesses, it is unwise to become too obsessed with parallelism for its own sake. This runs the risk of learning the wrong lessons from the human brain and can easily become a blind alley.

    2. That the animal brain is something we should faithfully emulate ... but why? Animal brains are evolved, not designed, and include a great many of the errors, inefficiencies, redundancies and circuitously superfluous kludges that evolution produces because it does not and cannot think ahead. You wouldn't design a robo-giraffe with a wasted length of neural wiring in its neck, as evolution caused to happen: you'd think ahead, *design*, and do it better. The human brain is shockingly easy to deceive and manipulate, constantly forgets and makes mistakes, is quite capable of holding beliefs contradicted by objective fact and rationality: what's the point of including all the weaknesses and bad stuff? Why try to replicate the human multiple-reinforced-connections way of storing memories (which gradually summarises, simplifies, erodes and sometimes completely fictionalises them) when technology can put ever-tinier terabytes of RAM and petabytes of storage in your hands, to be managed by software that will store far more data more accurately than a person ever could?

    If you do succeed in creating something with the processing power and *processing style* of a human brain, it will have to have emotions: fear, hunger and lust being near the top of the list, since they keep an organism alive and provide it with motivation. Without feeling, you have a computer, not a mind. Even assuming you can implement this in a non-organic substrate, and even assuming that this is not merely a software emulation of those feelings (therefore, still a computer), what do you do next? Answer: you're either a son of a bitch who's imprisoning an innocent child, or you spend the next 20 years getting stuck in an ethical thicket, because you've created a consciousness, something which probably ought to have freedom and citizenship and agency ... and the latter will by definition include the capacity to decide to do harm or good.

    In sum, attempting to build a truly human brain is probably impossible and almost certainly horribly unwise. Yes, by all means let's continue creating awesomely powerful computing devices, they may be our salvation. But where brain and mind is concerned, the ambition is in more than one way quite doomed.

    (And yes, I am purposely conflating brain and mind in this comment, which in this context is not necessarily a reductive fallacy.)

    1. Charles 9

      Re: Suspect assumptions

      Regarding (2), part of the reason for modeling a living brain, foibles and all, is to get a better understanding of how OUR brains work, of which concrete data is sparse at best. We can't model around something we don't understand yet; we could easily take a wrong turn.

  8. Anonymous Coward
    Anonymous Coward

    He's an interesting chap Steve. He was my first year tutor. Brain the size of a planet. Despite (this being 2009) being an expert in ARM, low-power networking and distributed device-based computing he'd never so much as used a smartphone as he "couldn't see the point".

    Made for an interesting conversation with a tutorial group of 19 year olds.

    1. Anonymous Coward
      Anonymous Coward

      'he'd never so much as used a smartphone as he "couldn't see the point".'

      If you're working in this sort of area then sometimes it's better to be disconnected from the results of your work .... is it really a good idea to realize that your life's work is to enable people to have a 24-7 connection to Facebook etc!

      I remember it hit me years ago when we were being asked to almost double the performance of the processor we were designing, and when pressed on why this extra performance was needed, the answer seemed to be "the customer wants to add 3-d shadows to the text on the on screen menus"

      1. Primus Secundus Tertius

        "the customer wants to add 3-d shadows to the text on the on screen menus"

        I guess that's the time to hand it over to the B team.

        1. defiler

          "I guess that's the time to hand it over to the B team."

          s/team/ark

    2. Korev Silver badge
      Boffin

      I met a very well known figure in machine learning recently who'd only just got his first smart phone (and barely knew how to use it) on the basis that Google and Apple use his technology so he ought to see it in use.

  9. Tom 7

    Missing 500 million years of structure.

    I think a major slowdown will be the simple fact that we are at the end of 500 million years of evolution. Our brain is not a neural net - it's a shitload of them put together in a specific way, with a considerable collection of initial conditions and connections that gives us a considerable headstart (sic) over a bunch of processors put together with what for now is guesswork.

    However, now that we have AI that can learn for itself and beat the best man-made Go machine, I think it could be interesting to watch development over the next few years as machines develop angst with no alcohol to fix it.

  10. Mage Silver badge
    Boffin

    72 ARM cores, which equates to the brain of a pond snail

    No, it doesn't.

    Maybe it's as many connections, but it's not at all like a brain.

    Even in the 1970s we knew the "future" of computers was parallelism. Programming, rather than hardware, has been the problem.

    It's very interesting research and I hope the "real" work is more about how to program parallel systems than chasing unicorns.

  11. amanfromMars 1 Silver badge

    And the Applications for Way Forward Parallelism, Professor?

    The way forward in computing is parallelism. There is no other option. .... Professor Steven Furber

    Well, no other more intelligent option, Professor. And the results and entangling will be dazzling and quite supernaturally disruptive and disturbing to moribund petrified status quo systems administrations.

    amanfromMars Oct 17, 2017 2:45 PM ..... [1710171945] ....... following opportunities on http://www.zerohedge.com/news/2017-10-17/russia’s-crypto-ruble-just-changed-game ...... or thinking to create them?

    Putin is openly inviting investment capital into Russia that is legal and above board. Russia wants legitimate businesses to operate in Russia in whatever currency they like as long as that business is transparent.

    Here's a SMARTR Joint AIBusiness Venture, methinks worthy of Putin Presidential Consideration ..... A Safe Harbour for Russia Crypto-Rubles be their very own CyberIntelAIgent Network of Global Operating Devices Live Active BetaTesting with Future Augmented Virtual Reality Productions for NEUKlearer HyperRadioProActive Live Operational Virtual Environments. ....... Quite Alien Space Places.

    Is anyone able to Offer and Deliver More, Even Better or Different and Working in a Parallel Dimension ....... which we can from here deeper explore and further examine with simple complex searching questions looking at forthright answers for dynamic future secured solutions.

    And just to make sure that there is no misunderstanding, any and/or all of that is readily available to any and/or all who would recognise their Need and Desire its Advanced IntelAIgent Feeds/Seeds/Magical Sources. I wouldn't want any national to be thinking they are excluded.

    1. Anonymous Coward
      Anonymous Coward

      Re: Applications, Professor?

      Yet a simple human brain, just maybe any of which go in bulk markets for a Penny Per Pack, is incomparable with Nanometricons in its effectiveness in Consumption to Work. Each of them can generate a unique universe, which none of the known, and being in project, supercomputers, can do.

      But who needs such a cheap universe? Does Budding the Handles That Fit IT rise its Anything to be Valued/Estimated?

      Solutions, SomeTHInG from which SomeOnE anywhere could have repeated, implemented, gained any kind of profit. Just anything given to our senses, that can be extracted from any of 8 bn of universes - that's the only value they can produce for those into accounting books/Mankind/etc.

      A simple supercomputer makes this extractionist behavio(u)r taking much less effort. Or - gives 815 minutes to edit this post, while a grey banner above the postbox, placed once by some lucky universe-maker, tells that one has only 10 (-;

      1. amanfromMars 1 Silver badge

        Application, Professor? NEUKlearer HyperRadioProActive Silk Road Ways/Quantum Communications Waves

        Howdy, AC,

        Methinks, Go East to the Middle Kingdom/Republic of China, is where all the NeuReal Surreal Flash Work is most likely to be very highly valued and regarded nowadays, AC .......

        Meanwhile, foreign investors can benefit from strong government support in emerging tech industries like VR device R&D and manufacturing. ..... Newly encouraged industries. R&D and manufacturing of virtual reality (VR) and augmented reality (AR) devices .... China’s 2017 Foreign Investment Catalogue Opens Access to New Industries

        Especially whenever the West is so nobbled to server old systems propping up failing capital markets for corrupted vested interests and thus absolutely terrified of that which emerges in/from the future which they neither comprehend nor command and control.

  12. Philip Stott

    I can't help thinking that Steve Furber should have a chat with Jeff Hawkins of Palm & Numenta fame (which I also initially learnt from another excellent Reg article).

    Between them I reckon we could expect SkyNet to come online in short order.

  13. Paul 195

    The point Mr Furber makes about power consumption is a very good one, and gives us a very good clue about just how far away we are from emulating human intelligence. It's something for all those people who expect to merge with the singularity to think about. Even with your big heavy meat body attached, you are about 100,000 times more energy efficient than today's best technology, even if we knew how to upload you. A thousandfold improvement would get your energy cost down to 20 kW, so Sizewell B would be able to power 63,000 people, about 3/4 of the population of Basingstoke.
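
    Spelling out that last step, as a back-of-envelope sketch (the reactor figure is my own round number, not one from the article):

    ```python
    # Back-of-envelope only; Sizewell B's gross output is roughly 1.25 GW.
    sizewell_b_watts = 1.25e9
    per_person_watts = 20_000     # 20 kW per uploaded person, per the estimate above
    print(sizewell_b_watts / per_person_watts)   # 62500.0 - call it roughly 63,000 people
    ```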

    1. Charles 9

      But if you give it (and physics) some additional thought, you begin to realize that perhaps the REAL reason the brain is as "efficient" as we think is that we're also overlooking the idea that the brain is a bodge job. IOW, it's full of shortcuts and assumptions. It's as simple as taking a very good look at how the brain interprets the signals from our eyes (which, BTW, is rather incomplete). Extrapolate from that and you begin to wonder just how many of these bodges are built into our brain.

      1. Anonymous Coward
        Anonymous Coward

        Look at structure...

        A "bodge job" collapses. Like a poorly built house.

        A "network" is sprawling, but you will find each of those knots a requirement to get from a-b efficiently without blocking the other.

        Just look at plants. While a garden is the opposite to a jungle, each individual plant will *always* go towards the light source for efficiency.

        Thus the assumption that the human brain is a "bodge job" may be because as a group it looks like a jungle, but on the neuron level etc it is super efficient. It uses "assumptions" only where required or where failure is not a problem (see blind spot of the eye image processing etc).

        1. Charles 9

          Re: Look at structure...

          No, a bodge (or kludge) simply means it's assembled haphazardly. Evolution tends to do that sometimes because it tends to be reactive. Has no meaning as to whether or not it actually works, just that it was designed on the spot (trust me, I've watched Scrapheap Challenge--now those were some bodge jobs; some just worked better than others). After all, not everything that comes out of evolution makes sense (like yawning).

          1. Paul 195

            Re: Look at structure...

            "haphazard" rather ignores just how efficient evolution is at engineering good structures. Those random mutations which create small improvements become part of the gene pool, and those which don't get lost. The process is one of continual iterative improvement with ruthless whittling of functionality that doesn't help you survive long enough to have offspring - and long enough to help your offspring survive tool.

            The fact is that we are nowhere near building machines which work as well as the thing you are describing as a "bodge". Good engineering is all about only building as much as you need; the information we throw away simply isn't needed most of the time. If we knew how the brain was so good at discarding the irrelevant to concentrate on the important, we might be able to build better machines.

            With lots of effort we can optimize machines to perform specialized tasks far better than we can, but we are still an incredibly long way from creating anything as adaptable and smart as a human. Or even a cat.

  14. davcefai

    Number of cores

    Not all of the brain is concerned with reasoning. A goodly portion is "engine management" of the body. Without entering the other arguments in this thread I would venture that, based on the author's calculations, they will end up with a "brain" bigger than a human's.

  15. DropBear

    "The way forward in computing is parallelism. There is no other option."

    I seriously doubt it. Parallelism is only good for "data flow" processing, which actually approximates humans acceptably - perceptions going in, actions going out, emotions rattling around inside. Given enough runtime, enough state might even accumulate inside for the occasional "I think therefore I am"; but as far as current general-purpose computing goes, it's incomparably better for anything rigorous and precise even now than any "parallelised" (or even our own, "state-of-the-art") brain will ever be. We're being beaten by any pocket calculator for that sort of thing. Yes, parallelism-powered AI is what you'll need for mollycoddling the apparently endlessly ageing first-world population. But it will be useless* as soon as you need a CAD package, or a VR simulation or, you know, serving up a webpage...

    * Bear in mind that in this context "parallelism" is typically understood as "a large number of interconnects between processing units, a large number of which being affected by any information diffusing through the system" and NOT "a large number of specialized processing units performing the same well-defined operation on many pieces of data simultaneously" the way we have in GPUs today.

  16. Anonymous Coward
    Anonymous Coward

    AI Getting Nowhere

    "It turns out human intelligence is not about that. We're still not quite sure what it is about. However, we do know the brain is formidably power efficient."

    So he is admitting that he, the rest of the AI community, and biologists have given up on doing the actual basic research - the SCIENCE - needed to solve the problem, and are instead just throwing dead rats or memory, processors and interconnections at the problem until they find intelligence (or the funding dries up).

    1. Slx

      Re: AI Getting Nowhere

      Very few computers can run on a cheese sandwich.

      1. magickmark
        Thumb Up

        Very few computers can run on a cheese sandwich.

        Or a really hot cup of fresh tea

  17. Korev Silver badge
    Joke

    An easy problem to solve

    If only they used brain processors instead of ARM ones then they'd probably find it a lot easier...

  18. fluffybunnyuk

    It's a nice idea, unless like me you subscribe to the view of the non-computability of conscious thought...

    1. David Nash Silver badge

      the non-computability of conscious thought...

      Well, we can't know whether that is right or not without doing the research.

      It doesn't help much to "subscribe to a view" without showing that it is either correct, or not.
