'CAPTAIN CYBORG': The wild-eyed prof behind 'machines have become human' claims

For 14 years, The Register has been chronicling the publicity stunts of Kevin Warwick, an attention-seeking academic with a sideline in self-mutilation*. In fact, Warwick has been making improbable claims to the press for much longer than that: over twenty years. But the world has continued to relay Warwick's stunts and …

COMMENTS

This topic is closed for new posts.

  1. i like crisps
    Go

    "The Guardian has performed a Reverse Ferret"

    Even Tom Daley can't perform that one!

  2. Anonymous Coward
    Anonymous Coward

    So much to do, so little time...

    What strikes me as odd about AI is that we really do not understand how our minds or brains function, and yet we persevere in the hope that someone is going to programme a binary computer which has the same functionality as the mass of grey cells in our head. OK, I'll relax that criterion and say "mimic"... but it's still a tall order...

    Yes we can programme a machine to play chess (the moves are known; the number of possible moves and the consequences of those moves are lost on all but the best chest masters, which is why a machine can play and win), and we can programme certain solutions in which the problem space comprises a set of rules and objects to which those rules are applied.
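
    The "set of rules and objects" framing is exactly why chess yields to machines: a basic engine is just exhaustive search over legal moves plus a scoring rule. A minimal sketch of the idea, as plain negamax over a toy game tree (not a real chess engine; the tree and its leaf values are invented for illustration):

```python
# Minimal negamax search over a toy game tree. Each node is either a
# numeric leaf (a static evaluation from the point of view of the side
# to move) or a list of child positions. This is the core of a basic
# chess engine: enumerate the legal moves, recurse, pick the best.
def negamax(node):
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    # Children are scored for the opponent, so negate their values
    # and take the best reply for the side to move.
    return max(-negamax(child) for child in node)

# A tiny two-ply tree: the mover picks a branch, the opponent then
# picks a leaf (scoring it from their own point of view).
tree = [[3, -2], [5, 1], [-4, 7]]
best = negamax(tree)  # the mover's best guaranteed score
```

    Alpha-beta pruning, opening books and endgame tables are all refinements of this same brute-force search, not a different kind of "thinking".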

    1. Lee D Silver badge

      Re: So much to do, so little time...

      Excusing the "chest masters"...

      Worse than that... we can't even decide what intelligence is. The Turing Test is not really well-defined and certainly isn't a cover-all explanation of when we hit "AI". I would reject it out of hand as having achieved nothing more than this article points out - deception by one human of another using a machine as the tool of that deception. That's not really "intelligence" on the part of the computer, only a display of intelligence, or lack of it, by the humans.

      All AI that I've ever seen is highly dubious and amounts to nothing more than basic heuristics, or gibberish. Although I can happily claim that humans give the latter all the time, we tend to perceive those humans as "non-human"... a guy telling you what his gerbil had to say about a certain place is the kind of guy you'd move down the bus away from. And the former do not describe humans at all - in fact, there are a vanishingly small number of "rules" that any particular instance of a human will follow blindly every time. And en masse, we're even dumber.

      I hate the focus on "AI". We don't have it. We don't have anything on the horizon near it. And we've been doing and saying the same things for nearly a hundred years, if not more. We're not even close. We can do interesting things, we can automate machines to a high degree and we can have them do some wonderfully difficult work a lot quicker than we ever could (e.g. in the computer vision areas, etc.), but there's not a peep, not a glimpse, not a sign, of any real "intelligence" that I've ever seen.

      Which makes me wonder, sometimes, if there's more than just a "complexity" problem to be solved to approach AI... whether there's some inherent physical process or characteristic that inserts enough "illogical" or random elements into the physics to make everything just that bit more capable of breaking from logic and rules and into decisions and intelligence. I'd place my bets on something quantum, personally.

      And the bit that really annoys me? It takes a human baby several years of constant, intense, personally-focused training to get close to mastering the baby-end of intelligence. Yet every AI project I've seen tends to be a year or two old at the most - usually just long enough to write a paper, get your doctorate and then flee before someone asks you to do any more on it. And the computer systems we have don't even approach the complexity of the human brain, nor its genetic "bootstrap" headstart on being successful at forming intelligence quickly from a blank slate.

      Start an AI project that is intended to run continuously and last 100 years, using the most powerful hardware available, and train it constantly with the same intensity of data and detail as we would a baby. Then we might approach something akin to a three-year-old. It's no wonder we've got nowhere with it so far.

      1. phil dude
        Boffin

        Re: So much to do, so little time...

        A nice observation, and pretty much a textbook example of the elephant in the room!

        Studies on infant humans and chimpanzees (Pan troglodytes), and on sign language interpretation, show that the language centre of the human brain is pre-constructed.

        So you need the billion years of evolution, and then you can leave it watching 1970s Open University programmes...

        P.

        1. Michael Wojcik Silver badge

          Re: So much to do, so little time...

          Studies on infant humans and chimpanzees (Pan troglodytes), and on sign language interpretation, show that the language centre of the human brain is pre-constructed.

          That is so wildly over-simplified it's nearly meaningless. It's also irrelevant, unless you think that "pre-construction" involves some magical, non-mechanical operation.

          The real problems with Natural Language Processing have nothing to do with the innate language capabilities of primates, or the lack thereof in machines; innate capabilities can be black-box modeled. The problems that occupy NLP research are the many extremely complex processes involved in natural language use, such as inferring elided predicates and conversational entailment, coupled with the vast amounts of world and domain knowledge that humans bring to every incidence of parole. These are quantitatively difficult problems with many qualitative unknowns, but they are not ineffable.

          In any case, serious contemporary AI research has little to do with the popular mischaracterization of AI, and little to do with this Turing-Test-in-practice rubbish.

      2. i like crisps
        Trollface

        Re: So much to do, so little time...

        "Chest Masters", was that before or after the "Bullworker"...sounds very 70's

      3. as2003

        @ Lee D

        I'm not quite sure what your point is here. You just seem to be indiscriminately pouring scorn on all aspects of the AI field.

        Sure, it's proved to be a lot more difficult than anyone expected; it may not even be possible! But what would you have us do? Just give up?

        Your bit about "every AI project I've seen tends to be a year or two old at the most - usually just long enough to write a paper, get your doctorate and then flee before someone asks you to do any more on it", is grossly disingenuous. You seem to be implying that the sum total of activity in the field of AI amounts to a handful of pre-doc students taking random pot-shots at the problem?!

      4. wobblycogs

        Re: So much to do, so little time...

        I couldn't agree more, but I think you, and most AI researchers, are underestimating just how much "programming" humans start life with. Obviously I have no hard evidence, no one does, but having watched two young children grow up from birth to four, I'd say that evolution has pre-programmed us a lot more than people suspect.

        Drawing an analogy between humans and computers, I'd say the human POST is about 3 to 3.5 years in duration. The human BIOS provides a basic framework to acquire language, emotions, movement, self-preservation etc. The parents' role is to install higher-level programs, e.g. one or more spoken languages, don't touch that particular furry caterpillar, etc.

        If you look at this from an evolutionary point of view, coming out partially pre-programmed makes a lot of sense, as it's so much quicker than having to teach everything from scratch every time. That doesn't mean every bit of pre-programming has to become active immediately at birth, though; it switches on at the appropriate developmental points.

        As I say this is just a suspicion but I really think we need to move away from the idea that we are a blank slate at birth.

        1. John Brown (no body) Silver badge

          Re: So much to do, so little time...

          "As I say this is just a suspicion but I really think we need to move away from the idea that we are a blank slate at birth."

          It's worth remembering that many other creatures are born and are up and running within minutes. Humans seem to be the exception to that.

          It may be that other creatures have so much more inbuilt pre-programming that there's less space left for new programming. Maybe we are more intelligent because we are born with less pre-programming and more space for learning.

          We don't have the largest brains on the planet but maybe that larger brain + little pre-programming = adaptable intelligence.

      5. JeffUK

        Re: So much to do, so little time...

        There's an argument that one of the IBM chess-playing supercomputers passed the Turing Test: not because it beat Kasparov, but because he was convinced that one particular move was the result of direct human intervention.

      6. Michael Wojcik Silver badge

        Re: So much to do, so little time...

        The Turing Test is not really well-defined and certainly isn't a cover-all explanation of when we hit "AI".

        Hardly surprising, since it was never intended to be anything of the sort.

        I would reject it out of hand as having achieved nothing more than this article points out - deception by one human of another using a machine as the tool of that deception.

        Perhaps it would be more useful to learn what the Turing Test is for, rather than embarking on some middlebrow rant.

        The TT is a philosophical gedankenexperiment intended to illuminate a position on the epistemology of thought - how we can know whether a machine is a thinking creature (Heideggerian Dasein, more or less, in one conception). Turing proposed it not as a practical exercise but as an epistemological line in the sand: if we can't find a decision procedure based only on the direct evidence of thought[1] to distinguish between humans and these hypothetical machines, then, he says, we have no grounds for considering the machines as non-thinking entities.

        Turing's position, interestingly, is more closely allied to American (US) pragmatism, which basically disavows metaphysical epistemology in favor of an exclusive reliance on measurable properties, than to the philosophical schools dominant in the UK at the time. Conversely, the US philosopher John Searle's famous attack on one form of AI[2], the Chinese Room gedankenexperiment, is more closely allied to English logical positivism: what do we mean when we use the word "thinking"?

        Robert French, in a very good piece in CACM, has pointed out why these Turing Test competitions may be interesting for people developing NLP systems and the like, but have little or nothing to do with AI. And the same can be said of the Test itself, except as a philosophical stake in the ground.

        [1] Which Turing in effect argues is blind interpersonal discourse.

        [2] Namely what Searle called "symbolic manipulation". It's worth noting that Searle believed in the strong AI project, in the sense that he thought the mind was an effect of a mechanical substrate, and thus in principle could be reproduced by a machine; in fact he claimed in print that he expected some day it would be. He just thought the strong-AI researchers of his day were using ridiculously oversimplified models and approaches. History seems to agree.

      7. Goat Jam

        Re: So much to do, so little time...

        "Start an AI project that is intended to run continuously and last 100 years, using the most powerful hardware available"

        Great idea! We could call it "Deep Thought" and it could tell us the answer to The Great Question.

        Might need more than 100 years to run though...

    2. Primus Secundus Tertius

      Re: So much to do, so little time...

      Surely the point of the research is to find out what kind of logic - logic on a large scale - is needed to produce something looking like intelligence. So, for example, neurons might be modelled at a logical level but not at the biological level.

      Of course, it is possible to suggest that there is something rational in a biological cell, i.e. not a 'soul', which is not currently known but enables a large group of them to show intelligence. After all, the eukaryotic cell is immensely complex. But this is conjecture, and arguably contravenes Occam's Razor. However, what a discovery if it turned out to be true.

      All fascinating stuff, though. If done properly, it is pure science.

      1. Don Jefe

        Re: So much to do, so little time... @Primus

        Logic and intelligence are wildly different things. Extrapolating one from the other can result in one of only two possibilities: an artificial Rain Man or an artificial Vulcan, either of which is a half step away from HAL 9000...

        1. oolor

          Re: So much to do, so little time

          Re: AC

          Well put. The problem seems to be that we are trying to construct something that has a high probability of being correct when in reality the problem is more akin to finding the least wrong answer. Though these two seem similar, they are actually opposites.

          Re: Lee D

          Very well put with the exception of the Blank Slate invocation. As phil dude points out, primates have a brain structure that is ready to soak up language, see Pinker:

          http://en.wikipedia.org/wiki/Steven_Pinker

          ___________________________________________

          Regarding the problem of logic and neurons: they do not work in any way like electronics; there is no set on/off. Even the exact same neurotransmitter release does not always correspond to the same membrane potential actions. Nor do the same pathways being triggered lead to the same output in repeated trials.

          Modelling neural networks as circuits works for understanding how the different areas of the brain work together. It tells us nothing of the 'computations' going on. The brain is better viewed as an evolving, massively parallel feedback system than as a computer. Like systems composed of humans, it is designed to work well in the face of repeated errors and to limit the costs of those errors, rather than to work out some Pareto-optimised ideal. Perhaps this is where the true power lies, and not in trying to get a 'correct' answer.
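
          The "no set on/off" point can be sketched in a few lines: a logic gate is deterministic, whereas even a crude stochastic neuron model gives varying output for identical input. A toy illustration only (not a biophysical model; the numbers and names are invented):

```python
import random

def logic_gate(x, threshold=0.5):
    # Deterministic: identical input, identical output, every time.
    return 1 if x >= threshold else 0

def stochastic_neuron(x, rng):
    # Probabilistic: the same drive fires the "neuron" only some of
    # the time, loosely mimicking the variable response of real
    # neurons to identical stimulation.
    p = max(0.0, min(1.0, x))  # clamp the drive into [0, 1]
    return 1 if rng.random() < p else 0

rng = random.Random(42)
gate_outputs = {logic_gate(0.7) for _ in range(100)}
neuron_outputs = {stochastic_neuron(0.7, rng) for _ in range(100)}
```

          One hundred identical inputs produce one hundred identical gate outputs, but a mix of firings and silences from the stochastic model.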

    3. Anonymous Coward
      Anonymous Coward

      Re: So much to do, so little time...

      "Yes we can programme a machine to play chess"

      Chess programs are a good example of how the definition of AI changes over the years. Back in the 70s & 80s they were seen as the cutting edge of AI; now they're just seen as the dumb brute-forcers that they are (albeit with some finessing code added for endgames). I suspect the same will go for computer vision and speech recognition in the future, even though today they still have the wow factor. For some people, anyway.

      1. gazthejourno (Written by Reg staff)

        Re: Re: So much to do, so little time...

        I remember my dad was really into speech recognition when it first came out in the mid-90s, thinking it would speed up document writing no end. Unfortunately, the software not being able to cope with a non-received pronunciation accent, he ended up going back to the keyboard.

        Looking at the comedy howlers that Siri and the like still throw up today, I'm thinking the tech won't really move on at all.

    4. Michael Wojcik Silver badge

      Re: So much to do, so little time...

      What strikes me as odd about AI is that we really do not understand how our minds or brains function, and yet we persevere in the hope that someone is going to programme a binary computer which has the same functionality as the mass of grey cells in our head.

      That has precious little to do with contemporary AI research. It might be some popular misconception of AI research, and it was true to some extent decades ago, but any researcher claiming these days to be attempting to emulate human cognition in toto is almost certainly a crank or charlatan.

      Since some time in the 1980s the overwhelming majority of AI research has been into approaches for practical flexible problem-solving in constrained domains, and cognate subfields such as natural language processing, deriving (claimed) facts from narrative descriptions, and the like.

  3. Tony W

    Not Turing's finest hour

    Turing was a very clever man but he hadn't grasped the open-ended nature of human conversation, with its almost infinite number of possible variations, so he underestimated the time before his test would be passed, and the amount of hardware needed to do it.

    I predict that the Turing test won't be genuinely and convincingly passed for very many years. And that's just as well. I can't see a lot of legitimate uses for a computer that can successfully fake a human life history and experience, but I can see some nefarious ones.

    1. phil dude
      Coat

      Re: Not Turing's finest hour

      I mean, who could have predicted the phase space of language that is Twitter...!

      P.

    2. Mage Silver badge

      Re: Not Turing's finest hour

      Also, it may be that, like a program able to play chess, passing the Turing Test can be solved without any recourse to AI.

      You know how Google Translate works, compared to how people tried to build machine translation for 30+ years? A big Rosetta Stone and search. Not clever parsing and grammar.
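
      That "big Rosetta Stone and search" description can be sketched directly: phrase-based statistical MT builds a phrase table from an aligned corpus and returns the most frequently observed translation, with no parsing or grammar involved. A toy illustration (the phrase pairs are invented; real tables hold many millions of entries):

```python
from collections import Counter

# Toy "Rosetta Stone": phrase pairs harvested from an aligned corpus.
aligned_pairs = [
    ("bonjour", "hello"), ("bonjour", "hello"), ("bonjour", "good day"),
    ("le monde", "the world"), ("le monde", "the world"),
]

# Build the phrase table: each source phrase maps to translation counts.
phrase_table = {}
for src, tgt in aligned_pairs:
    phrase_table.setdefault(src, Counter())[tgt] += 1

def translate(phrase):
    # Pure lookup-and-search: return the most frequently observed
    # translation, or fall back to the input when nothing matches.
    counts = phrase_table.get(phrase)
    return counts.most_common(1)[0][0] if counts else phrase
```

      No grammar anywhere: scale the table up by a few orders of magnitude and you have the gist of how corpus-driven translation overtook thirty years of hand-built parsing.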

    3. Charles Manning

      Was the Turing test supposed to be a real test?

      As one who has studied AI on and off for the last 30 years and has also read everything I can on Turing, I am unconvinced that Turing really thought the Turing Test to be a practical test for AI.

      Turing was often asked whether computers would ever be intelligent. Directly proving a box of switches to be intelligent is hard, particularly since intelligence is itself such a hard concept to define, let alone measure.

      Turing was a mathematician, and mathematicians love to use "tricks" to prove things. One of those handy tricks is reductio ad absurdum: http://en.wikipedia.org/wiki/Reductio_ad_absurdum

      Devise a test that shows a computer is not intelligent. If the test fails to prove the computer is not intelligent, then we have to accept the opposite: it is intelligent.

      The question is really: Did Turing intend this as a real test or was it just a way to cut through all the "can computers ever be intelligent" bs?

      1. Mage Silver badge
        Pint

        Re: Was the Turing test supposed to be a real test?

        Best put comment I've ever read on Turing Test and AI.

        Have a beer or liquid refreshment of your choice.

    4. Shaha Alam

      Re: Not Turing's finest hour

      true. only a human could devise a test for intelligence that allowed blaggers and charlatans to pass.

      maybe that's the real test for an ai - the ability to devise a test that can't discriminate between intelligence and clever mimicry.

    5. h4rm0ny

      Re: Not Turing's finest hour

      >>"but he hadn't grasped the open-ended nature of human conversation, with its almost infinite number of possible variations"

      At this point we find one of the biggest problems with the Turing Test. Not the limits of AI, but the limits of experience. We could create a machine that never made a misstep in terms of language use (in theory) but which we could still identify as a machine through the limitations of what it was familiar with.

      Consider the following:

      Questioner: "Where did you grow up?"

      Respondent: "I grew up in Manchester".

      Syntactically and stylistically a correct answer. However, if that is a machine responding we then get the following:

      Questioner: "Was it sunny there?"

      Respondent: "Oh yes, all the time."

      Again, syntactically and stylistically correct, but the machine just doesn't know. We are identifying it as a machine not because of lack of intelligence, but because it is forced to lie because it doesn't have the same repertoire of facts and experiences as a real human being.

      So the Turing Test really needs to be re-defined. If a machine can pass on the components of understanding questions and using language correctly, then that should count as a pass. Incorporating experience into the test pushes any possibility of passing back to absurd levels, regardless of the sophistication of the program.
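
      That "forced to lie" failure mode takes only a few lines to reproduce: a canned-response bot answers experience questions fluently while fabricating every fact. A toy sketch (the patterns and answers are invented for illustration):

```python
# A toy canned-response bot: syntactically fine answers with zero
# grounding. It "passes" on form while fabricating experience it
# cannot possibly have.
CANNED = {
    "where did you grow up?": "I grew up in Manchester.",
    "was it sunny there?": "Oh yes, all the time.",
}

def respond(question):
    # Normalise and look up; fall back to a vague deflection, another
    # classic chatbot trick for papering over missing experience.
    return CANNED.get(question.strip().lower(), "Why do you ask?")
```

      The fluency is real; the Manchester childhood is not, which is exactly what gives such a program away.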

      1. Squander Two

        Manchester @ h4rm0ny

        > We are identifying it as a machine not because of lack of intelligence, but because it is forced to lie because it doesn't have the same repertoire of facts and experiences as a real human being.

        While I appreciate your broader point, I don't think this example shows that we're dealing with a machine at all. It might show that we're dealing with a liar. Or just some confusion: for instance:

        Questioner: "Where did you grow up?"

        Respondent: "I grew up in Manchester".

        Questioner: "Was it sunny there?"

        Respondent: "Oh yes, all the time."

        Questioner: "In Manchester? Are you on crack?"

        Respondent: "Oh, sorry, are you English? I meant Manchester, Tennessee."

        At which point we have to go and check whether such a place even exists and what the weather is like there, and even then we still don't know whether we're dealing with a machine or a liar. Exactly these kinds of tests are used to spot spies, after all.

        Which raises the very interesting subject of what kind of knowledge, if any, is inherent and universal to being human, and whether we can base questions on that knowledge in such a way that right or wrong answers are easily spottable. I think the best candidates are probably things like "Tell me about the last time you stubbed your toe" or "Do you prefer Spring or Summer?" But only as long as we assume intelligence has to mimic humanity. An AI is certainly conceivable that is genuinely intelligent but has very little shared experience with us. In which case, what the fuck do we ask it to verify that it's intelligent?

        This is where the Turing Test is quite perceptive, I think: it recognises that we may not actually have the ability to recognise intelligence per se.

        1. Sir Runcible Spoon

          Re: Manchester @ h4rm0ny

          I would find an AI more intelligent if it started asking the questions instead of just responding.

          My personal on-screen favourite AI has to be Jarvis from Iron Man, nicely sarcastic with a comic sense of timing.

  4. Anonymous Coward
    Anonymous Coward

    Wouldn't it be beautifully ironic...

    ... if one day a cyborg materialised from the future and shot him?

    1. Steven Raith

      Re: Wouldn't it be beautifully ironic...

      If Kevin Warwick is the saviour of humanity, a la your Terminator-esque scenario, then quite frankly, we deserve to die.

      We all deserve to die.

      Steven R

    2. h4rm0ny
      Joke

      Re: Wouldn't it be beautifully ironic...

      >>"... if one day a cyborg materialised from the future and shot him?"

      Even better if just before it fired it let out an anguished "Nooooo! You are not my father! That's impossible!"

  5. Anonymous Coward
    Anonymous Coward

    I am so glad I studied CompSci at a different university. Reading was my second choice at the time. Talk about dodging the bullet of being academically associated with such charlatanism.

    1. Uncle Slacky Silver badge

      Same here (except Physics was my first choice). To be fair, that was (just) before he arrived at Reading. The old Cybernetics lab had a great mad scientist/boffin vibe, though.

    2. Nick Ryan Silver badge

      I "dodged" that one as well... specifically the cybernetics course at Reading. In the end I avoided AI as much as I could because I quickly considered that none of what was being taught as AI was in fact AI: at best it was Logical Reasoning.

      As for Professor Warwick, I consider that he's a very good promoter of the subject, rather over-enthusiastic at times, and he does, in his own way, raise the profile of a lot of interesting problems that deserve attention - for example the boundaries between human and machine. Eccentric, outspoken, often technically wrong, but largely harmless.

      It would be interesting if after all this time he could be persuaded to directly speak with El Reg...

      1. Sir Runcible Spoon

        Dodge the bullet

        I didn't :(

  6. Crazy Operations Guy
    Joke

    Screw artificial intelligence

    We should be finding a way to install actual intelligence into Captain Cyborg.

    1. Anonymous Coward
      Anonymous Coward

      Re: Screw artificial intelligence

      "We should be finding a way to install actual intelligence into Captain Cyborg."

      He's getting paid to produce this rubbish. Most people have to do real work to earn a wage.

      Who's the intelligent one?

      1. Primus Secundus Tertius

        Re: Screw artificial intelligence

        Q: Who's the intelligent one?

        Good question. Before I retired, I felt jealous of people drawing large salaries as professors of Marxism while I was relegated to honest toil.

      2. Anonymous Coward
        Anonymous Coward

        Re: Screw artificial intelligence

        "He's getting paid to produce this rubbish. Most people have to do real work to earn a wage."

        You obviously have not encountered the women in the HR department where I work.

  7. i like crisps
    Go

    Kevin Warwick..

    ...did he do some "moonlighting" as an Ewok in Return of the Jedi?

    1. Tom 38
      Stop

      Re: Kevin Warwick..

      Crikey, you are a cretin, aren't you? A gay joke and a dwarf joke in one thread, and nothing else useful to say? Why don't you try keeping your idiotic "humour" to yourself.

  8. frank ly

    I'm working on it

    "He installed a chip in his arm, for instance, and claimed that he had become the advanced guard of the Terminators thereby."

    I've installed lots of chips in my stomach and bathed them in a special alcohol solution as a fuel source. Nothing's happened yet, so I think they need more fuel liquid.

    1. Steven Raith

      Re: I'm working on it

      You're doing it wrong - you need a powerful fusion heat source.

      A decent phaal or north indian garlic chilli chicken should do it.

    2. Charles Manning

      Re: I'm working on it

      You need to present your findings like an academic would.

      Finish off the conclusions with the statement: More research is required.

      You might also suggest some alternative directions for future research, different types of fuel mix for instance.

      There's a whole lot of funding out there looking for good research projects; when it can't find any, it gets spent on finding out why people with long hair prefer carrots to peas.

  9. i like crisps
    Go

    "he installed a chip in his arm"

    So basically he's just like Robocop now.

    1. Anonymous Coward
      Trollface

      Re: "he installed a chip in his arm"

      Nah, it's just there to balance the one on his shoulder.

    2. Tom 35

      Re: "he installed a chip in his arm"

      If you pull his finger, the garage door opens.

      1. Fred M

        Re: "he installed a chip in his arm"

        Opening the garage door is exactly what I do with mine. (See video part way down the page.) More stuff to come, but I've only had it a couple of weeks.

        http://0xfred.wordpress.com/2014/05/23/my-nfc-implant/

    3. Anonymous Coward
      Anonymous Coward

      Re: "he installed a chip in his arm"

      Robocock

    4. Smallbrainfield

      Re: "he installed a chip in his arm"

      It has his name and address on it. It's so the vet can tell who Warwick is when he gets lost.

  10. User McUser

    Transcripts?

    Where can one obtain said transcripts? I would very much like to look at them and I imagine others would as well.
