Today in bullsh*t AI PR: Computers learn to read as well as humans (no)

Researchers from Microsoft and Chinese cyber-souk Alibaba separately claimed this week that their artificially intelligent software is as good as, if not better than, humans at understanding the written word. Journos fell over themselves to breathlessly report that, for instance, "ROBOTS CAN NOW READ BETTER THAN HUMANS, …

  1. ThatOne Silver badge
    FAIL

    Huh? IMHO, to learn to read you first have to learn the language encoded in the text you want to read, and we're still quite far from that. Everything else is just pattern matching.

    1. veti Silver badge

      "Just" pattern matching? Putting "just" in front of something doesn't make it simple.

      The computers have no idea what the words actually mean. The definition doesn’t matter. To the software, it’s all matrices of numbers linking similar strings of characters.

      How exactly does that differ from a child learning these facts in school?

      OK, so the computer identifies a string "Mexico". As it's exposed to more and more inputs, it learns that certain ideas are, more or less closely, associated with this string. It will learn that there is a "Mexican government", and "Mexican president", and "Mexican border" and people and history and whatnot, and from this it will know to categorise "Mexico" alongside "the US", "Canada", "China" etc. in the class of things that are called "countries". More specifically, it will learn of the things that are associated with Mexico in particular: the Spanish language, Aztecs, burritos, tequila, Zorro. As it grows more sophisticated and discriminating in parsing its inputs, it will note that many American sources talk very negatively of "Mexican immigrants", and of NAFTA, and much more.

      By that time, I'd say its "idea what the words actually mean" would be better than that of most humans.

      I realise that this level of understanding has yet to be demonstrated. But I'm not seeing any qualitative jump in getting from here to there. It's "just" a matter of feeding in more data.
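
      A toy sketch of the association-building described above - nothing but hand-rolled co-occurrence counting, resembling no real system. "Mexican" and "Canadian" come out "similar" purely because they fill the same slots in the same sentences; the program has no idea either one names a nationality:

          # Count which words appear near which, within a small window.
          from collections import Counter, defaultdict

          corpus = [
              "the Mexican government signed the trade deal",
              "the Canadian government signed the trade deal",
              "the Mexican president visited the border",
              "the Canadian president visited the border",
              "tourists drink tequila in Mexico",
          ]

          window = 2
          cooc = defaultdict(Counter)
          for sentence in corpus:
              words = sentence.split()
              for i, w in enumerate(words):
                  for j in range(max(0, i - window), min(len(words), i + window + 1)):
                      if i != j:
                          cooc[w][words[j]] += 1

          def similarity(a, b):
              # Overlap of association profiles - the only "meaning" the model has.
              shared = set(cooc[a]) & set(cooc[b])
              return sum(min(cooc[a][w], cooc[b][w]) for w in shared)

          print(similarity("Mexican", "Canadian"))  # 6 - high overlap, same "slots"
          print(similarity("Mexican", "tequila"))   # 0 - no shared contexts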

      1. RJG

        missing the problem

        You're not quite seeing the problem.

        In the given example the "AI" sees the answer "the Mexico-US border" as correct.

        The answer "The US-Mexico border" it would see as wrong.

        It doesn't understand the meaning of the words, it just does basic pattern matching.
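
        A toy illustration of the point - naive exact-string scoring, not the actual benchmark metric, but the failure mode is the same in spirit:

            # Naive exact-match scoring: two phrasings of the same border disagree.
            def exact_match(prediction: str, gold: str) -> bool:
                return prediction.strip().lower() == gold.strip().lower()

            gold = "the Mexico-US border"
            print(exact_match("the Mexico-US border", gold))  # True  - marked correct
            print(exact_match("the US-Mexico border", gold))  # False - same border, marked wrong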

        1. jmch Silver badge

          Re: missing the problem

          "In the given example the "AI" sees the answer "the Mexico-US border" as correct

          The answer "The US-Mexico border" it would see as wrong."

          Absolutely. And if the question asked was "what country is adjacent to California?" without reference to the keywords 'south' or 'border', a human would work out the answer while (this) computer would be stumped.

          1. Anonymous Coward
            Anonymous Coward

            Re: missing the problem

            And if the question asked was "what country is adjacent to California?" without reference to the keywords 'south' or 'border', a human would work out the answer while (this) computer would be stumped.

            But if you asked someone from Nebraska "what country is adjacent to Texas?" they might say "New Mexico."

          2. Daniel von Asmuth
            Headmaster

            What is the name of the Mexico-US border?

            Let's guess: it's the 'Great Trump Wall'.

            At least the border between India and Pakistan seems to have a proper name, the 'Radcliffe Line'.

      2. John Smith 19 Gold badge
        Unhappy

        "But I'm not seeing any qualitative jump in getting from here to there. "

        The fact you don't means you wouldn't understand the answer.

        Such a level of both ignorance and enthusiasm.

        Do you actually work for a company trying to sell this?

      3. Lee D Silver badge

        If it was "just a matter of feeding in more data", Google would have the world's best AI running across their datacenters already.

        Sadly, it's not that simple. "AI" as you know it at the moment is just the same as it ever was... progress in the field is limited, and what progress there has been has come mainly from commodity hardware. But what they've found is that - though they can throw much more parallel, much faster, much more powerful, much more prevalent, much cheaper hardware at it - it doesn't change the fundamental nature of what it is: a statistical model.

        Statistical models are not "intelligent", and they don't "learn" as you expect. Quite often 99% of the gain is in the first 10% of the training, and then very little else changes, and it takes much longer to "untrain" the model in order to show it exceptions it has never seen before. And, at the end of the day, nobody is quite sure what it has trained itself to do at all. It might be statistically correct most of the time, but that isn't learning.

        If it were just a case of throwing more hardware and time at it (time being much more important, I would posit, literally just training it 24/7 for decades), then we would have a Bitcoin-like economy where companies were fighting to throw as much time and power at a basic AI as they could to be the one with the most well-trained AI. Amazon and Google would lead the entire scientific field. Places like CERN would exist just to train AI en-masse.

        But that's not how it works. Or how the technology works. Or how we even believe it could work. All the "AI" you know isn't... it's closer to a heuristically-determined expert system. We've had those since the 1960s, and though computing power has increased by factors of BILLIONS in some circumstances since then - not to mention that that's just a single computer, and the ability to scale the AI to billions of computers exists - it hasn't really got much better at all.

        It's like saying that the way to train a child is to throw as many books as possible in its direction. Literally bury the poor sod under literature and expect him to be an expert in everything from Shakespeare to quantum mechanics. Kid not smart enough? MORE BOOKS! Kid can't read yet? MORE BOOKS! Kid gets something wrong? MORE BOOKS! Kid biases towards a certain answer? MORE BOOKS!

        That's not how it works with real intelligence, and it's certainly not how it works with what passes for AI.

        Everything "AI" you ever seen, from Alexa and Siri to artwork-creating robots, Google image detection, whatever you've seen at CES or any other show: It's the same thing. A statistical model, trained on a data stream that, after a very short period of time, has increasingly poor gains for the time/effort/resource/training it requires to add on criteria or more data. Literally they plateau very quickly after becoming vaguely useful, and then progress drops to nothing.

        And without the human-led training, they are even worse. I can knock up some Java code - like many of my peers from CS courses did in the 90's - to show you neural nets, genetic algorithms, all kinds of stuff that will demonstrate "learning" behaviour. Right up to the point where you need it to do something slightly complicated. At which point the returns diminish to nothing.

        There's a reason that most of the AI in the field lasts precisely the length of a PhD research project and then dies a death - do it, get results, realise that's the best you're ever going to get, write a paper, run away from the entire field.

        1. John Smith 19 Gold badge

          "to artwork-creating robots, "

          Funny you should say that.

          At least one artist programmed a system to be a creative artist, using a plotter as its canvas. Interestingly, its output was in his style.

          Likewise, the author of one of the theses (looks wrong but I can't be ar**d to check if it should be "Thesi" or even "thesii") on the idea of (basically) "English as a computer language" in the mid 70's retired (at UC London) and retrained as an art historian.

          1. Named coward

            Re: "to artwork-creating robots, "

            @John Smith: The plural of thesis is theses, whether you're writing in English or trying to use the Latin plural.

            1. John Smith 19 Gold badge
              Unhappy

              "The plural of thesis is theses, "

              As I noted it looked wrong, but in truth I didn't think it was, and I could not be bothered to check further.

              Thanks for confirming I was right and I'll keep it in mind.

              1. Anonymous Coward
                Anonymous Coward

                As I noted it looked wrong

                That's why we nailed those theses to the door, Martin.

        2. Anonymous Coward
          Anonymous Coward

          "doesn't change the fundamental nature of what it is: A statistical model."

          Indeed. And no matter how clever those models are or what algorithms they use (SVMs, decision trees, linear regression, neural nets), those models only do one thing. Train a model to recognise faces and these days it can. Brilliant. Now let the same model look at pictures of cars and it won't have a clue. Yes, you could train it on cars too, but the more you train it on diverse subject matter the less accurate its predictions become.

          Currently there is no algorithm or neural net that even gets close to representing the human brain. In fact, current neural nets aren't even structured in the same way. Whether that matters remains to be seen, but something fundamental is still missing. You can't just point a neural net at the world and say "go on, learn about and contextualise your environment" the way an insect can manage, never mind a human. And this problem won't be solved by ever faster hardware. There needs to be a fundamental structural change in the way these systems work.

          The singularity is a long way off yet.

      4. jmch Silver badge

        "How exactly does that differ from a child learning these facts in school?"

        The truth is that even the most advanced research into neuroboffinry* has very little insight into exactly how children learn.

        *you know what I mean

        1. Anonymous Coward
          Anonymous Coward

          *you know what I mean

          I do. But those AI programs wouldn't.

      5. Anonymous Coward
        Anonymous Coward

        "As it's exposed to more and more inputs, it learns that certain ideas are, more or less closely, associated with this string."

        I think you are conflating text with idea. They are not at all the same. Reading the article, I'm not seeing anything beyond pattern matching and text recognition.

      6. Eddy Ito

        How exactly does that differ from a child learning these facts in school?

        While this may often be how standardized testing works in schools, actual learning is a bit more complex. Learning to walk, for instance, is something people do, but it's not something a machine needs to do, as it's easily programmed. It never needs to know what it's really doing, just react to a series of inputs; it doesn't know what walking even is, other than a set of routines. This is the same thing. The only difference is that it has to do some OCR to recognize "words" and keep it all in a buffer. Then it's getting a question that, to it, might as well mean 'give me the text nearest the words "Southern California" and "abbreviated"'.
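
        Something like this hypothetical five-liner captures the whole trick - keyword spotting plus "grab the adjacent text", with no comprehension anywhere:

            # Toy "reading comprehension": return whatever follows the keyword.
            passage = ("Southern California, often abbreviated SoCal, is a geographic "
                       "and cultural region of the U.S. state of California.")

            def text_after(passage: str, keyword: str) -> str:
                words = passage.replace(",", " ").split()
                for i, w in enumerate(words[:-1]):
                    if w.lower() == keyword.lower():
                        return words[i + 1]
                return ""

            print(text_after(passage, "abbreviated"))  # 'SoCal' - full marks, zero understanding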

        The amazing part to me is that the human score is so low but I'd imagine most folks would be pert near perfect if they got to keep the entire text in an exact "buffer".

    2. Primus Secundus Tertius

      "just pattern matching"

      In other words, "all you have to do is..." [recreate the results of 500 million years of brain evolution]

  2. Anonymous Coward
    Anonymous Coward

    Same type of low-hanging-fruit reporting from CES

    Another Reg article recently called out the MSM as "clickbait-chasing journalists recycling press releases". Even the BBC is at this game! Compare these two articles about CES. Easy to tell which is fake news:

    .....Tech preview of the show's coolest new products:

    http://www.bbc.co.uk/news/technology-42574569

    .....Left wondering whether AI is a triumph of marketing:

    http://www.bbc.co.uk/news/technology-42619807

  3. Sampler

    As someone whose day job it is to keep an eye on such things, I'm deeply saddened by the progress.

    I work on the tech side of market research, and having computers read and categorise lengthy human comments is a goal of our company, as our competitors offer it. Only, our competitors use the tools available, and not one of those, in my view, is accurate enough for me to be comfortable selling it. They don't care, though; as the old adage in market research goes, "everyone lies, but it doesn't matter, because no one is listening".

    I was hoping from the article headline that we might be a step further; it appears we're a step to the side instead.

  4. Johnny Canuck

    What if you asked it "What country borders California" or "Where is the border of another country in relation to California"?

    1. veti Silver badge

      Type "what country borders California" into Google, and it returns the correct answer highlighted in the top search result. Is that AI?

      (Answer: no, it's general knowledge, exactly what Google excels at. But it demonstrates that it's parsed the question correctly, and is able to identify the information that constitutes "the answer".)

      1. Anonymous Coward
        Anonymous Coward

        That's the simplest possible type of question you could ask. A simple fact. Ask it "what rivers run through California and another state?" and Google is clueless. The closest it comes to being useful is providing a Wikipedia link to the list of rivers in California. You can click each one, one by one, and see if it runs through another state to compile your list. Google isn't able to do that for you, even though it is something even a four year old could accomplish.
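
        The river question is a join, not a lookup - trivial given structured data (toy, hand-made table below; the hard part is that no such table exists in the free text Google searches):

            # Hypothetical mini knowledge base: river -> states it touches.
            rivers = {
                "Colorado River": ["California", "Arizona", "Nevada", "Utah", "Colorado"],
                "Klamath River": ["California", "Oregon"],
                "Truckee River": ["California", "Nevada"],
                "Sacramento River": ["California"],
            }

            # "What rivers run through California and another state?" as a filter.
            multi_state = [name for name, states in rivers.items()
                           if "California" in states and len(states) > 1]
            print(multi_state)  # ['Colorado River', 'Klamath River', 'Truckee River']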

      2. DropBear
        WTF?

        " it's parsed the question correctly"

        Absolutely not. It merely noted that "country" tends to be statistically closely associated with "state", and that in the Wikipedia article most people statistically tend to click on in the search results there is a sub-heading containing both "border" and "states", and it quoted the first paragraph from that. "Parsed"...? It parsed precisely nothing.

  5. Anonymous Coward
    Anonymous Coward

    What is the name of the border to the south?

    Berlin.

    1. David 132 Silver badge

      Re: What is the name of the border to the south?

      Burma!

      (I panicked.)

  6. TReko
    Unhappy

    Clippy version 2.0?

    The fact that it doesn't work won't stop Microsoft from using it to field tech support queries, and save money on real people doing real support.

    I hate these pop-up virtual assistant windows on tech support websites that pretend to be people. They don't work.

    Microsoft's support forums in recent years seem to be filled with boilerplate, unhelpful answers, too. Google, Quora or Reddit are normally way better.

    Thanks for the clear explanation, as usual, el Reg!

    1. Anonymous Coward
      Anonymous Coward

      Re: Clippy version 2.0?

      Yes, thanks for the explanation.

      When I saw the headline yesterday (or whenever) I remember thinking for a moment, "Oh, that sounds important". Then I thought for a little longer, and my ***comprehension*** kicked in. So I just moved on.

      By the way, terrific illustration! A little reminiscent of Dilbert's robot "coworker". (Does that word remind anyone else of bovines?)

  7. Pete 2 Silver badge

    Does an AI's lips move when it reads?

    > The answer to every question is explicitly contained in the text. It's not so much reading comprehension as text extraction. There is no real understanding of the prose by the machines; it’s a case of enhanced pattern matching. Human beings are smarter than this.

    Errrrr, some human beings are smarter than this. I would suggest that there are millions (in the UK alone) who are not. It is entirely likely that many of their jobs will be at risk.

    While the Turing test is intended to compare AI capabilities against human ones, it does not imply that all humans would be able to provide responses at a sufficiently high level to be deemed "human".

    1. katgod

      Re: Does an AI's lips move when it reads?

      A thought: to be human may be to have the wrong answer some of the time. Having the wrong answer may lead to more thought, which could lead to more insight. Of course, having the right answer won't always lead to someone accepting your truth, and of course if you always believe what you hear then you are often misled.

  8. John Smith 19 Gold badge
    Holmes

    I'd be impressed if they could learn to *proof read* better than humans...

    Given El Reg's story about the nanny with buttplugs at the Whymo engineer's house.

    Here's an idea, El Reg.

    Feed the story through a text to speech engine and listen to it.

    What your writer's eye has skipped over, your editor's ear will (probably) hear as a very dud note indeed.

    Hear the phrase "the locked were changed" and you should immediately be thinking "WTF?" - or "Warning: noun/verb disagreement. Parse failure," if you've just waded through a load of English-parsing AI papers and are wondering why the answer to "How do you parse 'police police police'?" is not "WTF are you talking about? Explain yourself better."

    1. Primus Secundus Tertius

      Re: I'd be impressed if they could learn to *proof read* better than humans...

      My dream of AI would be a computer system that could listen to a meeting through one or more microphones. Shortly after the end of the meeting it would email a coherent set of minutes to everybody concerned.

      It will have established who was present and which other people should receive the minutes. It will have taken remarks that were made in the "wrong" part of the meeting and put them into the correct section. It will present the pros and cons for each proposal rather than a verbatim rendering of each speaker.

      I wait, patiently but with little expectation.

      1. John Smith 19 Gold badge
        Unhappy

        I wait, patiently but with little expectation.

        I refer you to the opening titles of "The Prisoner" after #6 says "I am not a number, I am a free man," and the new #2's reply.

  9. Milton

    Mr Darcy's motivations

    The claims are actually far more outrageously hyped than the article suggests.

    True reading comprehension would offer you a few pages of Jane Austen, depicting some pivotal events, conversation and the author's inner monologue, and then ask questions.

    This should include stuff like "How long does it take to get from Longbourn to Netherfield?", because a human will assess the text for an in-context answer (by the means of transport of the historical era depicted). An AI is liable to be quite stupid enough to say "84 seconds", since that's how long it takes a Tesla at the legal speed limit to cover the distance.

    Less tongue-in-cheek, a human will know how to answer questions like "What are the contrasting character aspects of Mr Collins and Mr Darcy? Whom do you think Elizabeth Bennet respects more, and why?" "Does she like Mr Darcy as soon as she meets him? If not, what is her opinion?" "How does the Georgiana character influence the story, particularly Elizabeth's feelings and opinions of others?"

    You could make up a virtually endless list of questions, through which human readers will prove their humanity and intelligence by their answers—and which will leave so-called "AI" looking clownishly stupid.

    My point being (and quite unoriginal, I admit) that the difference between "extracting facts from text" and "comprehending the world and people and feelings" is an almost unbridgeably vast gulf.

    The AI hype—not only from marketurds who are paid to lie, but also the painfully earnest kind from technologists who completely fail to understand silicon's limitations—would be quite funny, if it weren't for the fact that so much of this torrent of drivel is taken seriously and reported on by media who, from their witless naivete, might as well be AIs themselves.

  10. John Smith 19 Gold badge
    Unhappy

    The 70's called. Conceptual Dependency could answer questions on children's stories back then

    The trouble with this idea is that on the surface it sounds quite impressive.

    But as others have noted, since the answer is directly embedded in the text, it really is a case of simple pattern matching.

    Humans have some quite amazing abilities where language (both spoken and written) is concerned.

    1) Learn a language without being taught a formal grammar, in the computer-language sense of the term (or often the rules of grammar, which is a different thing). This is fortunate, given that the various attempts to write full grammars for human languages are f**king huge.

    2) Parse without backup. This used to be thought impossible. Mitch Marcus, during the MIT "Personal Assistant" project (still waiting for one of those), demonstrated otherwise.

    3) Parse with minimal lookahead. Marcus reckoned you needed 3 "phrases" but later work at Edinburgh found that 2 were sufficient (but a phrase is not just a single word).

    4) Do it all without a complete dictionary of the language.

    Which is fortunate, given it would be obsolete the day it was declared "complete."

    Humans are really good at recognizing when something is a noun, verb or adverb, even if they've never seen that word before. "I just drumped a can of lager off Jack" and "Pass me the fribble, said Sarah" are legal sentences with (AFAIK) meaningless words in them, but you probably worked out that "drumped" is some kind of verb for some behavior (not quite sure what) and that "fribble" is a noun for something (not quite sure what). So you know enough to reason about something you don't actually know anything about. (There's a toy sketch of this guess at the end of this list.)

    5) Relating new stuff to what they already know or have experienced. With this, a person relates new material to what they know, building out a web of concepts as they need them, and relating the new stuff to the old.

    Note that property: "enough". If you know someone knows only one person called John, you (probably) know who they are talking about. If you know they know N people called John (or Jon, or Johanne), you know you're going to have to disambiguate and apply some context.

    Likewise, if you're a molecular biologist and someone gave a paper on plasmids, you'd know what they were talking about, as would a plasma physicist - except each would be thinking about a different kind of plasmid, and vice versa. This is one of the reasons why any approach starting "We build a complete lexicon of the language" is likely to fail, as is anything that does no spell checking/correcting (look at the number of short words where a single letter change changes the word, or the context of the sentence, completely).

    Human brains are multi-layer neural nets, but they have the ability to restructure that net based on new input: not just to add another word, but another whole concept, one that can provide leverage for all future sentences. Instead of (To be. Verb. Personal, Impersonal, Plural, Historical, Current, Future) it spawns a "subnet, verb detecting" to find one, and a "subnet, verb handling" to identify what it is and how to deal with it in terms of the person's view of the world (which I guess is what a verb "means").

    I don't think any software efforts have tackled this, and until they do they are always going to be limited.

    Note: despite all the dictionaries of hundreds of thousands of words that different systems have created or looked through, the fact is that about 50% of all English text is made up of 300-350 words, most of them fewer than 8 letters long. The "Semantic Primes" theory has identified 63 words (or concepts) that are universal across all known languages, and in which every other word or concept can be expressed (of course, it's taken since 1972 to build that list, which is a bit long for the average AI research grant application).

    Do you think, IDK, maybe figuring out how to leverage that fact, and those 300-odd words, could be quite a good idea?
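
    A toy version of the no-dictionary word-class guess from point 4 - a couple of hand-rolled slot rules, nothing like a real parser:

        # Guess the class of an unknown word purely from the word before it.
        def guess_class(sentence: str, unknown: str) -> str:
            words = [w.strip(',."').lower() for w in sentence.split()]
            prev = words[words.index(unknown.lower()) - 1]
            if prev in {"i", "you", "we", "they", "just"}:
                return "verb (probably)"   # follows a pronoun/adverb slot
            if prev in {"a", "an", "the"}:
                return "noun (probably)"   # follows a determiner
            return "no idea"

        print(guess_class("I just drumped a can of lager off Jack", "drumped"))  # verb (probably)
        print(guess_class("Pass me the fribble, said Sarah", "fribble"))         # noun (probably)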

    $Deity, wading through all that ancient AI s**t is depressing as f**k. The endless philosophizing and psychobabble. Computational linguists are worse than economists for forming tribal groups.

    I really do need to get a life.

    1. tiggity Silver badge

      Re: The 70's called. Conceptual Dependency could answer questions on children's stories back then

      @ John Smith 19

      Re point 4.

      Indeed, Jabberwocky (the poem, not the film, obv!) is a great example of this. The "nonsense" makes "sense" because you can realise that e.g. Bandersnatch is a "living thing" (a proper noun) in context. Add to that the skilled creation of nonsense words that, due to similarity to parts of other words, can carry a targeted meaning: e.g. frumious (as in frumious Bandersnatch) can arguably give a feeling of the word encompassing (amongst possibly other meanings) furious, due to the superficial similarity.

      An AI would have a huge problem with that poem (& also not enjoy it!)

      1. John Smith 19 Gold badge

        Indeed, Jabberwocky (poem, not the film obv!) is a great example of this.

        Exactly.

        Although (AFAIK) no such thing as a "Bandersnatch" exists you can still do limited reasoning about what it is.

        So there's a this-is-a-noun process running, a stuff-we-know-about-nouns-whatever-they-are process and a stuff-we-know-about-nouns-of-living-things.

        That doesn't mean people don't build a big dictionary in their heads throughout their lives, but it does mean they don't need it to begin with.

    2. Anonymous Coward
      Anonymous Coward

      Re: The 70's called. Conceptual Dependency could answer questions on children's stories back then

      AI image recognition shows that "learning" without much context (or should that be with a ton of context from the real world, plus the ability to know what to focus on?) needs tons of memory and computation, even for a simple static 2D image.

      And those algorithms can be tricked with a single erroneous pixel. So something as subtle as language would need a lot, lot more.

    3. Martin
      Thumb Up

      @John Smith 19 Re: The 70's called. Conceptual Dependency blah blah...

      Excellent post - thanks.

      Just one more point. Human beings are remarkably good at deducing meaning from context - and one of your sentences with made-up words demonstrates that elegantly.

      "I just drumped a can of lager off Jack" - if I heard one teenager say it to another, it would be obvious that "drumped" is a slang word for "cadged". And I wonder if AI will EVER be able to do that.

      1. John Smith 19 Gold badge

        if I heard one teenager say it to another, it would be obvious...slang word for "cadged".

        Well strictly speaking you can think this is what it probably means, and if you're wrong you can reconsider, because that's what humans do with words they haven't seen before.

        The paradox with "intelligence" is we know a lot about the hardware it runs on, and we know it's a long way from the world of registers, RAM, MMUs and so on of anything approaching a conventional computer architecture (and by these standards all computer architectures are "conventional.")

        And yet we can split the task into (apparently) higher level functions that don't seem to map to the model neural networks we can run on computers.

        Hmm.

        Perhaps we should consider the idea that the human NN operates like a VM for some kind of "higher level" representation that breaks the problem into smaller parts?

        1. Anonymous Coward
          Anonymous Coward

          Re: if I heard one teenager say it to another, it would be obvious...slang word for "cadged".

          The human brain might be NN all the way down. It's just that current tech/hardware would require the "million processor" style setup to even begin to get close.

          How many brain cells vs connections are there again? All the "we only need X number of processors" claims always forget that the connectivity involved is usually closer to the internet than to a supercluster.

          1. John Smith 19 Gold badge
            Unhappy

            How many brain cells vs connections are there again?

            I think the commonly used numbers are 1x10^10 neurons with 1x10^14 connections. IOW one neuron could be receiving input from 10,000 other cells.

            IIRC the usual rule of thumb for MOS transistors is they can drive up to 10 other inputs (but in digital logic you normally size them to drive however many you know you're driving, which might be just a couple).

            Back in the day, 10 billion neurons was vastly beyond the simulating abilities of a computer; today, with server farms of vast numbers of machines, perhaps not. Note also that the highest-frequency brain waves are around 15Hz (although "clock frequency" is not really a useful concept in brains; there does not appear to be the equivalent of the "pacemaker" cell area found in a heart).

  11. T. F. M. Reader

    Did that without computers once

    Many years ago I had to sit an examination in a language that I did not know, written in a different alphabet (so no common erudition based on, say, Latin roots was applicable). At the time I knew the alphabet and maybe 20-30 words (unsure of the spelling in some cases). The exam was supposed to determine my level for further studies.

    The written exam consisted of a text one was supposed to read and some questions one was supposed to answer (just like SQuAD, judging by the description in the article). I could not read the text (I managed to read the title, and I recognized a few simple words). I could not understand the questions, either. However, for each question I was perfectly capable of finding the corresponding sentence in the text and copying it out as the answer.

    Result: perfect score. Without any reading comprehension at all.

    1. tiggity Silver badge

      Re: Did that without computers once

      I recall a similar test, albeit the language was Icelandic (which, due to repeated invasions of the UK, meant we had a few Nordic loan words / fragments in English use, so a few word meanings could be inferred before having to resort to other analytic processes).

  12. Grant Fromage
    Coat

    The usual potential "jobs threatened" stuff

    The jobs that should be threatened are those of the PR "creatives" (oxymoron) who generate this poo without getting a `wee slappin` for it. (As my old mate from Glasgow styles it.)

    Just IMHO..

    1. I ain't Spartacus Gold badge

      Re: The usual potential "jobs threatened" stuff

      Actually, the marketing is the creative bit here. It's the journalists / churnalists who could be replaced by an AI. All they're doing is changing the order of the press release a bit so it doesn't look like they copied it, including putting in quote marks a few bits from the part of the press release that is a faux interview with one of the researchers, to make it look like they've done an actual interview.

      Then they go to their house style guide that says AI stories are reported as "threat to millions of jobs" and paste that bit in.

      Then they just need to complete the piece, to show their journalistic integrity. This means you can't have a story from just one source! So you go to Twitter, find a relevant quote from someone, and paste that in. Job done.

      I'm sure someone could write code to do this in a couple of hours. That's assuming papers like the Express and Telegraph (that seem to have sacked most of their newsroom) haven't done this already...

      1. Primus Secundus Tertius

        Re: The usual potential "jobs threatened" stuff

        I was shown around the Telegraph on a visit a year or two ago. Or rather, shown into a gallery from which we could look out over the vast publishing room, full of computers and people but not a sub-editor to be seen.

        They seem to be lacking something as an employer. Whenever I read The Times, I recognise so many names as former Telegraph writers. Murdoch must be doing something right.

        I am surprised that El Reg's former journo, Chris Williams, has survived so long at the Telegraph.

  13. John Smith 19 Gold badge
    Coat

    But what about the day when it can read RSS, précis the feeds and write an article?

    Journalists.

    Be afraid.

    Be very afraid. *

    *No, I'm f**king with you. IRL this is as far away as fusion, and has been since about the first attempts at fusion.

  14. a_yank_lurker

    RI vs AI

    Real intelligence is the ability to make reasonable inferences from limited current data, previous experience, learning, etc. in order to make good decisions. In the case of any human language, real intelligence can understand meaning based on context, word order, etc., and can tell when two texts (in this case) give the same answer, or at least a plausibly correct answer. The southern border of California can be described as the US-Mexico, Mexican, California-Mexico, etc. border depending on context. A human would understand that they all refer to the same line on the ground.

    AI, correctly Absolute Idiocy, is just fancy pattern matching and database querying. There is no understanding of context, or of the possibility that there could be more than one correct answer to the question. Also, it has no ability to make reasoned decisions.

  15. Mike 16

    Which border?

    Sounds like the test would be a bit like Sister Mary Discipline, who would ding you (perhaps literally) for using a slightly different (but valid) word order, or even for not pausing the precise amount of time she associated with a comma in the One True Answer from the Catechism.

  16. hellwig

    What is the name of the southern border?

    Would the machine still have returned that answer? It would have to understand language to know that "southern" is logically equivalent to "to the south" in this context, even when re-ordered to precede the noun. That's how our human brains work.

    If we all talk like robots, should it not be easy for the robots to understand us?

  17. dbayly

    Another, more promising, approach

    There are other AI natural language processing approaches that are not based on machine learning type pattern matching. I am much taken with

    http://www.cortical.io/

    It's still arguable whether the program understands anything. But the deeper we dig into "intelligence", the more that seems arguable for humans too, IMO.

    1. John Smith 19 Gold badge
      Unhappy

      There are other AI NLP approaches..not based on machine learning type pattern matching.

      True.

      Cortical.io is not one of them.

      "Semantic folding" is very much based on chewing on a shed load of data to construct an N dimension vector and look for words that are "close" together in some sense. They are large binary sparse matrices, allowing much compression and the ability to run comparison using the sort of Boolean operators common in most MPU instruction sets, but they are absolutely in the "Throw a metric f**k load of text at it and something will come out" school. :-(

      Once I saw that, I immediately thought of the binary neural network in the facial recognition system developed at London University, called WIZARD. It's also got features in common with speech recognition approaches that use "time warping" to cope with words spoken at different speeds (something else most humans can cope with, up to a point).

      Nine pages into Cortical.io's 59-page white paper and my BS meter is redlining like a Geiger counter in the engine compartment of a Cold War-era Soviet nuclear submarine. Lots of verbiage, little insight. Why does the word "Autonomy" keep flitting through my mind as I keep reading?

      I smell a BFR.*

      *Big F**king Rat.

  18. brotherelf

    Let's face it,

    we've all done a helldesk session or two, we've all met those users. Whatever the merit of current "AI", it can't be worse at understanding and executing a lavishly illustrated step-by-step guide on how to set up out-of-office replies.

  19. JeffyPoooh
    Pint

    Human Brain versus A.I.

    The Human Brain contains quite a few "hardware co-processors"; it's certainly not 100% Neural Nets for higher order "learning". Any Biological Brain Boffin could probably give you a two-page list of the ones that they've found so far.

    Most A.I. researchers don't seem to have hoisted aboard this basic factoid. Which is why they're always about 40 years further behind than they naively believe.

  20. harmjschoonhoven
    FAIL

    It would get interesting

    if AI could suggest a sensible answer to the question "Do you beat your wife only on Friday?".

    1. John Smith 19 Gold badge
      Unhappy

      "Do you beat your wife only on Friday?."

      Indeed.

      Actually, as that's a query, the easy answer is to check the listener's internal model and answer from it.

      That is an NLP problem, and it illustrates that the question "What does a sentence mean?" reduces to "queries or updates on the listener's internal model of whatever it's about", even if the listener's internal model is "WTF are you babbling about?". No model, and no capacity to make a model, means the question (or any sentence) would be meaning-less.*

      Recognizing the question is actually a very tricky attack on your personal behaviour is rather more of a general AI question and (I'd suggest) much harder.

      *This should be pretty much SOP for NLP, but I've yet to find someone actually state it directly. I'd love a reference. I'm guessing Winograd did so, somewhere, but I've not had time to hit his thesis.
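
      For what it's worth, the "queries or updates on an internal model" idea fits in a dozen lines - a deliberately silly sketch, with the listener's model as a dict and a grammar of exactly two sentence shapes:

          model = {}  # the listener's internal model of the world

          def hear(sentence: str) -> str:
              s = sentence.rstrip(".?!").lower().split()
              if s[0] == "is":                    # query: "Is <subject> <property>?"
                  _, subj, value = s
                  if subj not in model:
                      return "WTF are you babbling about?"  # no model, no meaning
                  return "yes" if value in model[subj] else "no"
              subj, _, value = s                  # update: "<subject> is <property>"
              model.setdefault(subj, set()).add(value)
              return "ok"

          print(hear("Rex is hairy"))    # ok   - update to the model
          print(hear("Is Rex hairy?"))   # yes  - query against the model
          print(hear("Is Fido hairy?"))  # WTF are you babbling about?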
