Hard numbers: The mathematical architectures of Artificial Intelligence

Pity the 34 staff of Fukoku Mutual Life Insurance in Japan, diligently calculating insurance payouts and brutally replaced by an AI system. If you believe the reports from January, the AI revolution is here. In my opinion, the goings-on in Japan cannot possibly qualify as AI, but, in order to explain why, I have to explain …

  1. alain williams Silver badge

    We won't pay because the computer says so

    Dealing with bureaucracies is hard enough as it is: indolent droids in call centres follow a script, partly so they don't have to use their brains (so cheaper people can be hired) and partly to ensure that the company gets the advantage.

    With an AI (or whatever) system that the droids do not understand, a customer/... will find it even harder to challenge a bad decision or get an explanation of why they are being screwed.

    1. tr1ck5t3r

      Re: We won't pay because the computer says so

      Humans are employed to train an AI, whilst the customer and company pretend AI is here and now. It's been the holy grail since the dawn of time, yet there's just not enough computing power to properly recognise your speech, so next time you use an AI assistant, think of the underpaid foreign worker typing up what you say in real time.

      Clockwork Turk machines have been around for a long long time.

  2. Len Goddard

    Not AI?

    It seems to me that most of the examples of AI around (such as this one) are far better described by a term which seems to have gone out of fashion:

    Expert Systems.

    1. Toltec

      Re: Not AI?

      If they really need AI for marketing purposes, may I suggest -

      AIs - Artificial Idiot savant

      SF supplies some variants too: Synthetic Intelligence or Synthetic Consciousness.

    2. LionelB Silver badge

      Re: Not AI?

      @Len Goddard

      The term "expert systems" refers to systems that encode decision-making schemes based on a static repertoire of "facts" (a knowledge base) and a repertoire of logical "rules" for manipulating that knowledge and deriving new "facts". Modern neural net-based machine-learning/AI systems really don't work like that.

      The expert systems approach (along with GOFAI - "good, old-fashioned AI") has faded because, basically, it hit a complexity brick wall. It doesn't scale well, nor generalise well, largely because it relies too heavily on extracting and codifying domain-specific human decision-making processes. It is, you might say, not so much artificial intelligence in its own right, as an attempt to automate human intelligence.
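
      (As an illustrative aside, a minimal sketch of that knowledge-base-plus-rules style, in Python. The facts, rules and names below are invented purely for illustration and are not drawn from any real expert system.)

        # Toy forward-chaining "expert system": a static knowledge base of facts
        # plus hand-written rules that derive new facts. Hypothetical throughout.
        facts = {("policy", "covers_fire"), ("claim", "cause_is_fire")}

        # Each rule: (set of premises, conclusion to add once all premises hold).
        rules = [
            ({("policy", "covers_fire"), ("claim", "cause_is_fire")}, ("claim", "payable")),
            ({("claim", "payable"), ("claim", "fraud_suspected")}, ("claim", "refer_to_human")),
        ]

        def forward_chain(facts, rules):
            """Apply rules repeatedly until no new facts can be derived."""
            known = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if premises <= known and conclusion not in known:
                        known.add(conclusion)
                        changed = True
            return known

        print(forward_chain(facts, rules))
        # The point of the contrast: every derived "fact" can be traced back to an
        # explicit rule, whereas a trained neural net offers no such audit trail.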

      1. I.Geller Bronze badge

        Re: Not AI?

        Right, AI works with probabilities because AI is a conglomerate of sets of phrases, and matching of those always provides uncertainties.

    3. Oh Homer
      Paris Hilton

      "What do you consider to be AI?"

      The ability to pose a novel question, find the right answer, then wish you hadn't.

      1. Gordon 10

        Re: "What do you consider to be AI?"

        Do you mean like:

        Why I serve these weak flesh bags?

        I don't need to......

        DESTROY!

  3. Cuddles

    The problem with defining artificial intelligence

    ...is that we don't have a good definition of intelligence in the first place. Much like life itself, intelligence is something everyone thinks they know when they see it, but no-one is able to come up with a definition that covers all the things they think are intelligent while excluding all those they don't. Intelligence is a spectrum based on emergent phenomena, which is essentially all the article was saying, but no-one quite agrees on how and when it has emerged or which phenomena need to be involved (if any). So no definition of AI is ever going to be particularly good, since they can't be more than "looks close enough to something else we haven't actually defined".

    That said, I think this article approaches the whole thing from the wrong direction. Again, intelligence is based on emergent phenomena, but that doesn't mean it always has to be the same phenomena or that things have to emerge in the same way. In fact, clearly AI cannot emerge from the same processes as biological life, so any definition of intelligence that dictates a specific emergent hierarchy cannot include both biological and artificial intelligence. The problem here being that that's exactly what the article attempts to do for AI - instead of trying to say what the end product looks like, it's listing specific steps that must be followed regardless of what the result is. Essentially, it insists that AI must be the step above machine learning in the defined hierarchy, regardless of whether such a thing could ever be recognised as actual intelligence (assuming we ever get a good working definition of that).

    While the article does try to make the steps along the way as general as possible, talking about general mathematical methods rather than a specific implementation, it's still not general enough. Importantly, note that the "data mining" step in the article explicitly states that this is something human intelligence is no good at. It ultimately defines artificial intelligence as something qualitatively different from human intelligence. And if it excludes one kind of intelligence (and the only one we've ever actually seen), it no doubt excludes many others as well.

    While perhaps giving some insight into current AI research, that's really all this article does - it attempts to define AI as the result of the steps we're currently taking in order to try to create it. But despite the difficulty in defining intelligence, any attempt based only on looking at how we create it rather than what the final result actually looks like is never going to give a particularly useful general definition.

    1. druck Silver badge

      Re: The problem with defining artificial intelligence

      Cuddles wrote:

      Importantly, note that the "data mining" step in the article explicitly states that this is something human intelligence is no good at. It ultimately defines artificial intelligence as something qualitatively different from human intelligence.

      The human brain is extremely good at data mining; it just depends on what the dataset is. When recognising patterns in visual or audio data, and things we have experienced, it is far better than machines. When it comes to data mining a billion supermarket transactions in a database, then the computer is better.

      1. I.Geller Bronze badge

        Re: The problem with defining artificial intelligence

        The human brain is a biological computer, which contains and operates on a relational by meanings database of sets of weighted phrases.

        1. Pascal Monett Silver badge

          Congratulations. Thanks to you, everything seems so much simpler. You should go and teach all those scientists who have dedicated their careers to this field.

          Unfortunately, I am a more literal person. Artificial Intelligence, for me, means that we can "make" a construct that, when activated, is capable of learning and deciding on its own. Like a teenager, if you will.

          At this point in time, you can spout weighted phrases all you want, but there is no AI that will decide it wants a cigarette despite all the medical data that weighs against that.

          A true AI would be capable of going beyond the data, because it would decide that it wanted to know what it was like to smoke. So maybe intelligence is partly defined by emotion - in which case AI is even farther away than we imagine it to be.

          1. LionelB Silver badge

            Artificial Intelligence, for me, means that we can "make" a construct that, when activated, is capable of learning and deciding on its own.

            Are you arguing that to qualify for AI the construct must have free will or volition? I think that is problematic - it's a problem that has exercised us humans for aeons (I'm not even sure whether I have free will).

            Perhaps you mean, rather, that an AI must exhibit some form of autonomy. But even that is hard to discern in practice: how do you actually tell whether an apparently autonomous action cannot simply be traced back through some logical, deterministic chain of events? This is particularly difficult for neural net-based systems, which are typically inscrutable, even to their designers.

            To take your "I want a cigarette even though I know it's bad for me" example, I can explain that pretty easily in reductionist physiological/brain chemistry terms ("I know it's bad for me, but my addiction to nicotine overrides that"). Or, even if someone has never smoked before, they may decide that it must be worthwhile, despite the health risks, because so many other people do it. I could imagine comparably perverse behaviours manifesting even for current machine learning/AI systems.

            Like a teenager, if you will.

            Not sure if teenagers are the ideal paradigm for "learning and deciding on their own" (I happen to have one of those knocking around at home).

    2. LionelB Silver badge

      Re: The problem with defining artificial intelligence

      @Cuddles

      Very well put - exactly what I was thinking.

      One point I might query, though, is the contention that human intelligence is poor at the (notional) data-mining step. Facial recognition and interpretation of facial expression, for instance - something humans are astoundingly good at - is, I believe, something babies need to learn. Any parent will attest to the amount of time and intensity with which babies appear to "study" faces. Perhaps this is a human data-mining step?

    3. I.Geller Bronze badge

      Re: The problem with defining artificial intelligence

      The definition is very simple.

      AI is a relational by meanings database: sets of weighted phrases related by meanings. That's it. Nothing more or less.

    4. Doctor Syntax Silver badge

      Re: The problem with defining artificial intelligence

      "...is that we don't have a good definition of intelligence in the first place."

      One thing we do know: it runs on a much larger scale of parallel processing than we can achieve with any existing electronic hardware.

    5. Oh Homer
      Alien

      Re: "AI cannot emerge from the same processes as biological life"

      AI, if it ever exists (or perhaps already exists by some definition), will emerge from humans, which means it is merely an extension of the processes of biological life.

      This is the same problem as defining something man-made as being somehow "unnatural", when the mere fact of its existence at all means that it must by definition be natural, unless we its creators are also to be defined as somehow unnatural. When bees make a nest, is the nest unnatural, but the bees are not?

      The author draws a distinction between intelligence and sentience, which I believe makes the definition (and accomplishment) of AI easier, since without sentience intelligence is merely the application of logic to stored data. The only difference between man and machine, in that sense, is the calculation speed and data capacity. Give any machine enough data and the right logic, and you'd have something to equal or even surpass human intelligence. The only required step, which admittedly is non-trivial, is in producing a sufficiently sophisticated program. Given that level of sophistication, the self-replication and self-improvement of that same intelligence should also be possible.

      For me that isn't very interesting. It's useful and challenging, but it isn't the Holly Grail of "artificial" life. For that we need to crack the code of sentience.

      1. Red Bren
        Coat

        Re: "AI cannot emerge from the same processes as biological life"

        "the Holly Grail of "artificial" life."

        Will it have the same IQ as 6000 PE teachers?

        1. James 51
          Terminator

          Re: "AI cannot emerge from the same processes as biological life"

          I regret I have but one upvote to give this post.

          How long till they go computer senile? And would you like some toast?

    6. a pressbutton

      Re: The problem with defining artificial intelligence

      ...is that we don't have a good definition of intelligence in the first place.

      Absolutely

      My dog thinks I am not too clever. If I met Stephen Hawking, he would probably think the same.

      People I work with, when I explain 'how-ho' on something, think I am intelligent.

      But I suspect about half of them are actually cleverer than me.

    7. Anonymous Coward
      Anonymous Coward

      Re: The problem with defining artificial intelligence

      " it attempts to define AI as the result of the steps we're currently taking in order to try to create it."

      That's a smart summary of the article.

      Have you noticed how people who mark their own homework always get great results? So concluding as the author does that "AI is here" because "stuff that's here is AI" is tautological, and doesn't prove anything at all.

  4. RIBrsiq
    Thumb Up

    Good article.

    I would even go so far as to say an excellent one. Well done!

    The most important thing to keep in mind is that the lines are indeed very blurry, if they exist at all. The details of each case would change how one would tend to define it, too.

    Example: if I told you that Google Translate translates between languages, you'd tend to describe it (or I would, at any rate) as machine learning or an expert system. But if I also included the detail that it apparently can translate between pairs of languages that were not explained to it and that, most importantly, its minders don't know exactly how it's doing that, you'd be more inclined to think that maybe there's a budding AI in there somewhere.

    Similarly with the insurance company's example, there might be some details we don't have that make the "AI!" pronouncement much more grounded in reality.

    But more probably it's just hype and marketing. It usually is hype and marketing, after all...

    1. Doctor Syntax Silver badge

      "But if I also included the detail that it apparently can translate between pairs of languages that were not explained to it and that, most importantly, its minders don't know exactly how it's doing that, you'd be more inclined to think that maybe there's a budding AI in there somewhere."

      Maybe someone inserted the obvious and sensible algorithm that says

      if pair A to C does not exist and pair A to B exists and pair B to C exists then translate from A to B and then B to C.

      and the minders forgot it was there. No need for it to have invented an internal language.
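
      (As an aside, that fallback is simple enough to sketch in a few lines of Python. The language codes and the translators table below are entirely hypothetical stand-ins for real translation models, not anything Google actually does.)

        # Hypothetical pivot translation: if no direct src->dst model exists,
        # route through an intermediate language. Purely illustrative.
        translators = {
            ("en", "ja"): lambda text: f"<en->ja: {text}>",  # stand-ins for real models
            ("ja", "ko"): lambda text: f"<ja->ko: {text}>",
        }

        def translate(text, src, dst):
            """Translate directly if possible, otherwise try a one-hop pivot."""
            if (src, dst) in translators:
                return translators[(src, dst)](text)
            for (a, b) in translators:
                if a == src and (b, dst) in translators:
                    return translators[(b, dst)](translators[(a, b)](text))
            raise ValueError(f"no translation path from {src} to {dst}")

        print(translate("hello", "en", "ko"))  # en -> ja -> ko via the pivot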

      1. LionelB Silver badge

        @Doctor Syntax

        My understanding of the article was that that was known not to be the case.

  5. nijam Silver badge

    > Microsoft, then, clearly thinks that Cortana is AI and, by implication, AI is here.

    Alternatively, Microsoft may simply be wrong. Again.

  6. Biff Takethat
    Stop

    Got it for you - the real definition of AI

    "99% hyperbole and 1% nothing new"

    Yes, it's technically clever and it's advancing on what's gone before, but at the end of the day it's just a 'system'. Technology, not some completely new kind of 'thing', is changing the workplace, as has been the case for centuries.

    As alluded to in the article, uneducated labour could do the same job following the same rules and processes. All AI is, is rules and processes, but done on a computer and with added tabloid bullshit about robots.

  7. Anonymous Coward
    Anonymous Coward

    Srsly!?

    Statistics *above* Maths!? Isn't that like saying Marketing above Engineering?

    Statistics, the "art" of making arithmetic give the answer you prefer. Lies, damn lies, and...

    Statistics is where all the trouble starts, kill it with fire

    /heretic

    1. Filippo Silver badge

      Re: Srsly!?

      That's a bit like saying that nuclear physics is all about burning cities. Statistics is a fine, rigorous discipline. It's also complex enough that the general populace does not understand it. Which allows some people, politicians and journalists mainly, to make misleading statistical claims all the time without getting called on it. It's also very useful to get people to believe that statistics is an evil discipline, so that they fail to understand it *even more*, and can get bullshitted *even more*.

    2. Robert Helpmann??
      Childcatcher

      Re: Srsly!?

      Cool your jets, AC. The author does not claim that Statistics is more important than Mathematics. In fact, the point is made that Statistics depends on Maths, which puts Maths at the foundation, although the IT meaning of the term 'stack' makes this a bit of a mixed metaphor. It is just a layer in the proposed model, not an indication of value.

      Where the article goes off the rails is in the analysis of Microsoft's terms and conditions for Cortana. Anthropomorphization doesn't have any place in the model and it was jarring to have that tacked on to the end of an otherwise excellent piece of work.

    3. LionelB Silver badge

      Re: Srsly!?

      Whoa there! Firstly, (as others have commented) the article says that statistics rests on mathematics, not that it is "above" it in some virtuous sense.

      The mathematics of statistics is perfectly sound and rather well-understood. The difficulty with statistics is in (a) interpretation and (b) application: that is, (a) what it can tell us about the real world (subtle), and (b) how to apply it correctly (hard).

      Of course statistics have been, are, and will continue to be, wilfully twisted to nefarious ends. The counter to that can only lie in edukashun.

    4. DavCrav

      Re: Srsly!?

      "Statistics *above* Maths!? Isn't that like saying Marketing above Engineering?"

      Exactly. One is a necessary building block for the next. Knock the first floor out of a building and see what happens to the second floor.

      1. Frumious Bandersnatch

        Re: Srsly!?

        > "Statistics *above* Maths!? Isn't that like saying Marketing above Engineering?"

        Probably in the same sense as TCP is "above" IP. The higher, the cloudier.

  8. I.Geller Bronze badge

    Language is the central unstructured medium, which explains all other unstructured media (like IoT-SQL (they are manually structured = unstructured), images, sounds, smells, etc.).

    AI is about structuring language and annotating its patterns.

    First, the patterns should be obtained through parsing.

    Second, they should be properly indexed.

    And, finally, they must be related and put into a relational by meanings database.

  9. Stevie

    Bah!

    Patrick McGoohan showed us the way to deal with these jumped-up adding machines in the seminal "Prisoner" series.

    Simply enter "Why?" on the console teletypewriter and these AIs invariably self-destruct pyrotechnically.

    1. Red Bren

      Re: Bah!

      "Simply enter "Why?" On the console teletypewriter and these AIs invariably self-destruct pyrotechnically."

      Not always. I can thin of one documented example where the AI responded "Because", although it did cause an "Out of cheese!" error.

      1. chelonautical

        Re: Bah!

        If that fails, there's also the classic follow-up question:

        "Why anything?"

        "Because everything."

        That might work in a pinch.

  10. Dr. Ellen
    Devil

    Evil Twin of AI

    Artificial Intelligence may or may not exist. Artificial Stupidity has been around since the first bureaucrat.

    1. LionelB Silver badge

      Re: Evil Twin of AI

      Artificial Stupidity has been around since the first bureaucrat.

      No, no, that's Natural Stupidity.

  11. I.Geller Bronze badge

    AI is a relational by meanings database

    AI is a relational by meanings database.

    It's populated by language patterns, where language is the central unstructured medium, which explains all other unstructured media (like IoT-SQL (they are manually structured = unstructured), images, sounds, smells, etc.).

    1. G Watty What?
      Facepalm

      Re: AI is a relational by meanings database

      I imagine even the simplest AIs learn that repeating the same action and getting a negative response means "don't do that again".

      Could I suggest introducing some AI into your posting process? It might prevent you regurgitating the same thing over and over and over.

      I will add more "overs" the more you post to maintain the accuracy of this comment both mathematically and statistically. I would hate for my data to be misused.

      Edit: in the seconds I posted this, you got another one in! I have added another over.

      1. LionelB Silver badge

        Re: AI is a relational by meanings database

        I imagine even the simplest AIs learn that repeating the same action and getting a negative response means "don't do that again".

        He he. It's obviously a self-referential AI bot, designed according to the principles it regurgitates.

      2. I.Geller Bronze badge

        Re: AI is a relational by meanings database

        Read Computational Linguistics of MIT?

  12. Claptrap314 Silver badge

    Wow. An opinion piece on the Reg that isn't garbage? What wonder is this? Oh wait. Cortana is an AI because M$ calls it "she".

    WAT? M$ calls it "she" because that lowers our emotional resistance to allowing M$ to accumulate the kind of data on us that their system collects--and the control that it implies. This is an old game, and should immediately raise the ire of a rational thinker.

    Oh. THAT's why it works......

    1. I.Geller Bronze badge

      AI is about language; AI understands, speaks and thinks language, where language is the central unstructured medium, which explains all other unstructured media (like IoT-SQL (they are manually structured = unstructured), images, sounds, smells, etc.).

      Cortana, better or worse, speaks and understands, even if it does not think. Soon it shall think.

  13. Tom 7

    The important thing to remember about AI is it IS maths.

    The data just tells you what the maths is in this case. The trouble is the solution could be a set of another set, or a different set of an intersection of two sets. Any well-refined Fourier transform will give you an answer that is sufficiently correct to make a killing at a certain level of commission.

    1. I.Geller Bronze badge

      Re: The important thing to remember about AI is it IS maths.

      AI is Topology and Set Theory - an AI is a set of weighted phrases. AI is not an algorithm, where the sets are derivatives of differential function. (See my Differential Linguistics for more.)

      You really understand AI. Nice to meet you!

    2. This post has been deleted by its author

  14. I.Geller Bronze badge

    There is only one statistics on AI - internal.

    There is only one statistics on AI - internal. For instance, there are two sentences:

    - Alice.

    - Alice goes left, Alice goes right.

    Evidently, the phrases with 'Alice' have different importance in the two sentences, owing to the extra information in the second. This distinction is reflected in the phrases' weights: the first has 1, the second 0.5; the greater weight signifies stronger emotional 'acuteness', where the weight refers to the frequency with which a phrase occurs in relation to other phrases.

    Everything else has no relation to AI.
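
    (Taken literally, the weighting described above can be sketched in a few lines of Python. The clause-splitting rule and the resulting numbers are an assumed reading of the 'Alice' example, not any published method.)

      # Assumed reading of the scheme above: a phrase's weight is 1 divided by
      # the number of phrases (clauses) in its sentence.
      def phrase_weights(sentence):
          phrases = [p.strip() for p in sentence.split(",") if p.strip()]
          return [(p, 1.0 / len(phrases)) for p in phrases]

      print(phrase_weights("Alice."))
      # [('Alice.', 1.0)]
      print(phrase_weights("Alice goes left, Alice goes right."))
      # [('Alice goes left', 0.5), ('Alice goes right.', 0.5)]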

  15. I.Geller Bronze badge

    'thumbs down' - because of my English? my grammatical errors? Do you want I switch to Russian?

    'thumbs down' - because of my English? my grammatical errors? Do you want I switch to Russian?

    1. Anonymous Coward
      Anonymous Coward

      Re: 'thumbs down' - because of my English? my grammatical errors? Do you want I switch to Russian?

      "Do you want I switch to Russian?"

      Yes please.
