Sorry to burst your bubble, but Microsoft's 'Ms Pac-Man beating AI' is more Automatic Idiot

Back in a bygone era – September last year – Microsoft CEO Satya Nadella told a developer conference: "We are not pursuing AI to beat humans at games." This week, we learned Redmond has done more or less that – lashed together a proof-of-concept AI that can trounce gamers at Ms Pac-Man, and snatch some headlines along the way …


  1. Lee D Silver badge

    Virtually nothing that says AI or "learning" actually is.

    It's all heuristics, instructions from programmers on "how to learn", in effect. And not at some basic coding level, but quite literally specified explicitly for the task at hand.

    AI, to me, is still interpreted in the same way as the old gaming adverts: "destructible environments" (so long as you don't go out of bounds, go too deep, shoot the critical plot structures, or actually expect it to turn to rubble), "realistic physics" (which is why you can make the enemy bounce a thousand metres in the air by getting him stuck on a door), "open-world" (so long as you don't try to go the opposite direction to your objective or mind being herded back in if you stray too far, and by the way, for mission 2 you have to go see John or you'll never get a mission 3 until you do).

    It's all rule-based and targeted. Google's AlphaGo strayed into something different, which is why it's newsworthy and pretty astounding. But you have to understand the game and the rules of the game to make those sub-agents do what you want in order to come to a decent play. And I guarantee you that the "master agent" isn't culling off useless sub-agents and creating unique ones of its own to try to fathom out the game.

    It's all hard-coded rules, left to run for a long time with an aim in mind. That's not AI or "learning", no matter how long you leave it running. Unfortunately, any sufficiently-advanced technology is indistinguishable from magic, so people do think that Siri is actually understanding them rather than some speech recognition that hasn't improved in decades (per cpu cycle), shoved into a search engine which returns colloquially-worded results.

    1. IT Poser

      Lee D,

      While I fully agree with everything you've said, I am left wishing that we had the ability to program humans how to learn, even in such a limited manner.

    2. De Facto

      Google's AlphaGo was not much different

      Monte Carlo Tree Search algorithm on a giant database of 30 million Go game moves, with similar weights increased or decreased for good or bad moves. About 100 man-years of computer scientists' work were needed to feed and train the AlphaGo database. Finally, massively parallel brute-force MCTS computing on many, many servers to find the best winning strategy. Against one human brain without the supercomputer capacity to go through billions of combinations in a few minutes. Calling it AI?
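      For what it's worth, the MCTS core described here fits in a few dozen lines. Below is a bare-bones UCT sketch on a toy Nim game — everything in it is invented for illustration, and AlphaGo replaces the random exploration below with policy and value networks trained on those millions of positions:

```python
import math
import random

class Nim:
    """Toy game: players alternately take 1-3 counters; taking the last one wins."""
    def __init__(self, counters, player=0):
        self.counters, self.player = counters, player
    def moves(self):
        return list(range(1, min(3, self.counters) + 1))
    def play(self, m):
        return Nim(self.counters - m, 1 - self.player)

def mcts(root, iters=4000, c=1.4):
    """Bare-bones UCT: selection (UCB1), expansion, simulation, backpropagation."""
    N, W = {}, {}                      # (state, move) -> visits / wins for the mover
    key = lambda s: (s.counters, s.player)

    def simulate(s):
        if s.counters == 0:
            return 1 - s.player        # previous mover took the last counter and won
        k, ms = key(s), s.moves()
        untried = [m for m in ms if (k, m) not in N]
        if untried:                    # expansion: pick an untried move at random
            m = random.choice(untried)
            N[(k, m)] = W[(k, m)] = 0
        else:                          # selection: UCB1 trades off win rate vs. curiosity
            total = sum(N[(k, m)] for m in ms)
            m = max(ms, key=lambda m: W[(k, m)] / N[(k, m)]
                    + c * math.sqrt(math.log(total) / N[(k, m)]))
        winner = simulate(s.play(m))   # simulation continues down the growing tree
        N[(k, m)] += 1
        W[(k, m)] += (winner == s.player)   # backpropagation
        return winner

    for _ in range(iters):
        simulate(root)
    k = key(root)
    return max(root.moves(), key=lambda m: N[(k, m)])

# From 5 counters the winning move is to take 1, leaving a multiple of 4.
print(mcts(Nim(5)))
```

      The search itself knows nothing about Nim strategy; the "leave a multiple of 4" rule emerges purely from the win statistics.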

    3. JT_3K

      I mean, I had a friend in my dorm in 2005 that wrote a self-learning neural network for chess in Java. I played it fresh and destroyed it. He left it running against itself overnight and it got better. By the end of the week I couldn't beat it. It's nothing new. Tack on a visual processing unit that can recognise things like placement of the different coloured ghosts and the most "profitable" response based on proximity of each (they each were coded with different "personalities"), and pretty soon it's making better calls than a human. I concur, it's not AI, it's brute-force.

  2. Anonymous Coward
    Anonymous Coward

    It's not very good AI

    It was set up with rules on how to play and just explored those given principles.

    1. Paul Kinsler

      Re: It's not very good AI

      Let this be a lesson to all you humans out there - if you try to read up on any of the rules for your next activity of choice, no matter how well you perform, we will be able to claim that the rules-reading means you are not "Intelligent" but just "Learning".

      1. Lee D Silver badge

        Re: It's not very good AI

        Humans read rules, interpret them and voluntarily stick to them.

        Machines operate in an environment where the rules prevent them ever doing anything else, so it literally bounds their possible actions.

        Morpheus knew this: You will always be faster... because they live in a world built on rules.

        1. Destroy All Monsters Silver badge
          Terminator

          Re: It's not very good AI

          Machines operate in an environment where the rules prevent them ever doing anything else, so it literally bounds their possible actions.

          Well, the point is to allow the rules to be generated, discarded, rebuilt, recompressed, weighted, tuned, indexed, analyzed, briefed, debriefed etc. at runtime.

          So there really are no fixed rules, except for the most simple reactive systems. It's like with the spoon, of which you have to realize there is none.

          Morpheus knew this: You will always be faster... because they live in a world built on rules.

          Morpheus reads a script...

          (More in the area of "logic-based rule-based systems" as opposed to "let's mess around till it works rule-based systems", there is actually a nice softcover from Cambridge University Press by Robert Kowalski (who injected the mathematical logic part into Prolog) on what "rule-based systems and logical thinking" are about: "Computational Logic and Human Thinking: How to be Artificially Intelligent" (PDF here). Seems to be a good intro and a contender for a 21st-century update on all those books with titles like "how to think logically" etc. that go back to the 19th century at least ... well, the book known as the Port Royal Logic came out in 1683, I see.)

    2. badger31

      Re: It's not very good AI

      It doesn't have to learn to be AI. And machines learning doesn't make them (actually) intelligent.

  3. Field Commander A9

    I don't see a problem with hard-coded knowledge

    Even human babies (more so animal babies) come with a large collection of knowledge hard-coded in their DNA. What's wrong with that?

    1. Captain DaFt

      Re: I don't see a problem with hard-coded knowledge

      No problem.

      But the babies will go through life expanding their knowledge beyond their original instincts.

      This 'AI' is like the maze robot that's just hardwired to turn 90 degrees left when it encounters a wall. It navigates the maze, but learns nothing.
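      The hardwired maze robot is a telling contrast precisely because it really is just a few lines. A toy left-hand-rule version (the maze here is invented for illustration): it escapes any simply connected maze and learns nothing in the process.

```python
# A hardwired wall-follower in the spirit of the "turn left at a wall" robot:
# keep a hand on the wall by preferring left, then straight, right, reverse.
MAZE = [
    "#####",
    "#S..#",
    "###.#",
    "#E..#",
    "#####",
]

def solve(maze, max_steps=100):
    """Walk from 'S' using the left-hand rule; True if 'E' is reached."""
    r, c = next((i, row.index('S')) for i, row in enumerate(maze) if 'S' in row)
    dr, dc = 0, 1                                   # start facing east
    for _ in range(max_steps):
        if maze[r][c] == 'E':
            return True
        # candidate headings relative to the current one: left, straight, right, reverse
        for ndr, ndc in [(-dc, dr), (dr, dc), (dc, -dr), (-dr, -dc)]:
            if maze[r + ndr][c + ndc] != '#':
                dr, dc, r, c = ndr, ndc, r + ndr, c + ndc
                break
    return maze[r][c] == 'E'

print(solve(MAZE))   # it gets out - and a second run starts from zero again
```

      Run it twice and the second run is exactly as "smart" as the first, which is the point being made above.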

    2. Dan 55 Silver badge

      Re: I don't see a problem with hard-coded knowledge

      When someone plays (Ms) Pac Man for the first time*, they have to learn that ghosts are deadly unless you eat the pills and then they give you points. They also have to learn how (Ms) Pac Man moves and so on.

      If an AI can do that, that's more impressive. If it's got the rules already, it might as well be a Lua script.

      * Well, less so now as it's common knowledge.

      1. P. Lee

        Re: I don't see a problem with hard-coded knowledge

        >When someone plays (Ms) Pac Man for the first time*, they have to learn that ghosts are deadly unless you eat the pills and then they give you points. They also have to learn how (Ms) Pac Man moves and so on.

        Well, you learn facts. But is that Intelligence? How much intelligence (vs memory) do humans use when playing?

        Intelligence generally involves guesswork. Even without seeing the effect, do you guess that ghosts are bad? Do you guess that the aim of the game is to eat all the dots and that the flashing ones mean something special?

      2. Orv Silver badge

        Re: I don't see a problem with hard-coded knowledge

        "When someone plays (Ms) Pac Man for the first time*, they have to learn that ghosts are deadly unless you eat the pills and then they give you points. They also have to learn how (Ms) Pac Man moves and so on."

        All of which you can learn from reading the rules summary on the screen bezel or watching the attract mode. For the most part those aren't trial-and-error issues.

    3. Anonymous Coward
      Joke

      Re: I don't see a problem with hard-coded knowledge

      > Even human babies (more so animal babies) come with a large collection of knowledge hard-coded in their DNA. What's wrong with that?

      Babies come with remarkably few 'hard-coded' or instinctive responses. From observation of my own son, I remember only 4 behaviours that were there from birth and not learnt. They were: the ability to laugh or giggle, sneeze, cry and twist his mum around his little finger.

  4. Will Godfrey Silver badge
    Unhappy

    Situation Normal

    It seems any claim Microsoft makes falls apart once you examine it critically. This whole thing totally misses the point of 'learning'.

    Also, this game is based on response time more than anything else. Force the system to have the same response times as a human (while the game runs at normal speed), then see how well it fares.

  5. d3vy

    Does the computer know it's playing a game or does it think it's trapped in a neon maze being chased by ghosts?

    Sounds like we need rights for AI to prevent further torture of these poor machines :)

  6. Anonymous Coward
    Anonymous Coward

    Problem?

    I can't quite see what the problem is here. If you make AI-based chess software, would you expect it to have to work out the rules of chess and the relative values of the pieces by itself? Or for a self-driving car AI, would you expect it to learn how to drive by eventually working out that it's better for it not to cause accidents?

    1. diodesign Silver badge

      Re: Problem?

      Yes - that's why it's called machine learning. Chess programs aren't AI. They are algorithms and preprogrammed patterns. Airbus autopilot isn't AI. It's algorithms and preprogrammed patterns. Compare this to DQN, which didn't even know what the game controller's buttons did. It was given video frames and told to get on with it. It had to figure it out from scratch. That's where AI wants to head if it's going to make anything remotely intelligent.
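      A rough flavour of that "figure it out from scratch" loop, in tabular form rather than DQN's deep network over video frames — the six-cell corridor environment below is invented for illustration, and the agent only ever sees (state, reward, done):

```python
import random

random.seed(0)  # for reproducibility of the demo only

def step(s, a):
    """Toy environment: a 6-cell corridor with reward 1 at the far right.
    The agent is told nothing about what actions 0/1 mean; like DQN, it
    only observes the resulting (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else s + 1
    return (s2, 1.0, True) if s2 == 5 else (s2, 0.0, False)

Q = {(s, a): 0.0 for s in range(6) for a in (0, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.2

def greedy(s):
    # random tie-break so the untrained agent wanders in both directions
    return max((0, 1), key=lambda a: (Q[(s, a)], random.random()))

for _ in range(500):                   # pure trial and error, episode by episode
    s, done = 0, False
    while not done:
        a = random.choice((0, 1)) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # the Q-learning update
        s = s2

policy = [greedy(s) for s in range(5)]
print(policy)   # the agent has worked out "always go right" from reward alone
```

      Nothing in the code says which action is "forward"; the policy falls out of the reward signal, which is the distinction being drawn with hand-coded players.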

      Sure, the world runs on algorithms and preprogrammed patterns. No problem with that. Let's just not trumpet it as AI.

      C.

    2. Destroy All Monsters Silver badge

      Re: Problem?

      Indeed, Indeed.

      "Blank Slate" AI has died back in the 60s I think (so has "Blank Slate" cognitivie science I would hope).

      Well, "Artificial Life/Evolution" would be the closest to it nowadays, but that's a few layers down, so to say.

      Next up: The negative-externality-dropping Beast of Redmond creates Roko's Basilisk from old pieces of WinNT networking code. THE END!

    3. Lee D Silver badge

      Re: Problem?

      "work out the rules of chess and the relative values of the pieces by itself"

      Rules, maybe. But they can learn that by making random moves and some control somewhere says "Invalid move, you lose". That's INFINITELY better than "you can only make moves from the restricted subset we offer you that you never have to consider" in terms of learning.

      And, similarly, value is a heuristic. The value of a piece is nonsense compared to whether you win. You can sacrifice every piece on the board so long as you end up checkmating. That "value" could be learned or hard-coded. Learned value - when it decides itself "Actually, my queen is probably worth more than that" rather than adds up some metric - is what you're after if you're claiming "AI" and "machine learning".

      It's about what you test on. Are you testing "can this machine learn to play the game by itself" or are you testing "Can this machine find what we would call an optimal play in this heavily-prescribed world". They are claiming the former, but actually it's the latter.

      You have to consider this: If your machine is "learning" then you could throw it at Ms Pac Man and train it. You would then be able to move THE EXACT SAME PROGRAM to, say Pac Man 2000, not tell it what the difference is, and train itself towards optimal play for that WITHOUT TWEAKING.

      This program couldn't. It's been told what the values are and what to do; in a limited way, perhaps, but it's been instructed. That's not "learning", that's some kind of "organic growth programming from seed". And the whole point of "learning" is not to make a Ms Pac Man player. Any idiot can do that. It's to make a machine that learns. If it only "learns" Ms Pac Man when it is hand-led, then it will forever need to be hand-led on every task it does.

      To be machine learning, it would have to arrive at that itself, naturally. Even if you start from zero knowledge, or from knowledge of ENTIRELY THE WRONG GAME. It should learn enough that it realises that.

      Otherwise, all you've made is a very expensive computer player, and nobody is going to care about your research, licensing your patents, etc.. Although we might call them "AI" players, they aren't. What people are after, the value we seek, the thing that makes money, the thing we don't have, the useful feature... is learning.

      And learning shouldn't need to be hand-held. Stick a new-born animal in a room and it will learn when/how it gets fed without any extra tuition. If you make a change to that, it will adapt to it. The "seed" is sown before it ever knew what task it was up against. And it learns and adapts to the tasks given from then on.

      1. Orv Silver badge

        Re: Problem?

        By these standards, nothing I did in school was "learning," and most of my classmates weren't intelligent. ...okay, maybe you have a point.

      2. cosmogoblin

        Re: Problem?

        Well said. I'd only add that you need SOME initial instruction - animals have the urge to survive, and hunger feels unpleasant so they eat to sate it. An AI gamer needs to be able to identify a goal (eg "more points") and associate that positively.

        Once it can start with only "I must win" and "this tells me if I'm winning", and learn how to win, I'd argue that's the goal MS claim to have achieved.

    4. Anonymous Coward
      Anonymous Coward

      This is a non sequitur

      If all you programmed a computer chess player with is the rules about which pieces can move where, and the rules about check/checkmate, it would never become any good. How is it going to figure out for itself how to look ahead multiple moves, let alone how to prune bad paths to keep the search space manageable when looking ahead more than 3-4 moves? All this stuff is programmed into a chess-playing computer. If it was actually AI, it would figure all that out for itself.

      By hardcoding point values like -1000 for a ghost, and programming it to compute point values of moves and try to optimize, they've basically done all the "intelligent" parts for it, so it is just running a simple math formula to maximize point values. That's not intelligence in any way shape or form.
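      The "really fast idiot" described here amounts to something like the sketch below — the values and the grid are made up to illustrate the commenter's point, not taken from Microsoft's actual code:

```python
# Every scrap of "intelligence" lives in these programmer-chosen weights;
# the machine just does arithmetic over them.
VALUES = {'ghost': -1000, 'power': 50, 'pellet': 10, 'empty': 0}

def best_move(pos, world):
    """Score each adjacent tile with the hard-coded table and take the max."""
    x, y = pos
    moves = {'up': (x, y - 1), 'down': (x, y + 1),
             'left': (x - 1, y), 'right': (x + 1, y)}
    return max(moves, key=lambda m: VALUES[world.get(moves[m], 'empty')])

# A ghost to the left, a pellet above, a power pill to the right:
world = {(0, 1): 'ghost', (1, 0): 'pellet', (2, 1): 'power'}
print(best_move((1, 1), world))   # picks the highest-valued neighbour
```

      Swap the numbers in `VALUES` and you get a different "player"; nothing in the loop itself ever changes, which is the objection being made.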

      1. Orv Silver badge

        Re: This is a non sequitur

        How is it going to figure out for itself how to look ahead multiple moves, let alone how to prune bad paths to keep the search space manageable when looking ahead more than 3-4 moves? All this stuff is programmed into a chess playing computer. If it was actually AI it would figure all that out for itself.

        When I learned to play chess, I learned a lot of that sort of thing from books and annotated examples of past games. I'm not convinced that strong AI requires working everything out from first principles. That's not how we expect humans to learn. We don't throw someone in a car alone and hope they figure out what the pedals do, what road markings mean, and that hitting things is bad.

        1. Anonymous Coward
          Anonymous Coward

          Re: This is a non sequitur

          OK, I'm willing to let the AI read chess books and learn strategy from them. But not have it programmed in.

          I will agree something is an AI if it can learn how to do something from reading a book, or watching others do it. Programming it how to do it, so that it is simply applying scoring algorithms and being a "really fast idiot" searching massive solution spaces, is NOT AI though. In such cases the intelligence came from the programmer, not the machine running the program.

  7. TReko

    Great explanation

    good, clear journalism!

  8. Milton

    No such thing as AI yet

    LeeD has it absolutely right. None of the programs touted by marketurds as "AI" is really anything of the kind. Like "cloud", it's a term trowelled onto anything corporations want to sell or make headlines with. Though I confess it is appropriate to see a term as vague as "cloud" used to describe a truly vague concept that has been morphing like a drunk amoeba since mainframe days.

    Not only is "AI" a nonsense given the nature of the coding (which could cover any combination of neural network simulation, reward-seeking, machine learning ad nauseam, but never, ever gets close even to the versatility of intelligence of a shrew); the fact that these much-hyped machines can succeed only at single, extremely clearly-defined, rules-based tasks shows how hollow the claims of "intelligence" are. None of the so-called "AI" systems could even begin a Turing Test; none of them can emulate even the smarts of a tiny mammal. And given that the roots of the word "intelligent", and any attempt to measure or compare it, are completely founded in our understanding of how humans and animals perform, why are we even using the word?

    I'll believe you have a true AI when I can converse with it using real speech, the written word and a variety of images, discuss in real-time topics ranging from science to ethics to literature to butterflies to math to philosophy to marriage to religion—and come away after a couple of hours convinced that you lied to me, and that behind the screen was a well-adjusted, educated, experienced human being.

    Until then, while I appreciate and am impressed by some immensely clever programming and powerful silicon, talk of AI is pure marketing BS.

    1. Anonymous Coward
      Anonymous Coward

      Re: No such thing as AI yet

      That sounds more like artificial human intelligence - I suspect that is neither achievable nor very interesting. Of course artificial machine intelligence could get too interesting...

  9. Destroy All Monsters Silver badge
    Holmes

    Where is my lab coat and pipe?

    My dear academic fellow,

    I contend that this is squarely in the tradition of

    Learning Classifier Systems

    A better writeup can also be found here:

    Learning Classifier Systems: A Survey

    There is also this mucho-bucko paywalled article in the IEEE Computational Intelligence: A Survey of Learning Classifier Systems in Games which I'm currently reading, indeed I hadn't encountered the framework of the LCS until I stumbled upon said paper (even though I have a copy of Reinforcement Learning: An Introduction, printed 1998, somewhere. Getting old.)

    The earlier-talked-about ALPHA air combat NPC also fits into this tradition, it just uses a fuzzy inference system (not sure what that is exactly, it involved another set of equations with sigmas, pis and indexed and/or hatted variables) to do its thing.

  10. Adam 1

    give it a real difficult problem...

    ... like trying to create a user account in Windows 10 without syncing with the mothership

    1. theblackhand

      Re: give it a real difficult problem...

      That's not so much AI as brute force.

      Although wire cutters allow for less force and a little more finesse...

  11. James 51

    Sounds a lot like how the first version of The Sims worked.

  12. AIBailey
    FAIL

    So much wrong with this.

    Other than the background at the top of the article, those screenshots look to be from the Atari VCS version of the game - which is far, far removed from the arcade version.

    Also, a quick google would suggest that the current human record for Ms Pac-Man is 933,580, set back in 2006 - http://www.twingalaxies.com/scores.php?scores=1386

    So basically, what the article is really saying (errors notwithstanding) is that, after a lot of hard work, Microsoft have produced something AI-ish that can set a slightly higher score on a dodgy home conversion of Ms Pac-Man than a human achieved on the genuine arcade version over 10 years ago.

    Doesn't quite have the same ring to it though.

    1. J. Cook Silver badge
      Boffin

      Re: So much wrong with this.

      Correct! It was, in fact, the Atari VCS (aka 2600) system and not the arcade version.

      If it was the arcade version, there'd be at least one boffin wanting to know how the difficulty switches were set on the unit.

    2. Anonymous Coward
      Anonymous Coward

      Re: So much wrong with this.

      Somewhere around here I have a book with instructions on beating Pac-Man perfectly. Every time. Three different patterns, in fact. (ISTR that two were called "Bezo's Breaker" and "The Donut Dazzler".) I'm pretty sure the book is older than I am.

      So a machine can be hard-coded to play perfectly a game that a human can play almost perfectly; the computer doesn't get sick of it or sneeze at an inopportune moment and thus make a mistake. That's the big news?

  13. James Howat

    How things learn

    If your expectation is that the agent "learns how to play Ms. Pac-man", then yes, it's misleading.

    But if the expectation of the exercise is that the agent "learns how to play Ms. Pac-man well", then I think it's more-or-less accurate.

    No human player of the game would start playing without seeing the back of the box telling you the premise of the game, what you're supposed to avoid, and what gives you bonus points. We wouldn't turn to a human and say, "you're not a good chess player, you didn't work out how a bishop moves all by yourself!"

    1. Anonymous Coward
      Anonymous Coward

      Re: How things learn

      People learned how to play by watching someone else play it... If the "AI" could do that, I'd be more impressed.

    2. DropBear
      WTF?

      Re: How things learn

      "No human player of the game would start playing without seeing the back of the box"

      You're so full of it it's not even funny, I just can't decide whether you're doing it on purpose. Seriously?!? Is that a joke...? Because I'm not laughing.

      All my childhood I've played any game I could get my hands on never having seen a manual let alone any box - I just started expecting a default-ish control key map and looked at what I had on the screen. If it was vaguely car-shaped and the game had "rally" in its title, I tried driving it, until it went "boom" and I learned that maybe I don't want to touch vaguely bush-like things on the road side (or were they rocks? Why would I care? They were boom-things, end of story).

      Same for black spots on the road - they seem to be oil since they make me lose control = really bad, avoid. And if something seemed to shoot at me I definitely tried to a) dodge b) shoot back. If there was a place I could never reach, I tried to jump. If I didn't know what the jump key was, I kept mashing every button I had until my sprite jumped. If I could walk right up to it but not pass, I looked for a something that I could "collect" and tried again. If I had a horizon in front of me and the game's name implied flight in some way but nothing happened when I pressed "standard up", I kept mashing keys until something finally revved up - then I tried "up" again...

      Where the hell you get the idea that playing any game involves "instructions" first, by necessity, I simply cannot fathom...

  14. Fading
    Terminator

    meh

    I'd be more impressed if it was able to get that score whilst being hassled by the chippy shop owner to get off that machine and get out as he's closing up soon.......

    1. Chunes

      Re: meh

      Better still, give it a few beers, which is how I used to play Pac-Man in the pubs after our band packed up.

  15. Sil

    Not sure there is a problem: most systems at least predefine the original weights, which may then change dynamically, since you can save a lot of time this way.

    Also I was under the impression that the overwhelming majority of such systems were given the rules of the game they were trying to beat, rather than discovering them. Wasn't that the case with Deep Blue and AlphaGo?

    Or did they really learn everything the way you describe the Space Invaders one learning?

  16. John Smith 19 Gold badge
    Unhappy

    A suggestion to tell whether something is actually "learning"

    Not only would its scores improve with time but also the rate at which its scores improve should increase.

    The first means it has learned what to do.

    The second that it has learned what to do which is important.

    This demo seems to show the first (it gets better) but not that it's gotten better by "working out" what's important (because it's not adjusting its own weightings; they've been hard-coded in).
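    Taken literally, the two-part test is easy to state in code — a crude sketch, since real learning curves are noisy and would need smoothing before checking either property:

```python
def learning_signature(scores):
    """A crude reading of the two-part test on a run of game scores:
    returns (improving, accelerating) - is the agent getting better, and
    is it getting better *faster*, i.e. has it worked out what matters?"""
    gains = [b - a for a, b in zip(scores, scores[1:])]          # per-game improvement
    improving = sum(gains) > 0
    accelerating = sum(b - a for a, b in zip(gains, gains[1:])) > 0
    return improving, accelerating

print(learning_signature([100, 200, 300, 400]))   # steady gains: improving only
print(learning_signature([100, 150, 250, 450]))   # growing gains: both tests pass
```

    On this reading, hard-coded weightings would give you the first signature at best, never the second.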

    Now " AI through language processing," sounds quite interesting, depending on exactly what they mean by the phrase.

    Actually trying to make sense of a sentence by, y'know, reading it (rather than running it against umpteen million other sentences) has seen limited interest for some time now. Looking at it again might prove useful.

    Not so much for playing Pac-Man. Or even for Ms Pac-Man.

    1. DropBear

      Re: A suggestion to tell whether something is actually "learning"

      "Actually trying to make sense of a sentence by y'know reading it (rather than running it against upteen million other sentences) has seen limited interest for some time now. Looking at again might prove useful."

      Unlikely. The information in a sentence is not actually in the letters or words but in the immense amount of personal (direct or indirect) experience conjured up by them when you read them, and the specific structure they're arranged into. "I rode a bicycle with no hands today" has no meaning to someone who has never ridden a bicycle, or doesn't at least have extensive indirect knowledge of what riding one entails. The words are just the index key into a (hopefully shared) giant database.

      Granted, it is possible to _explain_ the same thing to someone without any of that, but that assumes the missing knowledge is a somewhat isolated hole in the net, not the entire net missing, which makes this a bit of a chicken-and-egg problem. You can't really explain much of anything to someone who has no common experience base or at least a common language (most of which they already understand) with you. You could point at yourself and say "Tarzan", but even that assumes they already understand what pointing is, what names are, and that you're a living, conscious being just like they are. Even then, doing the same with "me" might be problematic once they grin, point at you, and repeat "me"...

      So yeah, if you somehow got the impression that I believe we won't see any real AI until machines equipped with some majorly serious potential to store data and make connections get to _interactively_ experience our reality on their own - well, you would be right.

      1. Baldrickk

        Re: A suggestion to tell whether something is actually "learning"

        "I rode a bicycle with no hands today" Not that I have ever seen a bicycle with hands before. Why would a bicycle need hands?

      2. John Smith 19 Gold badge
        Unhappy

        ""I rode a bicycle with no hands today" has no meaning to someone who has never rode a bicycle"

        Actually it has no meaning to anyone.

        1)You mean you learned to ride a bicycle and you have no hands? That's amazing.

        2) You learned to ride a bicycle while keeping your hands away from the handle bars? That's quite clever.

        People who do computational linguistics spend lots of time puzzling over this stuff. I like to think of it as the "decidability" problem, and IMHO a lot of the time the "correct" parse of a sentence should be "ambiguous". IOW, "Fu**ed if I know what you're talking about."

        Either maintain probabilities for which meaning is correct or identify what facts you need to find out (or already have) about user "DropBear" to establish which meaning is (probably) true. Disambiguation. Not just for wiki pages.

        Caveat. I'm not in the AI business. You could say I'm just artificially intelligent about AI.

        1. Robert Forsyth

          Re: ""I rode a bicycle with no hands today" has no meaning to someone who has never rode a bicycle"

          You miss the point

          I have the experience of riding a bike, like many other people; using that experience as context makes the sentence (even badly formed) unambiguous.

          You could argue that the story involved in most contexts is like loose rules.

          You enter a train carriage and ask:

          "Is this seat taken?" or

          "Is this seat free?"

        2. DropBear

          Re: ""I rode a bicycle with no hands today" has no meaning to someone who has never rode a bicycle"

          Okay, fine, I'm not the best at picking sentences that are colloquially used yet have little meaning for the layman. Much fun to be had, have at it. The point I was trying to make though wasn't about ambiguity - which is a major issue on its own, but as noted it's not like humans are exempt either - but about how words have no "meaning" on their own unless the reader already has some reference regarding the subject, and how other words can only be used to bridge the gap if it's an isolated one. When you have no reference grounded in experience for any word, you can't bootstrap your way in by "explaining" or "describing" anything.

          It's hard enough to start communicating with someone who doesn't speak your language at all, but at least you can still count on a huge body of presumably shared experiences with that person by sheer virtue of them also being a Homo Sapiens, presumably with many years of experiencing "being alive" under their belt. What I'm trying to propose is that the same task is basically impossible with a machine that lacks specifically that. All the grammatical wizardry means nought when there is ultimately nothing to attach all the pretty parsed verbs, nouns and all the rest to, even if you sorted out which one qualifies which.

          Even further, I'm proposing that any attempt to communicate with a machine, whether by language, pictures, five musical tones or interpretive dance is pointless until we create one that "experiences" our world in some meaningful way (no, I don't think spidering our web is enough - it needs to be able to interact with the world) and manages to develop some sort of externally observable consciousness / awareness transcending what we observe with animals.

          Specifically, I don't think we can arrive anywhere near the same result by simply throwing more code (or anything on the level of our current "neural networks" for that matter) at neatly arranged letters, expecting a machine to suddenly start making truly meaningful determinations based on them, which is what I think should be the yardstick AI is measured against. Concluding that "please", "buy" and "toilet paper" close together probably means we want it to do some shopping on Amazon is not what I'm talking about. We already have that, and I think it pretty much got as close to "AI" as it ever likely will. Even more specifically, I don't think it's possible to create an apparently intelligent "conversation machine" in a box, then optionally attach a body to it if we so desire - it's the other way around, a body is a mandatory first step, and language comes much, much later. If we want something with any actual intelligence, we'll need something that was born / lives out here...

  17. Kaltern

    Come back when they can make a self-learning program, with basic rules.

    1. This is your character. It can move up, down, left, right. It can't go through walls. It can be killed by ghosts unless you eat a power pill, and then only while they flash.

    2. You must eat all the dots, and not be killed.

    That's it. The programmers can only code those specific things, and the means to move itself.

    I'll be impressed when it can reach 999,999 with no further interaction.
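    That constraint amounts to an interface contract: the programmer hands over only the rules of the world, and the learner must supply everything else. A hypothetical sketch of what that hand-over might look like (all names here are invented):

```python
from typing import List, Protocol

class MsPacManRules(Protocol):
    """The entire permitted hand-over: rules of the world, nothing about strategy."""
    def legal_moves(self, state) -> List[str]: ...  # up/down/left/right, minus walls
    def step(self, state, move): ...                # the world after one move (ghosts included)
    def dots_left(self, state) -> int: ...          # you win when this reaches zero
    def dead(self, state) -> bool: ...              # ghost contact with no active power pill
```

    Anything beyond implementing those four methods — reward shaping, ghost valuations, sub-agents — would count as coaching the player rather than letting it learn.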

  18. Prst. V.Jeltz Silver badge

    It's actually some way off beating a human

    To quote Guinness:

    Billy Mitchell (USA) scored the first "perfect" PAC-MAN game (3,333,360 points) on 3 July 1999; four more players have matched it. The top players now consider it a greater accomplishment to achieve the perfect game in the fastest time. Top 5 "perfect" PAC-MAN rankings:

      • Chris Ayra (USA), 3:42:04, 16/2/2000
      • Rick Fothergill (Can), 3:58:42, 31/7/1999
      • Tim Baldarramos (USA), 4:45:15, 8/8/2004
      • Donald Hayes (USA), 5:24:46, 21/7/2005
      • Billy Mitchell (USA), 5:30:00, 3/7/1999

    (GWR Video Gamer's Edition 2008)
