Tricky
In an IM chat you'd probably pick out humans from their typos - and if you modified an AI to make human-like typos you'd be moving away from the true aim and into a pass-this-particular-test scenario.
An artificial intelligence software contest, based on an experiment devised by mathematician Alan Turing, will be held this year in his old office at the wartime code-breaking HQ, Bletchley Park. The location was chosen to mark the centenary of his birth in 1912. During the Turing Test a computer program must use natural language and hold …
"I suspect any 'AI' entered into this competition is engineered with a focus on this particular test anyway."
Yes, good point, but it would be a shame if any effort were directed at making responses less perfect than they might be, just in order to win the prize.
I mean, you could consider adding in possible responses along the lines of "Yeah, my mum always says that" or "That's not how I learned it at school" to try to fool the judges, but it wouldn't advance the core technology at all.
"grand prize of $100,000 and a gold medal for the first computer whose responses are truly indistinguishable from a human's"
I'm curious to know how the threshold for winning this prize is determined. 'Judges' only rank entries from most-to-least 'humanlike'; they make no absolute judgement on whether they believe an interlocutor is or isn't human. Also, the judges are a smallish panel, so how many real people need to be convinced that a computer is human before it's considered "truly indistinguishable"?
People have been talking about machine intelligence for at least half a century. This usually includes statements to the effect of "If a machine can do (such-and-so), then it's intelligent." And quite a few times, machines have done (such-and-so) competently, and the judges have simply moved the goalposts.
The question is not whether machines may become intelligent. The question is whether humans will admit it when they do.