Artificial intransigence

June 24th, 2004 by Ben Goldacre in bad science, bbc, dangers, nanniebots, new scientist

Ben Goldacre
Thursday June 24, 2004
The Guardian

· You may remember Jim Wightman. He claimed to have written a piece of chat software that could pass itself off as a real child in a chatroom and identify internet paedophiles by their behaviour. To say this was thought highly dubious is an understatement: the software, if it existed, would have been 10 years ahead of everything written by huge teams of AI academics; he offered to let us see the software working, then refused; and the NSPCC and Barnardo’s distanced themselves from his plan to monitor children’s online activities himself, despite his having no child protection background. Embarrassingly, New Scientist accepted his claims uncritically, and the BBC and others followed suit, although New Scientist did, after two pieces here, remove its glowing article about him from its website.

· Now they’re back with Wightman. Here’s what happened. New Scientist visited Jim at home, bringing two AI academics to chat with the program. In previous “test conversations”, conducted over the web, where no experimenter could verify that the computer was not connected to any other, the program gave highly sophisticated answers after a suspiciously long delay (almost as if someone was typing them). This time it instantly gave rubbish computer-generated responses, nothing like those in the previous transcripts. In fact, it gave the very same answers that Alice – an old and not very sophisticated AI program, written by somebody other than Wightman – gave in subsequent tests. Then Wightman offered to show them the code … but suddenly, and inexplicably, the power to Jim’s whole house went off. The test was over. Imagine.
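The giveaway here is a standard one: if a “new” bot returns answers identical to a known program’s, the overlap is easy to quantify. A minimal sketch of that comparison – the transcripts below are invented for illustration, not the actual test data:

```python
# Compare a suspect bot's replies with a known bot's replies to the
# same prompts, and report the fraction that match exactly.
# Both transcripts here are hypothetical examples.

def overlap(suspect_replies, known_replies):
    """Fraction of prompts on which the two bots answered identically."""
    matches = sum(a.strip().lower() == b.strip().lower()
                  for a, b in zip(suspect_replies, known_replies))
    return matches / len(suspect_replies)

nanniebot = ["i am doing well", "what is your favourite movie?", "i like robots"]
alice     = ["i am doing well", "what is your favourite movie?", "i like dogs"]

print(overlap(nanniebot, alice))  # 2 of 3 answers identical
```

A high overlap does not prove the suspect bot *is* the known one, but word-for-word identical answers across a session are exactly the kind of coincidence that warrants the question.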

· Did New Scientist finally give it up? No. “New Scientist can still provide no definitive proof of Wightman’s claims, but looks forward to a return visit when the complete ChatNannies software is available for testing.” Please. Did they ask Wightman about his unlikely claim to have a seven-figure offer from an American corporation which had “full independent testing performed on the AI and are confident of its validity and effecacy [sic]”? He was apparently quite capable of giving them a proper demonstration. Did they quiz Wightman on his previous false claims about writing software, or any of the other issues Bad Science raised? No. To those of us brought up loving the great institution that is New Scientist, it is, as Tibor Fischer said, a bit like bouncing out of the classroom at breaktime, only to catch your favourite uncle masturbating in the school playground.

Nanniebots and Neverland

April 1st, 2004 by Ben Goldacre in bad science, bbc, nanniebots, new scientist

Ben Goldacre
Thursday April 1, 2004
The Guardian

· Right. Where were we? Ah yes, everyone was questioning the authenticity of Jim Wightman’s paedophile-entrapping artificial intelligence chat program Nanniebot, since it was more than 10 years ahead of all other artificial intelligence technology, and no one is allowed to see it in the flesh. But Jim – from the unfortunately named Neverland Systems – had personally guaranteed me a demonstration. Weirdly, Jim is now refusing to do so, although he is still claiming to have thousands of Nanniebots in action on the web. I’m certainly not going to waste your time with an in-depth philosophical analysis of his “chat transcripts” since no-one can be sure they were definitely generated by his program.

· Of course, the BBC, ITV and New Scientist couldn’t possibly have known that Jim was caught out making false claims about writing software a year ago (tinyurl.com/3gfxv), on the Holocaust denial newsgroups he likes to frequent. He now admits to making these false claims, but says they were made in jest. He also got noticed on the TiVo hacking discussion boards, claiming to have modified the device to stream shows over a network, which the other experts felt was impossible (tinyurl.com/38wmx). Jim provided no evidence to make them think otherwise, and disappeared. He still claims to have it working.

· People are perfectly entitled to spend time on Holocaust denial chatboards. Jim admits posting as Death’s Head, the same name as the SS murder and torture squad. Death’s Head has made postings containing violent and graphic threats to rape, assault, and kill, often with a firearm, in the context of chatboard discussions about the Holocaust.

In an online discussion, after similar violent threats were mentioned, a posting did state that “me = Jim Wightman = Death’s Head = Totenkopf… all you needed to do was ask.” (tinyurl.com/2jg3p). Jim denied to me that he made the postings and says they were faked. Maybe they were, but Jim’s previous postings give reason to question his work. So far, he’s made a grand claim with no good evidence: business as usual for Bad Science.

This character is now collecting donations and volunteers for chatnannies.com, a service where adults will enter children’s chatrooms to monitor for paedophile activity. I’m quite sure he will be greatly assisted in this venture by the fact that he now cites, on his website, the uncritical reports of his claims about his work by New Scientist and the BBC.

‘Nanniebots’ to catch paedophiles

March 25th, 2004 by Ben Goldacre in bad science, nanniebots, new scientist

Ben Goldacre
Thursday March 25, 2004
The Guardian

· As I sit here, quietly shedding the weight off my fat arse in my Dr Norbert Wurgler caffeine-impregnated SlimFit tights, I find myself bitterly regretting the title of the column. Ok. So here’s one I’m not sure of. Artificial intelligence is being used to catch paedophiles in the form of “Nanniebots”. These are AI programs which hang out in internet chatrooms, allegedly spotting the signs of grooming. They have done “such a good job of passing themselves off as young people that they have proved indistinguishable from them,” according to New Scientist. That’s the Turing test – where a computer program is indistinguishable from a real person – almost passed then; and who’d have thought it, in a program written by a lone IT consultant from Wolverhampton with no AI background. So I call him.

· Here’s the problem. Reading New Scientist’s chat with Nanniebot at www.tinyurl.com/2y55h, the excellent www.ntk.net/ (Private Eye for geeks) points out that Nanniebot “seems to be able to make logical deductions, parse colloquial English, correctly choose the correct moment to scan a database of UK national holidays, comment on the relative qualities of the Robocop series, and divine the nature of pancakes and pancake day.” Jabberwock, the winner of last year’s Loebner prize for the Turing test, is rubbish in comparison: try talking to it at www.tinyurl.com/2osgo. But Jim Wightman, the Nanniebot inventor – whose site claims they’ve passed the Turing test – isn’t entering the Loebner prize this year: maybe next year … it’s too buggy. But it’s live on the internet already? Can I test it? Sure. But I want to see with my own eyes that there’s not a real human being connected somewhere tapping out the answers. Jim offers network monitoring software on my computer, to prove it’s connected to the one server. But what about that server? I want to see it working on its own without a human, too. Can I come round to Jim’s place? He chuckles … Jim doesn’t keep the conversation datasets on site in Wolverhampton. “I know it sounds a bit Mission Impossible but … ” He’s worried they might get stolen. They’re in a secure facility “with an iron lid under a mountain!” He has no copies. It’s 18 terabytes of data, to be fair. There are copies in the hosting facilities, one in London. I offer to go there. “There might be security issues with them letting us in … ” So here it is. I’m going if I can. I’d love to see it work. If there is an AI academic who wants to come, email me: it could be the biggest ever breakthrough in AI. Or it could be a lot of fun.
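For context on why the chatbots of the day looked so weak next to what was claimed for Nanniebot: programs like Alice and the Loebner prize entrants are, at heart, pattern-matchers that map keywords in the input to canned response templates. A toy ELIZA-style sketch of the idea – my own illustration, nothing to do with Wightman’s code:

```python
import re

# Toy ELIZA/Alice-style responder: the first matching pattern wins,
# and anything unmatched falls through to a stock deflection.
# These rules are invented for illustration.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bi feel (\w+)", re.I),     "Why do you feel {0}?"),
    (re.compile(r"\bpancake", re.I),          "I don't know much about pancakes."),
]

def respond(line):
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            # Substitute any captured words into the canned template.
            return template.format(*m.groups())
    return "Tell me more."  # stock deflection for anything unrecognised

print(respond("my name is Jim"))          # → Nice to meet you, Jim.
print(respond("what are pancakes for?"))  # → I don't know much about pancakes.
```

A bot like this never makes logical deductions or “divines the nature of pancake day”; it only reflects keywords back at you, which is exactly why the sophistication of the published Nanniebot transcripts raised eyebrows.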