Wednesday, May 28, 2008

Turing test 2.0

We all know about the Turing test (an excellent philosophical overview of which can be found here). It's intended to be one measure of progress in artificial intelligence - at least of the weak variety. What I'd like to see is a variant of the Turing test, which I suspect might be even more illuminating than the original.

Basically, it would involve three individuals: two having a conversation (A and B), and a third, a human observer (C). A and B could each be human or AI, and C doesn't know which. The trick would be for the human observer, C, to pick which, if any, are AIs.

I'd imagine the AIs would have to be capable of passing the original Turing test to qualify for Turing 2.0, and the other conditions of the Turing test would apply, such as communicating only via typed messages, etc.
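The setup could even be sketched mechanically. Here's a minimal Python toy of the Turing 2.0 protocol, where the participants, the canned replies, and the observer's classifier are all my own invented stand-ins (not any real chatbot or API) - just enough scaffolding to show the three roles and the transcript-then-guess flow:

```python
# Toy sketch of "Turing test 2.0": A and B converse; observer C reads
# the transcript and guesses which participants, if any, are AIs.
# All names and behaviours here are hypothetical placeholders.

def make_participant(kind, replies):
    """kind is 'human' or 'ai'; replies is a canned list of responses."""
    it = iter(replies)
    return {"kind": kind, "reply": lambda msg: next(it)}

def run_conversation(a, b, opener, turns=4):
    """A opens, then A and B alternate; return the transcript C reads."""
    transcript = [("A", opener)]
    msg = opener
    for i in range(turns):
        speaker, responder = (("B", b) if i % 2 == 0 else ("A", a))
        msg = responder["reply"](msg)
        transcript.append((speaker, msg))
    return transcript

def observer_guess(transcript, classifier):
    """C applies some judgment (a pluggable classifier) to each speaker."""
    return {speaker: classifier(speaker, transcript) for speaker in ("A", "B")}

# Example: two stand-in participants and a naive observer who
# suspects everyone of being an AI.
a = make_participant("human", ["I agree.", "Quite."])
b = make_participant("ai", ["Interesting point.", "Indeed."])
transcript = run_conversation(a, b, "What do you think of Big Brother?")
guesses = observer_guess(transcript, lambda s, t: "ai")
```

The point of the sketch is only structural: C never sees `kind`, just the transcript, which is exactly the asymmetry the test relies on.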

I wonder whether two AIs conversing would follow a different conversational path than an AI talking to a human. For one, this could minimise the impact of dirty tricks played by AIs to encourage the human to steer the conversation. It might also reduce - or at least change the character of - unexpected turns in the conversation that might be a result of speaking to an unpredictable human (and a human who knows they're trying to spot an AI).

Besides, wouldn't it be fascinating just to see what two AIs would chat about? Would they have any opinions on Big Brother? Whom would they pick in the upcoming election? What would they think of the other AI's position? Would they become friends? Could they become outraged? (If so - that would be tremendous: an AI becoming 'outraged' at another AI.) Could it come to blows?

In some ways this parallels a thought I had while studying the philosophy of mind in my undergrad years. Sure, grandmaster-clobbering chess programs are impressive, but they're hardly much sign of true artificial intelligence - even of the weak variety. However, if the chess program got frustrated at losing its queen, knocked the pieces off the table and stormed out to watch TV - THAT would be a sure sign of intelligence, if you can call whatever causes such strangely typical behaviour in us 'intelligence'.

I think the day someone builds an Artificial Idiot is the day we're getting close to building a machine in the image of man.

Tuesday, May 20, 2008

Most undignified

What is dignity?

Like many things, we find it hard to define, although we can effortlessly name a whole slew of things that are undignified. Like sneezing with a mouthful of coffee and having it shoot out one's nose. Or ordering a kebab at 2.30 AM and trying to sound nonchalant when even the pronunciation of basic consonants is beyond one's grasp.

Or, as Harvard psychologist Steven Pinker suggests: getting out of a small car.

(I'd have to agree with that analysis. I own a small car, and had I not left my dignity at a Christmas party some time in the early 2000s, then I'd perhaps have bought a larger car by now.)

Pinker raises this point in reference to a recently released report by the U.S. President's Council on Bioethics, called Human Dignity and Bioethics. This 555-page behemoth contains a series of essays informing the president on ethics and issues pertaining to biomedical innovation.

And as Pinker points out in a lengthy but compelling article in the centre-left journal The New Republic, it's rubbish - and even worse, it's vacuous and religiously-motivated rubbish that threatens to stifle a crucially important debate in science and ethics before it even has a chance to get off the ground.

This article is essential reading for anyone interested in bioethics and related matters, like stem cell research, therapeutic cloning and longevity research.