Friday, June 27, 2008

The Singularity isn't near

One day we'll all be enslaved/liberated by superintelligent machines and/or uploaded/downloaded into vast computer matrices and freed from the evils of death. At least, that's according to proponents of the coming Singularity.

Whether or not you believe the Singularity is near, there's a wealth of fascinating material to read over at IEEE Spectrum Online. For the faithful, it's a wellspring of hope that the rapture might be near. But for sceptics (like myself), it's still full of meaty chunks worth chewing over, if only to sharpen our articulation of why it's all fantasy.

Some highlights include Signs of the Singularity by Vernor Vinge, mathematician, computer scientist and sci-fi author. He helpfully signposts the way to the Singularity to ensure we don't miss it when it comes. Wouldn't that be embarrassing?

And Vinge is boundlessly optimistic about the whole project:
I think it's likely that with technology we can in the fairly near future create or become creatures of more than human intelligence. Such a technological singularity would revolutionize our world, ushering in a posthuman epoch.
Now, I don't doubt our ability to expand human capacity, but it seems a bit of a leap to start talking about post-humans already - especially when we don't really know what a post-human would even be.

It all smacks to me of runaway utopianism built on the unexamined (and I think fallacious) assumption that progress = good. There are even swathes of text written by Singularitists that overtly suggest evolution is a progression towards something better. From Vinge's essay:
there are a couple of trends that at least raise the possibility of the technological singularity. The first is a very long-term trend, namely Life's tendency, across aeons, toward greater complexity. Some people see this as unstoppable progress toward betterment.
This explicitly equates complexity with betterment. Yet wouldn't we look foolish when a supervirus - one of the simplest of all organisms - wipes us out? Or should I say 'betters' us?

Another notion raised willy-nilly by Singularitists is that intelligence is the answer to all our problems. Take this example from Robin Hanson, author of The Economics of the Singularity.

One of the pillars of the modern singularity hypothesis in its many forms is that intelligence is a general elixir, able to cure many if not all economic ailments. Typically, this belief is expressed in the form of an argument that the arrival of very intelligent machines will produce the next singularity.
I wonder which of the seemingly intractable problems we face today could be trivially solved by the brute-force application of more intelligence. Take the Israel/Palestine conflict. Would a superintelligence yield some startling insight that would cause both sides to sit back, shake their heads at their decades of folly, kick themselves for not having seen the solution before, then shake hands and make up?

I find that unlikely.

In fact, I suspect there is already more than sufficient intelligence to suggest an answer to even as intractable a conundrum as the Israel/Palestine conflict: each side needs to make concessions and sacrifices, both material and symbolic, to the other. But that ain't gonna happen, even if Deep Thought thinks it should. That's because there's more than intelligence at stake - there's emotion, pride, greed, ideology, outrage and so on. And intelligence is a drop in the ocean compared to their potency.

That is, of course, assuming the superintelligence doesn't just damn it all to hell and take over:
A few even imagine innovations so unprecedentedly potent that a single machine embodying the first innovation could go through the entire innovation series by itself, unnoticed, within a week, and then take over the world.
Sounds suspiciously like the old benevolent dictator argument to me. And sure, a benevolent dictator would be great, but the problem is not the calibre of the leader; it's that there are no checks and balances, which are arguably more important than a good leader.

Thankfully the Singularity essays are not all by the faithful. In fact, a fascinating one by American science journalist John Horgan looks at the challenges of producing a conscious machine - challenges rooted in our vast ignorance of the human brain.
Specialists in real rather than artificial brains find such bionic convergence scenarios naive, often laughably so.

Indeed, the more you learn about brains, the more you may wonder how the damn things work. And in fact, sometimes they don't. They succumb to schizophrenia, bipolar disorder, depression, Alzheimer's disease, and many other disorders that resist explanation and treatment.
The second point here is especially pertinent. If we're trying to emulate human intelligence, we're more likely to produce a nimrod like Corey Worthington than an Einstein. Would we really want an AI that would rather watch Big Brother, and that wonders whether its cooling units look fat in this colour?

And, more seriously, are we prepared for an AI that suffers from some psychopathology?

Just another reason I think the Singularitists are going a bridge too far with their AI speculations. We have a long way to go before even modest intelligence is possible in an AI, let alone superintelligence.

Hardware powerful enough to support superintelligence is not a sufficient condition for superintelligent AI, despite what Kurzweil might think.
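Kurzweil's optimism rests largely on extrapolating raw computing power. To see how little that extrapolation actually establishes, here's a toy back-of-the-envelope sketch in Python - the figures are illustrative assumptions of mine, not Kurzweil's actual numbers:

import math

# A toy version of the "hardware will get there" extrapolation.
# All figures below are illustrative assumptions, not Kurzweil's numbers.
BRAIN_FLOPS = 1e16      # assumed compute needed to match a human brain
TODAY_FLOPS = 1e12      # assumed compute of a current top-end machine
DOUBLING_YEARS = 1.5    # assumed doubling period for computing power

doublings = math.log2(BRAIN_FLOPS / TODAY_FLOPS)  # about 13.3 doublings
years = doublings * DOUBLING_YEARS
print("Brain-scale hardware in roughly %.0f years" % years)
# => Brain-scale hardware in roughly 20 years

Notice what the arithmetic doesn't contain: any term for the software. It estimates when the hardware might exist, and says nothing about when (or whether) anyone will know what program to run on it.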

Horgan sums up my sentiments towards the whole thing nicely:
Let's face it. The singularity is a religious rather than a scientific vision. The science-fiction writer Ken MacLeod has dubbed it “the rapture for nerds,” an allusion to the end-time, when Jesus whisks the faithful to heaven and leaves us sinners behind.
The escape from death seems such a powerful motivator for Singularity thinking that I'm inclined to agree: it's something to fill the void for the superintelligencia.

2 Comments:

At 12:12 am, Anonymous said...

Your comments re the benevolent dictator may be true, but I don't think it's necessarily something we'll get to vote on, so to speak. It seems to me that the singularity is an inevitable conclusion, whether it comes as quickly as some predict (20 years, or 2045, or whatever) or later, so the moral debate over whether it's a 'good' thing may not be entirely relevant.

To me the whole concept of the singularity and of exponential technological growth (genetics/nanotech/AI) is intriguing because it forces me to ponder concepts that by definition we are not 'intelligent enough' to understand (yet?) - doesn't stop the pondering though.

The Israel/Palestine conflict is an example of our inability to imagine a world that radically different from our own. Perhaps many questions about religion would be answered with the spread of superintelligence. Religious ideas that have only been around for a couple of thousand years may be irrelevant to this new intelligence (whether augmented human intelligence or AI). Or perhaps 'sharing' of physical resources could be achieved in this new world in a way we can't comprehend.

Speculating on how something like this would be achieved (world peace/economic abundance etc.) is fruitless, because if we could do that now we would already be there... still, it's mental masturbation at its best! And if there is any chance of any of it actually happening, it would probably start with someone thinking about it, even if 99% get it wrong...

At 12:58 pm, Anonymous said...

You're too limited in envisioning potential solutions to the Israeli/Palestinian conflict. Along with the singularity and infinite intelligence comes, effectively, infinite money. You can just buy everyone off. At the very least, you can provide access to abundant cheap water and remove one of the underlying root problems.
