Thursday, May 17, 2007

Singularity

He-he. :) I usually find what Michael Anissimov writes insightful and interesting. But this post about the Singularity is utter nonsense. Uncomfortable with its unpredictability, he tries to make a case that human values could survive through it. :)

If I were the first superintelligence, I can tell you what I would do. I would rationally evaluate every aspect of my personality and improve it using Pareto optimisation and stronger forms of optimisation. I would develop and install powerful decision-making subroutines to ensure that every decision I make is as close to optimal as possible, and I would use these subroutines to evolve rapidly.
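To make the Pareto part concrete: a self-modification is a Pareto improvement if it leaves the agent at least as good on every evaluated trait and strictly better on at least one. Here is a minimal sketch in Python; the trait names and scores are invented purely for illustration.

# Minimal sketch of a Pareto-improvement check.
# The traits and scores below are made up for the example.
def pareto_improves(old, new):
    """True if `new` is at least as good on every trait and strictly better on at least one."""
    at_least_as_good = all(new[t] >= old[t] for t in old)
    strictly_better = any(new[t] > old[t] for t in old)
    return at_least_as_good and strictly_better

current = {"memory": 0.6, "planning": 0.5, "speed": 0.7}
candidate = {"memory": 0.6, "planning": 0.8, "speed": 0.7}
print(pareto_improves(current, candidate))  # True: planning improves, nothing gets worse

A "stronger form of optimisation" would then be one that also accepts trade-offs between traits, rather than only strictly dominating changes.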

There is no single human value that I expect to keep. Books are of no use to a superintelligence. Love and sex are something many of us already want to get rid of, a dark vestige of our evolutionary past. Games are inferior forms of Monte-Carlo simulations and evolutionary algorithms. Work will no longer be needed; thought alone would accomplish everything I might ever need. And the food I would need would be measured in ergs, not in calories...

The Singularity is not unpredictable. One thing we can certainly be sure of is change. Lots and lots of it. That's what you can expect after the Singularity.

3 comments:

Michael Anissimov said...

Perhaps a qualifier is necessary. OBVIOUSLY I believe that the Singularity could cause tremendous change. The point of my post is that IF the first superintelligence allows humans to survive and doesn't immediately force them to change radically along with it, there's absolutely nothing wrong with an extremely advanced superintelligent sector existing alongside a traditional human sector. Believe me, I can imagine radical change on such an immense scale that most would be shocked. My point is just that in a successful Singularity, not everyone would necessarily be *forced* to change in every way. Does that make sense?

Danila Medvedev said...

That is extremely misleading, because it ignores the rate of thinking.

Your point is essentially this: "for some time after superintelligence is created, life may go on as usual, keeping everyone happy and comfortable".

This is true at the superintelligence's rate of thinking. For a couple of subjective centuries (mere hours of astronomical time for the superintelligence) not much will change. But for MOSH humans the period during which everything stays as usual will be almost instantaneous.
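A rough back-of-the-envelope version of that, with an assumed speed-up factor of a million (my number for illustration, nothing more):

# Rough illustration of subjective vs. astronomical time.
# The speed-up factor is an assumption made for the example.
speedup = 1_000_000                # assume the SI thinks ~10^6 times faster than a human
subjective_years = 200             # "a couple of subjective centuries"
hours_per_year = 365.25 * 24
astronomical_hours = subjective_years * hours_per_year / speedup
print(f"{astronomical_hours:.1f} astronomical hours")  # ~1.8 hours

So what feels like centuries of business as usual to the superintelligence passes in an afternoon of human time.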

I don't believe that humans will have any say in what eventually happens to them. So even though it is entirely possible that some Mister X will not notice anything unusual (because he is uploaded into a comfortable VR), it is misleading to describe this as "not forcing humans to change". Just as taking an ant from its anthill and placing it into a lab environment (or onto a spaceship) is radical change for the ant, whether it realises or feels that or not.

Lion Kimbro said...

Pareto rationality towards *what,* Danila?

Rationality is always towards some end.

The super-intelligence must have an end, something that it would want.

Human desires and interests are not a burden; they are the purpose.

It is my belief that the more people mature, the more they come to realize that what they want is the near-universal flowering of humanity and high virtues, artificial Gods.

I don't understand why a superintelligence would not be able to understand the wisdom of Shakespeare, Mark Twain, Madeleine L'Engle, and myriad other writers, and to see, understand, and value the promise of all life.

I mean, look-- Optimus Prime. :)

If all you worship is Megatron, who argues that Optimus Prime wastes his strength on the weak-- well, ...

It is clear which sides of this struggle we would be on.

Myself, I'll pick Xavier over Magneto.