Comments

Comment by Anixx on The AI in a box boxes you · 2016-09-11T18:59:26.105Z

I do not know how the simulation argument ever holds water. I can offer at least two arguments against it.

First, it illicitly assumes the principle that one is equally likely to be any member of a set of similar beings, simulated or not.

But a counter-argument would be: there are ALREADY far more organisms, particularly animals, than humans. There are more fish than humans, more birds than humans, more ants than humans. Trillions of them. Why was I born a human and not one of them? If the chances were equal, the probability of that would be negligible. Also, how many animals, including humans, have already died? Again, if I were equally likely to have been any of them, the probability that my lineage survived while all the other branches died out is negligible.
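A toy calculation makes the point concrete. All the population counts below are made-up, order-of-magnitude placeholders chosen only for illustration, not real census figures:

```python
# Toy calculation for the anthropic counter-argument above.
# All counts are made-up, order-of-magnitude placeholders,
# NOT real figures; they only illustrate the shape of the argument.

populations = {
    "humans": 8e9,
    "birds": 4e11,
    "fish": 3e12,
    "ants": 2e16,
}

total = sum(populations.values())

# If I were equally likely to be any of these organisms,
# the probability of finding myself human would be tiny:
p_human = populations["humans"] / total
print(f"P(born human) = {p_human:.2e}")  # on the order of 1e-7
```

Under the "equally probable to be any similar being" principle, finding oneself human at all is already a one-in-millions coincidence, which is the same style of reasoning the simulation argument relies on.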

The second argument goes along the following lines: Thomas Breuer has proven that, due to self-reference, universally valid theories are impossible. In other words, the future of a system that properly includes the observer is not predictable, even probabilistically. The observer cannot be simulated; in other words, the observer is an oracle, or hypercomputer, in his own universe. Since the AGI in the box is not a hypercomputer but merely a Turing-complete machine, it cannot simulate me or predict me (from my own point of view). So there is no need to be afraid.
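The self-reference obstacle can be sketched with a standard diagonalization toy. This is not Breuer's actual proof, and every name in it is hypothetical; `predict` merely stands in for the boxed AGI's model of me:

```python
# A minimal diagonalization sketch of why a predictor cannot reliably
# predict an observer who can consult its prediction. `predict` stands
# in for the boxed AGI's model of me; all names here are hypothetical.

def predict(agent) -> str:
    """The AGI's (hypothetical) prediction of what `agent` will do."""
    return agent(dry_run=True)

def me(dry_run: bool = False) -> str:
    """An observer who consults the prediction and does the opposite."""
    if dry_run:
        # Inside the AGI's simulation the recursion has to bottom out
        # somewhere; assume the simulated copy just picks an action.
        return "comply"
    return "refuse" if predict(me) == "comply" else "comply"

print(predict(me))  # the AGI's prediction: 'comply'
print(me())         # my actual action:     'refuse'
```

Whatever the predictor outputs, the real observer can do the opposite, so a merely Turing-complete machine embedded in the same universe as me cannot be a universally valid predictor of my behavior.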
