Comments
I suspect that the paradigm of computation one chooses plays an important role here. The paradigm of a deterministic Turing machine leads to what I described in the post: one-dimensional sequences and guaranteed solipsism. The paradigm of a nondeterministic Turing machine allows for multi-dimensional sequences. I will edit the post to reflect this.
Solomonoff induction doesn't say anything about larger world models that contain the one-dimensional sequences that form the Solomonoff distribution. You appear to be saying that although the predicted sequence is always solipsistic from the point of view of the inductor, there can be a larger reality that contains that sequence, but that is an extra add-on that doesn't appear anywhere in the original Solomonoff induction.
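For concreteness, here is a toy sketch in Python of that one-dimensional structure. The hypotheses and their "program lengths" are hand-picked stand-ins (real Solomonoff induction sums over all programs for a universal Turing machine), so treat this only as an illustration of how the inductor weighs sequence generators by 2^-length and never sees anything but a single sequence:

```python
# Toy sketch of Solomonoff-style prediction over a one-dimensional
# bit sequence. The hypotheses and their "program lengths" below are
# hand-picked illustrations, NOT real programs on a universal machine.

HYPOTHESES = [
    # (name, program length in bits, generator for bit i)
    ("all zeros",    3, lambda i: 0),
    ("all ones",     3, lambda i: 1),
    ("alternating",  5, lambda i: i % 2),
    ("ones after 2", 8, lambda i: 1 if i >= 2 else 0),
]

def predict_next(observed):
    """P(next bit = 1 | observed): each hypothesis consistent with the
    data contributes weight 2**(-length), as in the Solomonoff prior."""
    total = ones = 0.0
    for _name, length, gen in HYPOTHESES:
        if all(gen(i) == bit for i, bit in enumerate(observed)):
            w = 2.0 ** -length
            total += w
            if gen(len(observed)) == 1:
                ones += w
    return ones / total if total else 0.5  # no hypothesis fits: uninformative

# After seeing 0, 0 the simpler "all zeros" hypothesis dominates:
print(predict_next([0, 0]))  # ~0.03, i.e. the next bit is probably 0
```

Note that every hypothesis here is, by construction, a generator of one sequence and nothing else; any "larger world" containing that sequence is indeed nowhere in the formalism.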
Real men wear pink shirts that say "REAL MEN WEAR PINK".
Realer men wear pink shirts that don’t say anything.
Even realer men break the spell and wear what they actually like.
And the realest men of them all wear fedoras.
Interesting. What is the difference then between illusionism and eliminativism? Is eliminativism the even more "hard-core" position, where illusionism denies only the existence of phenomenal properties, but not of experience, while eliminativism denies the existence of any experience altogether?
The referent of the label "2+2" is strictly identical to the referent of the label "4", but the labels "2+2" and "4" themselves are obviously not identical.
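To make the distinction concrete, here is a quick Python sketch (assuming nothing beyond the standard library) in which the two labels are visibly different objects while their referents coincide:

```python
# The labels "2+2" and "4" are distinct syntactic objects, but they
# evaluate to one and the same referent.
import ast

label_a, label_b = "2+2", "4"

# As labels (parse trees), they are clearly not identical:
print(ast.dump(ast.parse(label_a, mode="eval")))  # a BinOp node
print(ast.dump(ast.parse(label_b, mode="eval")))  # a bare Constant node

# As referents (evaluated values), they are one and the same:
print(eval(label_a) == eval(label_b))  # True
# In CPython, small integers are even the same object in memory:
print(eval(label_a) is eval(label_b))  # True
```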
What would the statement of illusionism be then? That 🟩 is an illusion? Surely, yes, but digging deeper, you would get to some form of brain activity.
It is my impression that certain people think that illusionists deny that there is any 🟩 even in the map, and I have never heard any illusionist make that argument (maybe I just haven't been paying enough attention though). The conversation seems to be getting stuck somewhere at the level of misunderstandings concerning labels and referents. The key insight that I am trying to communicate here is that when we say that A is B, we generally do not mean that the label A is strictly identical to the label B - which it clearly isn't. This applies even when we say things like 2+2 = 4. Obviously, the symbols "2+2" and "4" are not even close to being identical. Everyone understands this, and everyone understands that when we say that 2+2 = 4, we use two different sets of symbols to refer to one single mathematical object.
Claiming that greenness is activity in the visual cortex does not amount to denying that there is 🟩.
But again, perhaps I just misunderstand illusionism (although not even Keith Frankish himself would deny that there is 🟩, see the video linked in the post). Are there any illusionists around here who are claiming that 🟩 is not?
As a side note, perhaps I will stop using the verb "to exist" altogether, and instead start using "to be".
In that case, perhaps I could leave it as an exercise for the reader to deduce what was there originally. Maybe it could be a good intelligence test for GPT-4...
More seriously, the reason I am reluctant to use the heart is that a heart shape is usually mentally associated with all kinds of things that are entirely irrelevant to this discussion, and could generate confusion. When writing the post, I deliberately chose the green square as the shape least likely to be distracting.
Of course, if most readers see a placeholder box, it will generate even more confusion, so... To edit or not to edit, that is the question.
Of course numbers exist. Here is one.
So I will just leave you with this: 🟩
I am beginning to think that I should not have used the quotes at all. I used them more or less as a highlighting tool; perhaps a different font style would accomplish this better. I might edit the post. Regarding existence, I am using the verb "to exist" as synonymous with the verb "to be". And it is my impression that illusionists generally do not deny that (not going to use any quotes this time) 🟩 is. If some here do, I would be interested in hearing their arguments.
Yes, we would be even worse off if we randomly pulled a superintelligent optimizer out of the space of all possible optimizers. That would, with almost absolute certainty, cause swift human extinction. The current techniques are somewhat better than taking a completely random shot in the dark. However, especially given point No. 2, that can offer us only very little comfort.
All optimizers have at least one utility function. At any given moment in time, an optimizer is behaving in accordance with some utility function. It might not be explicitly representing this utility function; it might not even be aware of the concept of utility functions at all. But at the end of the day, it is behaving in a certain way as opposed to another. It is moving the world towards a particular state, as opposed to another, and there is some utility function that has an optimum in precisely that state. In principle, any object at all can be modeled as having a utility function, even a rock.
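Here is a minimal sketch of the rock point in Python; the trajectory and the construction are illustrative assumptions, not anything from the post. Given any observed behavior, one can always write down a utility function whose unique optimum is the state the object actually moves toward:

```python
# Given any observed behavior, construct a utility function that
# "rationalizes" it: states visited later score higher, so the final
# state is the unique optimum. The toy state space is illustrative.

def utility_from_behavior(trajectory):
    """Return a utility function under which the observed trajectory
    is utility-maximizing, with its optimum at the final state."""
    rank = {state: i for i, state in enumerate(trajectory)}
    return lambda state: rank.get(state, -1)

# A rock rolling downhill, described by the heights it passes through:
rock_trajectory = ["height=3", "height=2", "height=1", "height=0"]
u = utility_from_behavior(rock_trajectory)

# The rock now "maximizes utility" by coming to rest at the bottom:
print(max(rock_trajectory, key=u))  # height=0
```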
Naturally, an optimizer can have not just one, but multiple utility functions. That makes the problem even worse, because then, all of those utility functions need to be aligned.
I took it as self-evident that a superintelligent optimizer with a utility function whose optimum does not contain any humans would put the universe in a state which does not contain any humans. Hence, if such an optimizer is developed, the entire human population will end and there will be no second chances.
One point deserving of being stressed is that this hypothetical super-optimizer would be more incentivized to exterminate humanity in particular than to exterminate (almost) any other existing structure occupying the same volume of space. In other words, the fact that we occupy space that could be re-structured into the target configuration defined by the optimum of the super-optimizer's utility function, and the fact that we both vitally require, and are ourselves made of, resources that are instrumentally valuable (there is hydrogen in the oceans that could be fused; there is an atmosphere that could be burned; there is energy in our very bodies that could be harvested) are not the only reasons why the super-optimizer would want to kill us. There is another important reason, namely avoiding competition. After all, we would have already demonstrated ourselves to be capable of creating an artificial super-optimizer, and we thus probably could, if allowed to, create another - with a different utility function. The already existing super-optimizer would have very little reason to take that risk.