Comments

Comment by green_leaf on A Problem With Patternism · 2020-06-14T14:09:13.597Z · LW · GW

I can think of two consequences of patternism. First, consciousness doesn't depend on any specific substance, only on the pattern. This is very important when judging the consciousness of mind uploads, AIs, robots, or aliens.

Second, if we are the pattern, we survive mind uploading, which also seems very important.

Comment by green_leaf on A Problem With Patternism · 2020-05-20T22:24:40.149Z · LW · GW

> I fear that there is no real way to do this objectively and since there will always be small mutations/errors in copying you can never know which of the other "you's" is the most like you.

You write that you can never know which pattern is objectively most like you, because there is no objective way of comparing patterns, and that this is a problem for patternism.

But you don't need to have an objective way of comparing patterns in order to be a pattern, so this isn't a problem for patternism after all.

Comment by green_leaf on Clarifying "AI Alignment" · 2019-06-05T14:32:10.278Z · LW · GW
> If the difference is mostly between "what H wants" and "what H truly/normatively values", then this is just a communication difficulty. For me adding "truly" or "normatively" to "values" is just emphasis and doesn't change the meaning.

So "wants" means a want more general than an object-level desire (like wanting to buy oranges), and it already takes into account the possibility of H changing his mind about what he wants if H discovers that his wants contradict his normative values?

If that's right, how is this generalization defined? (E.g., CEV defined it as "what H wants in the limit of infinite intelligence, reasoning time, and complete information.")

Comment by green_leaf on Clarifying "AI Alignment" · 2019-04-07T22:04:52.124Z · LW · GW

Are there any plans to later generalize this kind of alignment to include CEV or some other plausible metaethics, or should this be "the final stop"?