Posts

A tool for searching rationalist & EA webs 2023-09-29T15:23:33.255Z
A new place to discuss cognitive science, ethics and human alignment 2022-11-04T14:34:15.632Z
Survey: What (de)motivates you about AI risk? 2022-08-03T19:17:35.822Z

Comments

Comment by Daniel_Friedrich (Hominid Dan) on Consciousness as a conflationary alliance term for intrinsically valued internal experiences · 2023-09-15T19:48:11.742Z · LW · GW

People who study consciousness would likely tell you that it feels like the precise opposite - memetic rivalry/parasitism? That's because they talk about consciousness in a very specific sense - they mean qualia (see Erik Hoel's answer to your post). For some people, internalizing what they mean is extremely hard, and I don't blame them - my impression is that many illusionists have something like aphantasia applied to the metacognition about consciousness. They are right to be suspicious of something that feels ineffable and irreducible; however, that's the case with many fundamental concepts, as Rami has aptly pointed out.

However, I like your sample! My guess is that most "normal" people's intuitive definitions would be close to 1) lifeforce, free will or soul; or 2) thinking, responding to stimuli. The first category is in rivalry because it carries an esoteric connotation, which gives consciousness a stigma within academia. The second is in rivalry because people think they know what you mean, and you usually need to start the conversation by vigorously convincing them that they may not. :)

In contrast, I can actually imagine that for someone, the experience of reading any of your definitions could highlight the feeling of "I am conscious" that philosophers mean - although many of them confuse this feeling with the cognitive realization that it exists, or with the cognitive process that creates it, which is why those don't work as definitions.

Comment by Daniel_Friedrich (Hominid Dan) on A new place to discuss cognitive science, ethics and human alignment · 2022-11-04T16:28:02.492Z · LW · GW

Thanks for the response - lots of fun prompts!

Most importantly, I believe there is a "shouldness" to values; in particular, it sounds like a good defining feature of moral values - even though it might be an illusion that we get to decide them freely (but that seems beside the point).

I don't think it's clear that we don't get to edit our terminal values. I might be egoistic at my core, and yet I could decide to undergo an operation that would make me a pure utilitarian. It might be signalling or a computational mistake on my part, but I could. It could also be that the brain's algorithm can update the model of what it optimizes. For instance, it could be that the behavioral algorithms we choose are evaluated against a model of a "good life" whose representation of morality differs depending on what we're influenced by - which is what "choice" means if free will is an illusion.

> Why would we ever want to lock in values?

  1. In the case of AGI - because there's a strong case that the values an AGI develops by default are misaligned with what we - and potential future people - (would) care about. And some values will likely get locked in via AGI anyway; it's just a question of which.
  2. For the same reason the Founding Fathers wrote the constitution. The level to which something is locked in is a spectrum. Will MacAskill essentially suggests "locking in" the value of epistemic & moral humility, a principle that is supposed to update with more evidence.
  3. Therefore, the question of to what extent we want to lock in our present values is a big part of answering the question of which values we should lock in.