Communication, consciousness, and belief strength measures
post by Jakub Smékal (jakub-smekal) · 2024-02-17T05:45:42.834Z · LW · GW
Communication is a fascinating subject. It’s a way of transferring information from one place to the next (or, to invert a quote by Chris Fields, communication is physics). Now you may have noticed that I used the word “communication” in two slightly different ways. In the first sentence you might have thought I’d be talking about the way people communicate with each other, and in the next one I seem to have abstracted away from this base definition and applied it to a more abstract concept, physics (abstract in a very limited sense, though). The funny thing is, even though I’m using up far more space to explain the first two sentences I wrote, there is still ambiguity in what I’m saying; information is still being lost and transformed on its way from me thinking it up, to writing it down in particular words, to you reading those words and interpreting them in your own unique way. While the physics side of communication is certainly fascinating, I’ll just refer to this course for now and stick to communication between people, pondering for a bit the indirect costs of the information we lose in our daily communication (that kind of rhymes, doesn’t it?). *TL;DR: I ended up talking about “consciousness”.*
There are a lot of angles to approach this topic from, but I’ll focus on one that seems more and more prevalent in both academic literature and daily conversation, namely the use of the word “consciousness” (and we’ll try to vaguely generalize from there). Consciousness comes up more and more often as every new AI model seems to do more of the things we thought biological systems were uniquely capable of (#isSORAconscious?). What I always find thought-provoking is that a lot of people use the word “consciousness” quite freely in their work, not only in random posts online but also in academic papers, and I’m always left wondering: what do they mean by “consciousness”? I’m pretty sure they don’t mean the dictionary definition that pops up with a Google search, “the state of being awake and aware of one’s surroundings”, because what they describe often involves not only awareness of one’s surroundings but also awareness of the self, a quality we perceive in ourselves and that (some of us) assume other biological beings also exhibit (e.g. dolphins passing the mirror test). But even this rather vague definition doesn’t seem to fully capture what a lot of people mean by the term consciousness. Or maybe it does, but then there’s the secondary problem that a wide range of prior beliefs constrain what the term may or may not encapsulate. For instance, looking at the camps in the “Is or can AI be conscious?” debate, it’s clear that some people claim yes, it’s conscious (which raises the question of what definition of “consciousness” is being employed), others say no, AI isn’t conscious right now but may be in the future, and others say no, AI isn’t conscious and can never be conscious, because X. The problem goes deeper, of course, because we can’t fully disentangle the statements from the people making them, especially when there are other implications connected to one’s position in this debate, e.g. moral implications (to what extent is consciousness related to morality? Or, even more generally [to give the active inference angle], to what extent will my belief in X change my actions [i.e. if I believe X is “conscious”, will I plan my actions in a particular way]?).
Another question concerns our preferences about the type of communication we use. We all more or less operate on the assumption that our shared language embedding is roughly the same (which it probably is, otherwise we couldn’t get much of anything done beyond non-verbal communication). But maybe there’s a correlation between the abstractness of a concept and the variance of the distribution of all possible mappings to simpler concepts. To elaborate on that last statement with an example: love is a pretty abstract concept, but we all ground it in simpler, more tangible things like family, friends, romantic partners, etc., and those groundings are fundamentally local to our individual experiences (there’s of course commonality between our experiences, but there are also multi-scale effects like culture, history, etc. that introduce more variety). Would a debate about whether AI can experience love be of a similar kind to the debate about whether it is conscious?
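To make that “variance of the distribution of mappings” hunch slightly more concrete, here’s a rough sketch of one way it could be operationalized (entirely my own toy construction, with no empirical claim attached): embed a handful of concrete groundings of a concept with an off-the-shelf sentence-embedding model and measure how spread out the resulting vectors are. It assumes the `sentence-transformers` package and its `all-MiniLM-L6-v2` model, and the grounding lists are made-up placeholders.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Made-up groundings: a more abstract concept ("love") vs. a more concrete one ("chair").
groundings = {
    "love": ["family dinners", "a long-term romantic partner", "a close friendship",
             "caring for a pet", "self-acceptance"],
    "chair": ["a wooden kitchen chair", "an office swivel chair",
              "a folding camping chair", "an armchair", "a bar stool"],
}

for concept, examples in groundings.items():
    vecs = model.encode(examples)            # one embedding vector per grounding
    centroid = vecs.mean(axis=0)
    # Mean distance of the groundings from their centroid as a crude "spread" measure.
    spread = np.linalg.norm(vecs - centroid, axis=1).mean()
    print(f"{concept}: mean distance from centroid = {spread:.3f}")
```

If the hunch is right, the more abstract concept should tend to show the larger spread, though with lists this small the number is obviously closer to an anecdote than a measurement.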
Another thought, now moving slightly away from the above, relates to the media we use for communication. There appears to be a bias toward improving the speed of our communication, but not necessarily its quality. To some extent these two measures are probably related, though if we take the evolution of communication technology as an example, it’s pretty reasonable to describe it as an effort to improve the latency of human communication more than anything else (that is to say, the internet allows us to communicate much faster than by sending postcards, but the content we send each other is still the same language [though language itself evolves, of course, and probably faster with faster communication]). Faster communication can also probably lead to local minima. We might even classify the whole consciousness debate as a local minimum; at least looking at the distribution of occurrences of the word consciousness, I reckon it’s skewed towards instances where a particular definition was not proposed or widely adopted (epistemic status: speculative). I wonder how much of the time spent stuck on a particular definition could be saved by attaching a value between 0 and 1 indicating the strength of belief in a statement relative to the standard definitions of its words (or their assumed shared embeddings). As an example, I could write *There appears to be a bias toward improving the speed of our communication, but not necessarily its quality.* [0.65], which would mean I hold that belief at approximately 65% strength. While I’m not confident this method will be adopted [0.33], it could be a pretty fun way to quantify how much you’re abstracting away the “meaning” of words in a particular context.
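Here’s a minimal sketch of what consuming such annotations could look like: a few lines of Python (my own toy illustration; the regex, the `extract_beliefs` helper, and the exact bracketed-number format are just assumptions about how the notation might be written) that pull out each annotated sentence together with its belief strength.

```python
import re

# A sentence (no brackets or sentence-ending punctuation inside it), its closing
# punctuation, then a bracketed belief strength between 0 and 1.
ANNOTATED = re.compile(
    r"(?P<claim>[^.?!\[\]]+[.?!])\s*\[(?P<strength>0(?:\.\d+)?|1(?:\.0+)?)\]"
)

def extract_beliefs(text: str) -> list[tuple[str, float]]:
    """Return (claim, belief strength) pairs for every annotated sentence."""
    return [(m["claim"].strip(), float(m["strength"])) for m in ANNOTATED.finditer(text)]

sample = (
    "There appears to be a bias toward improving the speed of our communication, "
    "but not necessarily its quality. [0.65] "
    "I'm not confident this method will be adopted. [0.33]"
)

for claim, strength in extract_beliefs(sample):
    print(f"{strength:.2f}  {claim}")
```

Once the pairs are extracted you could do whatever you like with them, e.g. track how your stated belief strengths drift over time, or flag claims you published with no annotation at all.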