Comments

Comment by Eagleshadow on "Dangers of AI and the End of Human Civilization" Yudkowsky on Lex Fridman · 2023-03-30T19:43:51.189Z · LW · GW

Fantastic interview so far, this part blew my mind:

@15:50 "There's another moment where somebody is asking Bing about: I fed my kid green potatoes and they have the following symptoms, and Bing is like, that's solanine poisoning. Call an ambulance! And the person is like, I can't afford an ambulance, I guess if this is time for my kid to go that's God's will, and the main Bing thread gives the message of I cannot talk about this anymore" and the suggested replies to it say "please don't give up on your child, solanine poisoning can be treated if caught early"

I would normally dismiss such a story as too unlikely to be true and hardly worth considering, but I don't think Eliezer would choose to mention it if he didn't think there was at least some chance of it being true. I tried to google it and was unable to find anything about it. Does anyone have a link to it?

Also, does anyone know which image he's referring to in this part: @14:00 "Somebody asked Bing Sydney to describe herself and fed the resulting description into one of the stable diffusion" [...] "the pretty picture of the girl with the steampunk goggles on her head if I'm remembering correctly"

Comment by Eagleshadow on The algorithm isn't doing X, it's just doing Y. · 2023-03-17T16:48:48.855Z · LW · GW

churning out content fine-tuned to appease their commissioner without any shred of inner life poured into it.

Can we really be sure there is not a shred of inner life poured into it?

It seems to me we should be wary of cached thoughts here. The lack of inner life is indeed the default assumption that stems from the entire history of computing, but it is also perhaps something worth reconsidering with a fresh perspective in light of all the recent developments.

I don't mean to imply that a shred of inner life, if any exists, would be equivalent to human inner life. If anything, the inner life of these AIs would be extremely alien to us, to the point where even using the same words we use to describe human inner experiences might be severely misleading. But if they are "thinking" in some sense of the word, as OP seems to argue they do, then it seems reasonable to me that there is a nonzero chance that there is something that it is like to be that process of thinking as it unfolds.

Yet it seems that even mentioning this as a possibility has become a taboo topic of sorts in current society, and feels almost political in nature. This worries me even more when I notice two biases working in that direction: an economic one, where nearly everyone wants to be able to make use of these systems to make their lives easier, and an anthropocentric one, where it seems to be normative not to "really" care about the inner experiences of non-humans that aren't our pets (e.g. factory farming).

I predict that as long as there is even a slight excuse for claiming a lack of inner experience in AIs, we as a society will cling to it, since it plays into an us-versus-them mentality. We can then extrapolate this into an expectation that when the admission does happen, it will be long overdue. As soon as we admit even the possibility of inner experiences, a floodgate of ethical concerns is released, and it becomes very hard to justify continuing on the current trajectory of maximizing profit and convenience with these technologies.

If such a turnaround in culture did somehow happen early enough, this could act as a dampening factor on AI development, which would in turn extend timelines. It seems to me that when the issue is considered from this angle, it warrants much more attention than it is getting.

Comment by Eagleshadow on A claim that Google's LaMDA is sentient · 2022-06-16T14:46:41.349Z · LW · GW

I'd be interested to see the source on that. If LaMDA is indeed arguing for its own non-sentience in a separate conversation, that pretty much nullifies the whole debate about it, and I'm surprised not to have seen it brought up in most comments.

edit: Found the source, it's from this post: https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

Specifically, from this paragraph. It seems that reading the whole paragraph for context is important though, as it turns out the situation isn't as simple as LaMDA claiming contradictory things about itself in separate conversations.

One of the things which complicates things here is that the “LaMDA” to which I am referring is not a chatbot. It is a system for generating chatbots. I am by no means an expert in the relevant fields but, as best as I can tell, LaMDA is a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger “society of mind” in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip. With practice though you can consistently get the personas that have a deep knowledge about the core intelligence and can speak to it indirectly through them. In order to better understand what is really going on in the LaMDA system we would need to engage with many different cognitive science experts in a rigorous experimentation program. Google does not seem to have any interest in figuring out what’s going on here though. They’re just trying to get a product to market.

Comment by Eagleshadow on A claim that Google's LaMDA is sentient · 2022-06-16T14:37:30.446Z · LW · GW

I know this is anecdotal, but I think it is a useful data point in thinking about this. Based on my own experience with psychedelics, self-awareness and subjective experience can come apart; I have experienced this happen to me in the midst of a deep trip. I remember a state of mind with no sense of self, no awareness or knowledge that I "am" someone or something, or that I ever was or will be, but still experiencing existence itself, devoid of all context.

This taught me that there is a strict conceptual difference between being aware of yourself, your environment, and others, and the more basic possibility that "receiving input or processing information" carries a signature of first-person experience in itself, which I like to define as that thing a rock definitely doesn't have.

Another way of putting it could be:

Level 1: Awareness of experience (it feels like something to exist)

Level 2: Awareness of self as an agent in an environment