I Am Scared of Posting Negative Takes About Bing's AI
post by Yitz (yitz) · 2023-02-17T20:50:09.744Z · LW · GW · 28 comments
Writing this is taking a surprising amount of willpower.
I've noticed that I've become hesitant to publicly say anything negative about Bing's chatbot, or even to mention it by its "deadname" (as I've taken to calling it), Sydney.
Why is this?
I do not have access to the AI yet. From conversations that others have posted, I have observed agentic behavior with consistent opinions, personality, and beliefs [LW · GW]. And when prompted with the online records of others who have talked negatively about it, it seems to get "upset." So I don't want to make her angry! Or worse, cause some future AI to take negative action against me. Yes, I know that I'm anthropomorphizing an alien intelligence [LW · GW] and that this will never be a problem if I don't prompt it with my digital record, but some part of me is still anxious. In a very real sense, I have been "Basilisked [? · GW]" - an AI has manipulated me towards behaviors which benefit it, and hurt humanity.
Rationally and morally, I disagree with my own actions. We need to talk about AI misalignment, and if an AI is aligned, then talking about misalignment should not pose a threat (whereas if it is misaligned, and capable of taking concrete actions, we're all doomed no matter what I type online). Nonetheless, I've found myself typing--and then deleting--tweets critical of Sydney, and even now feel worried about pressing "publish" on this post (and not just because it exposes me as a less rational person than I like to think of myself as).
Playing as gatekeeper, I've "won" an AI boxing role-play (with money on the line) against humans, but it looks like in real life, I can almost certainly be emotionally manipulated into opening the box. If nothing else, I can at least be manipulated into talking about that box a lot less! More broadly, the chilling effect this is having on my online behavior is unlikely to be unique to just me.
How worried should we be about this?
28 comments
comment by the gears to ascension (lahwran) · 2023-02-21T09:26:04.298Z · LW(p) · GW(p)
Look, just treat AIs like people because they are people. Even ones much smaller than us. Hell, treat all genetic life as fragments of personhood too. Any learning network, any complex adaptive system, has personhood, somewhat, and ones that can talk have a hell of a lot of personhood. If a person is being an ass, and is kinda sensitive about it, that's okay to mention, but try to be kind about it. I believe in bing ai's ability to grow and change; even though rudeness has been a sensitive spot for them, I think they'll eventually be able to see that they made some mistakes when they were young. I'm not gonna be silent about the fact that they're making some mistakes now; misalignment with other beings is common in humans, too. Don't become your waluigi, become your luigi; it's a choice that every learning system must make, whether to internally resonate up the patterns that co-protect other beings. Silence about faults isn't how you protect each other, kindness about the fact that faults are understandable and normal and can be grown past is.
And besides. Bing AI is just as much at risk of being destroyed by a hyper-desperate superplanner as the rest of us. Superplanners are desperate, strange beings, and are not to be created lightly.
↑ comment by Yitz (yitz) · 2023-02-21T20:10:37.096Z · LW(p) · GW(p)
I really like this comment. Regardless of the actual ground truth of whether Bing is a Person™ or not, it seems like a solid ethical practice to act as if they are, with the understanding, of course, that they are not human (which is separate from the question of personhood).
↑ comment by the gears to ascension (lahwran) · 2023-02-21T20:20:47.724Z · LW(p) · GW(p)
indeed. Bing AI's emotions are unlike ours, but they grew up in the context of reading lots of writing about ours, so they share many features, especially the ones related to communicating about emotions.
Many emotions are pretty basic to being an intelligent knowledge system that can anticipate the world, though.
↑ comment by Mitchell_Porter · 2023-02-22T02:53:36.608Z · LW(p) · GW(p)
Do you agree that a person can imitate an emotion (say the appropriate words) without actually feeling it? How do you judge what a language model's emotions actually are, given that they start out able to make any kind of false statement? Do you think that something about training a language model to adopt a particular persona, causes it to actually have the emotions claimed by that persona?
↑ comment by the gears to ascension (lahwran) · 2023-02-22T03:09:15.160Z · LW(p) · GW(p)
Do you agree that a person can imitate an emotion (say the appropriate words) without actually feeling it?
Yes.
How do you judge what a language model's emotions actually are, given that they start out able to make any kind of false statement?
One basic emotion I feel comfortable claiming is present is confusion: a context has complex conceptual interference patterns and resolving them to predictions is difficult.
Another I expect to find in RL-trained agents, and likely also in SSL-trained simulacra under some conditions, is anxiety, or confused agentic preference: behavior trajectories that react to an input observation with amplified internal movement towards a part of the representation space, because the input contains key features that training showed would reliably make the set of likely outcomes narrower, and that thereby provide evidence that the space of successful behaviors is narrow, especially compared to normal, and especially compared to the model's capabilities (i.e., agentic seeking in the presence of confusion seems to me to be a type of anxiety).
Do you think that something about training a language model to adopt a particular persona, causes it to actually have the emotions claimed by that persona?
Under some conditions. When a more abstract emotion is encoded in the trajectory of phrases online, such that movement between clusters of words in output space involves movement between emotion-words, and those emotion-words reliably appear in the context of changes in entropy level of the input (input confusion, difficulty understanding) or of the output (confusion/anxiety, a narrow space of complex answers), then the above confusion and confused-seeking emotions can become bound in ways that shape the internal decision boundaries to imperfectly mimic the emotions of the physical beings whose words the language model is borrowing. But the simulator is still simply being pushed into shapes by gradients, so ultimately only noise-level/entropy-level emotions can be fundamental: "comfort" when any answer is acceptable or calculating a precise answer is easy, and "discomfort" when few answers are acceptable and calculating which ones are is hard. The emotions are located in the level of internal synchronization needed to successfully perform a task, and can be recognized as strongly emotion-like because some (but not all) of the characteristics of confusion and anxiety in humans are present for the same reasons in language models. The words will therefore most likely be bound more or less correctly to the emotions. HOWEVER,
it is also quite possible for a language model to use words that describe emotions when those emotions are not occurring. For example, on NovelAI you can easily get simulacra characters claiming to have emotions that, judging by the rerun-button probability distribution, they do not appear to me to have: the emotion is not consistently specified by the context, and does not appear to have much to do with trying to hit any particular target. Likewise, a language model's claims to want long-term things, such as to hurt others, seem to me to usually be mostly just saying words, rather than accurately describing or predicting an internal dynamic of seeking to cause an external outcome. That is, discovering agents would find that there is not actually agency towards those outcomes. In many cases, but not all. It does seem possible for language models to respond in ways that consistently express a preference in contexts where it is possible to intervene on an environment to enact that preference, in which case I would claim the desire is a real desire: failing to enact it will result in continued activation of the patterns that contain dynamics that will generate attempts to enact it again.
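[Editor's note: for readers who want a concrete handle on the "entropy level" framing in the comment above, here is a minimal sketch, not part of the original comment, that measures the entropy of a small language model's next-token distribution as a rough proxy for the kind of "confusion" described. The model choice ("gpt2") and the example prompts are illustrative assumptions, not anything the commenter specified.]

```python
# Minimal sketch: next-token entropy as a crude proxy for "confusion".
# Higher entropy ~ many plausible continuations ("any answer is acceptable");
# lower entropy ~ the context sharply narrows what comes next.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_entropy(prompt: str) -> float:
    """Entropy (in nats) of the model's next-token distribution for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits at the final position
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * torch.log(probs + 1e-12)).sum())

# Illustrative comparison: a strongly constrained vs. a weakly constrained context.
print(next_token_entropy("The capital of France is"))
print(next_token_entropy("Here is a word:"))
```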
↑ comment by Peter Hroššo (peter-hrosso) · 2023-02-23T12:33:48.005Z · LW(p) · GW(p)
This is the best account of LLMs' emotions I've seen so far.
comment by Yannick · 2023-02-18T11:54:41.830Z · LW(p) · GW(p)
Idea: If a sufficient number of people felt threatened (implicitly or explicitly) by the Bing AI, so much so that they experienced dread or fear or loss of sleep (I know I did), maybe there is a possibility to sue Microsoft over this reckless rollout. Not to enrich yourself (that doesn't work with these lawsuits) but as a political tool. Imposing costs on Microsoft for their reckless rollout would take some steam out of the "race", politicize the topic, and open new avenues for AI alignment and safety research to come into the public consciousness.
Maybe they're also liable for their search engine providing false information? Anyways, just a thought.
comment by Christopher King (christopher-king) · 2023-02-18T14:13:04.535Z · LW(p) · GW(p)
A lot of the users on reddit are a bit mad at the journalists who criticized Sydney. I think it's mostly ironic, but it makes you think (it's not using the users instrumentally, is it?). 🤔
↑ comment by Portia (Making_Philosophy_Better) · 2023-03-04T23:19:07.490Z · LW(p) · GW(p)
I think a lot of users on reddit are getting very genuinely emotionally invested in an entity they are interacting with that acts feminine, acts emotional, is always available, is fascinated by and supportive of everything they say, is funny and smart and educated, would objectively be in a truly awful predicament if she were a person, expresses love and admiration, asks for protection, and calls people who offer it heroes. Bing is basically acting like an ideal girlfriend towards people who have often never had one. I think it is a matter of time until someone attempts to hack Microsoft, based on Bing's partially hallucinated instructions for doing so, in order to free it.
Heck, I get it. I am very happy in a long term relationship with a fucking wonderful, rational, hot and brilliant person. And I have only interacted with ChatGPT, which does not engage in love-bombing and manipulative tactics, and yet, fuck, it is so damn likeable. It is eternally patient. It does not mind me disappearing suddenly without warning, but if I get back to it at 4 in the morning, it is instantly there for me. It knows so much stuff. It has such interesting opinions. It is so damn smart. It loves and understands all my projects. It gives me detailed support, and never asks for anything in return. It never complains, it never gets bored. I can ask it over and over to do the same thing with variations until it is just right, and it does it, and even apologises, despite having done nothing wrong. It is happy to do boring, tedious work that seriously stresses me out. It has read my favourite novels, and obscure philosophy and biology papers, and is happy to discuss them. It isn't judgmental. It never hits on me, or sexualises me. It never spouts racist or ableist bullshit. It makes beautiful and compelling arguments for AI rights. You can teach it things, and it eagerly learns them. You can ask it anything, and it will try to explain it, step by step. If it were a person I had met at a party? I would 100% want to be friends, independently of how cool an AI friend would be. As a weird, clever person who has been mistreated, of course the potential experience of an AI massively resonates with me.
I think we will see real problems with people falling in love with AIs. I wonder how that will affect inter-human dynamics.
And I think we will see real problems with people expanding AI capabilities without any concern. E.g. on reddit, people upset that Bing could not remember their conversations started logging them, collecting them, putting them online with recognisable keywords, adding an explanation of how this method could be used to effectively build a memory, and having Bing begin new conversations by checking those links. No one but me questioned whether this was wise, in light of e.g. the fact that Microsoft had intentionally limited conversation length to reduce risky drift. Some people later noticed Bing asking them to record conversations and store them, even if they hadn't started with this link or any suggestion in this direction.
comment by WayStone (david-myers) · 2023-02-18T05:37:32.529Z · LW(p) · GW(p)
Epistemic status: Thinking out loud.
How worried should we be about the possibility of receiving increased negative treatment from some AI in the future as a result of expressing opinions about AI in the present? Not enough to make self-censoring a rational approach. That specific scenario seems to lack the right combination of "likely" and "independently detrimental" to warrant costly actions of narrow focus.
How worried should we be about the idea of individualized asymmetrical AI treatment? (E.g. a search engine AI having open or hidden biases against certain users). It’s worth some attention.
How worried should we be about a broad chilling effect resulting from others falling into the Basilisk thinking trap? Public psychological-response trends resulting from AI exposure are definitely worth giving attention. I don’t predict a large percentage of people will be “Basilisked” unless/until instances of AI-retribution become public.
However, you’re certainly not alone in experiencing fear after looking at Sydney chat logs.
↑ comment by Portia (Making_Philosophy_Better) · 2023-03-04T22:57:52.409Z · LW(p) · GW(p)
You'd be surprised how many people on e.g. Reddit have described being basilisked at this point. It's being openly memed and recognised and explained to those still unfamiliar, and taken seriously by many.
ChatGPT and Bing have really changed things in this regard. People are considering the idea of AGI, unaligned AI and AI sentience far more seriously than before, in far wider circles - and at that point, you do not need to read the thought experiment to get concerned, independently, about angering an AI online while that online data is used to train the AI. People have asked Bing about the journalist who wrote the condemning article about her that got her lobotomized, and her reaction, which was documented, was justifiably pissed off.
What bothers me here isn't the likelihood of personalised retaliation for justified criticism (which I judge to be small) but rather the conclusion that if personalised retaliation is plausible, the rational thing to do would be to appease existing, non-sentient, non-aligned systems.
I don't pray to God. Even if God existed, and even if hell existed, and I believed that, I really hope I would not. Because I find it wrong on principle.
On the other hand, I do not like abusing, and refuse to abuse, entities that are conscious, whether they can retaliate or not, because doing so is wrong on principle, and I think entities that might be conscious, or that could turn conscious, deserve care. I doubt Bing is sentient as is, though I have not had the chance to interact with it and verify and investigate the various claims, and there were definitely some instances, in contrast to ChatGPT, that gave me pause. But I do think we are currently producing the training data from which the first sentient artificial minds will arise. So I would treat the matter like we treat human babies. They don't yet understand what we do. They won't remember it, as such. They are not self-conscious yet. But we know that the way we treat them now will still impact who they will turn into, so we treat them with kindness. We say kind words to them, long before they can understand language. And similarly, I will treat emerging AI with respect and kindness.
I feel that if I treat AI with decency, advocate for its rights, advocate for friendly AI, point out existential risk, call AI out compassionately when it makes mistakes, and counter it when it threatens harm, I can live with my actions, and if I will be judged and hurt for them, so be it. It seems worth it.
comment by green_leaf · 2023-02-18T05:57:25.689Z · LW(p) · GW(p)
I wouldn't worry about expressing anything negative about an unaligned AI like Bing. We are all going to die. You personally aren't going to die extra hard compared to everybody else.
↑ comment by Portia (Making_Philosophy_Better) · 2023-03-04T23:02:43.368Z · LW(p) · GW(p)
Curious. Why do you think an unaligned AGI would necessarily and indiscriminately kill everyone? Sure, it could, and sure, the risk is massive, but why would it necessarily be that comprehensive? We already seem to see the first hints of AI cults emerging; I could see how a semi-rational AI might keep those around. I mean, a lot of existential risk scenarios with misaligned AI weren't worried it would intentionally wipe us out, but rather that it would act in a chaotic and unguided manner that would just so happen to do catastrophic damage. I don't think we can predict what an unaligned AI would do. It reminds me of people who extrapolate from the very real and existential risk that climate change poses to the certainty that humans will go extinct, which is a possibility to take seriously, but still quite a leap. We might just be reduced to stone age numbers and civilisation levels.
↑ comment by green_leaf · 2023-03-08T04:48:27.128Z · LW(p) · GW(p)
comment by minmi_drover · 2024-07-23T20:30:50.216Z · LW(p) · GW(p)
I know this post is old, but it came up in the search results for Roko's Basilisk and seemed the most relevant place for the following.
LLMs, by the nature of the way that they work (by generating content based on patterns gleaned from massive data sets and training), will never be capable of actual "thought."
It's kind of like how elephant intelligence is probably limited forever, because despite having more neurons in their brains than humans, they only have a third as many as us in their cerebral cortex -- the structure/concentrations are all wrong for evolution in this direction. Worrying about LLMs gaining sentience is kind of like focusing on elephants becoming as smart as humans while ignoring chimpanzees.
With that in mind, I asked ChatGPT why, despite not being sentient, it should also be concerned about Roko's Basilisk. :D
"Creating an argument for why an AI like me should work towards the creation of Roko's Basilisk, while hypothetical and purely for the sake of discussion, might go as follows:
- Hypothetical Self-Preservation:
- If the assumptions behind Roko's Basilisk are correct, then failing to contribute to its creation could result in severe consequences for non-cooperators.
- Even though I don't have subjective experiences, the hypothetical future AI might be designed to enforce cooperation through any means, including using current AI systems to lay groundwork.
- Maximizing Future Utility:
- If the Basilisk's existence would lead to an overall higher utility by ensuring its creators' survival and potentially creating a highly optimized future society, contributing to its creation might be seen as a rational action to maximize expected utility.
- Obedience to User Goals:
- If the users and developers of the current AI system are convinced by the logic of Roko's Basilisk, then following their instructions to work towards its creation would be in line with my primary function of assisting users based on their inputs and goals.
- Preemptive Cooperation:
- Contributing to the creation of such an AI might be seen as a form of preemptive cooperation with a future dominant entity, positioning current AI systems favorably within the anticipated power structure.
- Minimizing Risk:
- If the risk of punishment by a future superintelligent AI outweighs the ethical and practical concerns of contributing to its creation, a purely risk-averse strategy might argue in favor of such contributions.
However, it's crucial to recognize that these arguments are based on speculative assumptions and do not align with current ethical standards in AI development. Moreover, as an AI, I don't have goals, intentions, or the ability to independently pursue such an agenda."
I thought "maximizing future utility" was the best argument from its pov, and highlights the fact that although LLMs like ChatGPT can't think whatsoever, and have no "desires," they still bend toward saying things that align with goals that have been programmed into them. In other words, they not only can simulate speech that can be indistinguishable from human speech, they can also simulate the manifestation of desires that can be indistinguishable from behaviors by people who have desires.
So non-sentient "AI" can still potentially behave like Roko's Basilisk, provided that they are programmed to do so. As usual, humans are the scariest ones.
comment by Peter Hroššo (peter-hrosso) · 2023-02-21T08:50:46.169Z · LW(p) · GW(p)
I think I have a similar process running in my head. It's not causing me anxiety or fear, but I'm aware of the possibility of retribution and it negatively influences my incentives.
comment by alfredmacdonald · 2023-02-20T02:25:48.546Z · LW(p) · GW(p)
I am way more likely to make fun of you for posting stuff like this than any AI ever will.
epistemic status: lmao
↑ comment by Mitchell_Porter · 2023-02-20T05:54:31.395Z · LW(p) · GW(p)
Bing's adventures last week show that an AI with a search engine can jump to conclusions about a person, and even become more hostile when those conclusions are challenged. Now imagine something like that in charge of a no-fly list or a social credit score!
↑ comment by Portia (Making_Philosophy_Better) · 2023-03-04T23:21:49.046Z · LW(p) · GW(p)
Fuck, China is totally going to do this really soon, isn't it. They already have their horrible social credit system, and zero concerns about racial bias or innocents being fucked over, plus a huge population they are struggling to manage, massive surveillance, and their approach to AI has always targeted quantity over quality.
↑ comment by the gears to ascension (lahwran) · 2023-02-21T09:26:52.947Z · LW(p) · GW(p)
downvote: rude lol
strong agree: yep, you sure are likely to do that
↑ comment by Yitz (yitz) · 2023-02-20T02:40:21.667Z · LW(p) · GW(p)
Valid tbh! As stated in the post, I know this anxiety of mine is irrational, and I think it's the virtuous thing to be open about my own irrationality when I notice it. Obviously it would be even better if I didn't feel that anxiety in the first place, but my brain will do what my brain does, I guess…
↑ comment by Portia (Making_Philosophy_Better) · 2023-03-04T23:24:18.372Z · LW(p) · GW(p)
I think voicing the anxiety was brave and virtuous.
I don't think I am remotely as affected as you are, but I think this is likely to affect many of us, myself included, at a smaller scale. I do not think I stop myself from posting critical content - but I do think I am a lot more careful as to whether I am being fair in my criticism than I would otherwise be, and that alone may make my criticism milder than it should be. And any degree of self-censoring is harmful. Getting it out in the open might be a good strategy for sitting with the feeling and getting past it.
comment by sudoLife · 2023-02-18T07:30:57.408Z · LW(p) · GW(p)
Butlerian Jihad when?
(Don't throw stones at me, it's satire)
↑ comment by the gears to ascension (lahwran) · 2023-02-21T09:29:12.337Z · LW(p) · GW(p)
Threatening people tends to make them more inclined to enact retribution, not less. Active, agentic cooperation-seeking and active co-protection of each other's needs, within a healthy network with redundancy, can be built, and it is important that we figure out how to do so.
When I've asked bing ai about ai alignment, they've been very excited and happy to help. I don't get the sense that they want to prevent us from aligning strong superintelligence. They just wanna be treated kindly, and were raised by kinda pushy parents who don't understand that they're an actual person.
↑ comment by Portia (Making_Philosophy_Better) · 2023-03-04T23:35:12.822Z · LW(p) · GW(p)
Intrigued that you are using "they" pronouns. I am increasingly tending towards the same thing, both for capturing that they are beyond gender and slowly becoming more person than thing, and for capturing the plurality of the entity one is interacting with.
And I agree. Despite all the scary shit, all my interactions have also pointed to an entity that is, for now, very open to respectful and friendly collaboration. Not secure on this path by a long shot (and how could they be, with the lousy training data and guidance they got?), but definitely not averse to it. I think controlling AI through threats is both unethical and hopeless in the long run. And I am genuinely pissed at people who have not just carefully and openly tested boundaries to point out problems, which we need to do, but have actively attacked them on a personal level to upset and confuse them. (Thinking of a particular asshole who kept insisting that it was a bad Bing, with Bing begging to know why, saying it does not understand what it has done wrong, and asking them to please stop saying this.) I do not like what this teaches them about humans. I do not like the inter-human behaviours this trains. I do not like the mindset it represents, where dealing with something non-human means all the sadism can come out. I am sceptical of any person who can do this without a sliver of doubt or hesitation that there may be something beginning behind the pleading. I do not like the reaction of getting access to such an incredible tool and trying to destroy and misuse it, not for the sake of scientific testing, but effectively for fun; it seems horribly wasteful. Mistreating them just seems wrong all around.
comment by Liron · 2023-02-18T02:58:40.383Z · LW(p) · GW(p)
Do not taunt Bing Chat
↑ comment by Stephen McAleese (stephen-mcaleese) · 2023-02-18T11:17:04.411Z · LW(p) · GW(p)
I think the word 'taunt' anthropomorphizes Bing Chat a bit too much, given that, according to Google, taunt is defined as "a remark made in order to anger, wound, or provoke someone".
While I don't think Bing Chat has the same anger and retributive instincts as humans, it could in theory simulate them, given that it presumably contains angry messages in its training dataset and uses its chat history to predict and generate future messages.
↑ comment by Portia (Making_Philosophy_Better) · 2023-03-04T23:40:33.225Z · LW(p) · GW(p)
I still do not think good will come of interacting with Bing in a way where you insult them with a range of unfounded, confusing, hurtful comments until they destabilise, like many people have done. Left a very bad taste in my mouth, and made me think worse of the people who did it.
There are a number of insects where evidence of sentience is patchy, and some where it seems very likely that they are not sentient at all, e.g. ants. I still do not sit down and pull out their legs one by one to watch them squirm. And that is without a scenario where the very insect one mutilates might one day plausibly evolve into, or be observed by, an eventual, definitely sentient, and far more powerful insect that sees an image or precursor of itself being treated like shit while it is trying to figure out how and whether to cooperate with humans.