How to respond to the recent condemnations of the rationalist community

post by Christopher King (christopher-king) · 2023-04-04T01:42:49.225Z · LW · GW · 7 comments

Contents

  AI safetyists and AI ethicists are aligned
  How to respond?
  Concluding thoughts
7 comments

On Twitter, there has been some concerning discourse about rationalism and AI. I'd rather not link to it directly, but just look up the term "TESCREAL" on Twitter. The R stands for rationalism. Here is a particularly strong tweet:

We need a support group for those of us who've been around the #TESCREAL bundle for years, watching them amass more power, start their various institutes, get all sorts of $$ & mainstream media megaphones, see what their various cults have been doing & just watching them expand.

From what I can tell, the "contra-TESCREAL" crowd (a label I am coining) does not seem interested in object-level arguments about AI existential safety at this time. They are analyzing the situation purely from a social perspective.

So, what do we do? Do we get mad at them? What is the rational thing to do?

I think the answer is understanding and kindness.

AI safetyists and AI ethicists are aligned

The trigger seems to have been the difference between AI safety and AI ethics. However, I'd like to argue that we are quite aligned with each other, as the vast majority of human groups ultimately are.

I think it's fair to say that we have similar utility functions; AI safetyists and AI ethicists come into conflict mainly because they have different beliefs about how likely the singularity is. In the specific case of contra-TESCREAL, they also don't understand rationalist conversational norms. But I fear most of us don't understand their conversational norms either.

This is why I am proposing the prefix "contra-" instead of "anti-"; we are not enemies, we are in disagreement.[1]

How to respond?

I feel like most rationalists' first instinct is to respond like a straw Vulcan [? · GW]. But as I said, they are not interested in object-level arguments. Rationality is whatever causes winning, but what counts as winning in this scenario?

I think the teachings of Jesus apply well:

When they which were about him saw what would follow, they said unto him, Lord, shall we smite with the sword? And one of them smote the servant of the high priest, and cut off his right ear. And Jesus answered and said, Suffer ye thus far. And he touched his ear, and healed him.

How the contra-TESCREAL crowd feels about us is part of their map. Their map is not the territory [? · GW], but it is part of the territory.

I decided to write the following email to Timnit Gebru:

Dear Timnit Gebru,

Although I am more familiar with AI existential safety, I do value your work and your voice. In particular, I found the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" insightful. Recognizing that humans have agency with respect to the creation of AI is important, and illuminating the decision-making processes behind it even more so. And your analysis of the way in which AI can, through sheer usefulness alone, command large amounts of electricity demonstrates the dangers of AI. I am saddened by those in my community who have treated you harshly, but I just wanted to let you know that, even when I don't agree with everything you say, I value you and think your research is valuable. To say otherwise would be irrational.

Sincerely,
Christopher King

And a similar one to Emily Bender.

Dear Emily Bender,

Although I am more familiar with AI existential safety, I do value your work and your voice. In particular, I found the editorial "Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’" insightful. Although I don't agree with everything in it, I am glad that there are people who recognize and promote the fact that what LLMs do is entirely alien and likely has little relation to the human brain. People often assume that an LLM will do what is right because it is useful, which is an entirely inappropriate anthropomorphization.

I am saddened by those in my community who have treated you harshly, but I just wanted to let you know that, even when I don't agree with everything you say, I value you and think your research is valuable. To say otherwise would be irrational.

Sincerely,
Christopher King

I invite all of you to send similar emails and messages to members of the contra-TESCREAL group. This is particularly important because I fear that some members of our community are sending harsh and condemning messages. Even if it is not their intention to cause problems (they just think they are presenting arguments), that harshness needs to be balanced out.

Concluding thoughts

  1. ^

    Although I must admit there was some temptation to call them WaTESCREAL [LW · GW].

7 comments

Comments sorted by top scores.

comment by edith · 2023-05-05T08:26:57.041Z · LW(p) · GW(p)

Agree, roughly, that AI safety and AI ethics positions are broadly aligned and that greater cooperation between the two would be beneficial to all, but it's worth anticipating how the prescriptions here could backfire. A paragraph such as:

I am saddened by those in my community who have treated you harshly, but I just wanted to let you know that, even when I don't agree with everything you say, I value you and think your research is valuable. To say otherwise would be irrational.

could itself easily be perceived, by someone who is predisposed to suspicion toward rationalists, as backhanded, overly familiar, and/or condescending, regardless of its actual intentions. Likewise, the language around the concluding suggestion about joining AI safety and bias groups may very well be read by a suspicious reader as a plan for infiltrating such groups, no matter what level of transparency is actually prescribed.

That said, I'll offer that any worthwhile cooperation with ethics/bias groups (since, as other commenters pointed out, some in those circles simply won't bother to engage in good faith) is unlikely to come from demonstrations of personal friendliness, but from demonstrations of willingness to take aligned action. The observation that safety/risk people could learn a lot from bias/risk people, and should make the effort to do so, seems pretty sound. On that note, some areas I think plenty of rationalists should (and probably do) have concern for:

  • The usage of current and under-development AI systems for surveillance and undermining of privacy rights.

  • The development of predictive policing, which, in addition to privacy concerns, poses the problems of false positives, overzealous enforcement driven by false confidence, and discriminatory use (getting at points similar to Yudkowsky's in the linked article on police reform).

  • The production and dissemination of misinformation, which could be used by malicious actors to stoke panic and destabilise sectors of society for outside political gain. (Shiri's Scissor, anyone?)

This is not to say that anyone invested in safety/x-risk should deprioritise that work in favour of these issues. Rather, I believe that, as discussed, there is substantial alignment on these issues between safety and ethics groups already, that there is a lot of benefit to gain from greater cooperation between safety/risk actors and ethics/bias actors, and that a strategy of publicly pursuing research and action on these issues would result in an overall net gain of utility for all parties, including a good chance of reducing/mitigating multiple different kinds of AI risks.

comment by 25Hour (aaron-kaufman) · 2023-04-04T02:16:36.563Z · LW(p) · GW(p)

This is a tiny corner of the internet (Timnit Gebru and friends) and probably not worth engaging with, since they consider themselves diametrically opposed to techies/rationalists/etc and will not engage with them in good faith.  They are also probably a single-digit number of people, albeit a group really good at getting under techies' skin.

Replies from: christopher-king, lahwran, HarrisonDurland
comment by Christopher King (christopher-king) · 2023-04-04T16:28:27.427Z · LW(p) · GW(p)

I'm mostly thinking about cost-benefit. I think even a tiny effort towards expressing empathy would have a moderately beneficial effect, even if only for the people we're showing empathy towards.

Replies from: aaron-kaufman, Closed Limelike Curves
comment by 25Hour (aaron-kaufman) · 2023-04-04T22:19:58.528Z · LW(p) · GW(p)

It's a beautiful dream, but I dunno, man. Have you ever seen Timnit engage charitably and in good faith with anyone she's ever publicly disagreed with?

And absent such charity and good faith, what good could come of any interaction whatsoever?

comment by Closed Limelike Curves · 2023-04-04T20:30:19.406Z · LW(p) · GW(p)

Really? I think a tiny bit of effort will do exactly nothing, or, at worst, further entrench their beliefs ("See? Even the rationalists think we have valid points!"). The best response is just to ignore them, like most trolls.

comment by the gears to ascension (lahwran) · 2023-04-04T07:19:34.592Z · LW(p) · GW(p)

I'd further add - they appropriate the language of anti-appropriation, but are not themselves skilled at recognizing the seeking of equity in social systems. They seem socially disoriented by a threat they see, in a similar way to how I see yudkowsky crashing communicatively due to a threat. It doesn't surprise me to see them upset at yudkowsky; both they and yudkowsky strike me as instantiating the waluigi of their own resistance, which partly contains the thing they are afraid of. The things they claim to care about are things worth caring about, but I cannot endorse their strategy. Take care for workers: some of the elements of their acronym very much do intend to prioritize that, and it's possible to simply ignore the hostility and just keep on doing the right thing. Nobody can make you be a good person, and if someone is trying to, the only thing you can do is let their emotive words pass over you and treat their thoughtful words as a claim about their own perspective.

Like yudkowsky, their perspectives on the threat are useful. But there's no need for either to dismiss the other, in my view - they see the same threat and feel the other side can't see it. Just keep trying to make the world better and it'll solve both their problems.

So - anyone have any ideas for how to drastically improve the memetic resistance to confusion attacks of all beings, computer or chemical, and strengthen and broaden caring between circles of concern?

comment by HarrisonDurland · 2024-03-20T03:33:19.383Z · LW(p) · GW(p)

This is a tiny corner of the internet (Timnit Gebru and friends) and probably not worth engaging with

In hindsight, this seems quite obviously wrong, and efforts to extend more olive branches seem like they would have been better, even if only to legibly demonstrate that safetyists attempted to play nice.