Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

post by Rob Bensinger (RobbBB) · 2021-11-08T02:19:04.189Z · 97 comments

Contents

  1. (1:57:57) Early EA history and EA/Leverage interactions
  2. (2:13:22) Narrative addiction and EA leaders being unable to talk to each other
  3. (2:17:14) Early Geoff cooperativeness
  4. (2:20:00) Possible causes for EAs becoming more narrative-addicted
  5. (2:22:34) Conflict causing group insularity
  6. (2:24:51) Anna on narrative businesses, narrative pyramid schemes, and disagreements
  7. (2:27:51) Geoff on narratives, morale, and the epistemic sweet spot
  8. (2:30:08) Anna on trying to block out things that would weaken the narrative, and on external criticism of Leverage
  9. (2:34:24) More on early Geoff cooperativeness
  10. (2:41:45) "Stealing donors", Leverage's vibe, keeping weird things at arm's length, and "writing off" on philosophical grounds
  11. (2:49:42) The value of looking at historical details, and narrative addiction collapse
  12. (2:52:23) Geoff wants out of the rationality community; PR and associations; and disruptive narratives

Geoff Anders of Leverage and Anna Salamon of CFAR had a conversation on Geoff's Twitch channel on October 23, triggered by Zoe Curzi's post about having horrible experiences at Leverage.

Technical issues meant that the Twitch video was mostly lost, and the audio only resurfaced a few days ago, thanks to Lulie.

I thought the conversation was pretty important, so I'm (a) using this post to signal-boost the audio, for people who missed it; and (b) posting a transcript here, along with the (previously unavailable, and important for making sense of the audio) last two hours of chat log.

You can find the full audio here, and video of the first few minutes here. The full audio discusses a bunch of stuff about Leverage's history; the only part of the stream transcribed below is the Anna/Geoff conversation, which starts about two hours in.

The chat messages I have aren't timestamped, and sometimes consist of side-conversations that don't directly connect with what Geoff and Anna are discussing. So I've inserted blocks of chat log at what seemed like roughly the right parts of the conversation; but let me know if any are in the wrong place for comprehension, and I'll move them elsewhere.


1. (1:57:57) Early EA history and EA/Leverage interactions

[...]

Anna Salamon:
All right, let's talk.

Geoff Anders:
All right, all right. Yeah, let's get to it.

Anna Salamon:
I hear that the rationality and Leverage communities have some sort of history!

Geoff Anders:
That's right. Okay, so basically, Anna, as I mentioned to you when we chatted about this, I wanted to talk about in general Leverage history, and then there's a long history with the rationality community, and right now, it seems like there's... I'd like to understand what happened with the relations, basically.

Like, earlier on, things were substantially more congenial. More communication. There were various types of joint events. Ben, who suggested he would be called by first name, organized the first solstice for the rationality community. He did that while working at Leverage. That was totally fine. I volunteered at the CFAR workshops. Ben also did that. Then, EA Summit 2013, I thought was sort of a big deal. We had you guys sort of do a half-workshop during that. I thought that was great. Ben reminded me that we actually held a book launch for MIRI somehow. I remember the event very vaguely. Yeah, but it was at Luke Nosek's house, and I don't know. Maybe it was a joint... I don't remember, but there was a bunch of stuff.

Let's see. All right, yeah, so I just switched my audio. Ali, let me know if that's better or worse.

But anyway, so there was that, and then I feel like we just had lots of interests in common and so forth, but then somehow, things didn't end up working out as excellently as possible. So I was interested in just chatting about what happened, getting some common context on the history, maybe trying to...

 

sowgone: 'Our Final Invention', maybe

 

Geoff Anders:
Oh, yeah. Maybe Our Final... Yes! Yes! I think that was the book. At least, that sounds pretty likely.

Yeah, so basically looking a bit at the past, and also trying to understand where things are presently, and maybe figuring out whether there's anything constructive to be done. Like, in some circumstances it seems like trust is so low that I'd love to know what could make things better, basically.

So anyway, I just talked for a bit. Yeah, so let's talk about it.

Anna Salamon:
Yeah. Are we still trying to end at 1:00, in 20 minutes?

Geoff Anders:
I mean, I'm willing to go on longer. I know you had a thing, but you know-
 

benitopace: I'd be down for a 60 minute conversation, starting now.


Anna Salamon:
Oh, I don't have a thing.

Geoff Anders:
Oh, okay. Yeah, so-

Anna Salamon:
Yeah.

Geoff Anders:
Apologies to all the viewers and so forth, but yeah, let's go to 1:30.

 

benitopace: Sg!

 

Anna Salamon:
Great. So yeah, I don't know. I guess one thing I want to know...

Look, I'm going to try to tell history. I'm going to get it wrong. There's 50 other people out there who are going to have 50 other histories. Maybe we can get them all out there via mine and Geoff's to start with, or something.

But, so my perception is - I want to talk about that 2013 summit, where we all... like, 50 people or something stuffed into a giant Leverage house for like, I forget, several days.

Geoff Anders:
Yeah, I think it was 60-

Anna Salamon:
A week?

Geoff Anders:
... I think, and it was a week, yeah. It was about seven days long, I think.

Anna Salamon:
And these people were CFAR, and Leverage, and MIRI, and a bunch of EA-ers from overseas?

Geoff Anders:
Yeah, we-

Anna Salamon:
Who else was there?

 

anna_salamon: Who here was there, at the 2013 stuff into leverage house EA retreat?

anna_salamon: Anyone but me and Geoff?

larissa24joy: https://www.youtube.com/user/nnevvinn/videos

fiddlemath: Anja's here with me, and was there

larissa24joy: Videos from it above


Geoff Anders:
Well, we tried to invite basically a selection of everybody. You know, Holden Karnofsky talked there. I think we beamed in Jaan and Singer for talks. Thiel talked, but then we had-

Anna Salamon:
Oh yeah, in person.

Geoff Anders:
Yeah, yeah. Then, we had-

Anna Salamon:
For several hours. I'm remembering as you're saying the things.

Geoff Anders:
Yeah, yeah. And then we... I remember when we designed it, we tried to basically balance it to have people who were interested in the different parts of EA. I have some spreadsheet somewhere, but we tried to... We flew in a bunch of people from the UK, maybe a couple other places, but something like that. And obviously not everybody invited came, but yeah, we had... I feel like we had a mix. Like, for people who are interested, the-

Anna Salamon:
Was Toby Ord there?

Geoff Anders:
No. I don't think-

Anna Salamon:
I feel like CEA was somehow proto-there. Maybe I'm making that up. Okay. Great. [inaudible] says I made that up. I probably did.

Geoff Anders:
I think some CEA-affiliated people were there. I don't think the main people came, though they were invited.

There's a video online that, yeah, if you go to YouTube and I guess search 2013 EA Summit-

Anna Salamon:
Anyway, great, great, great. I am in a hurry.

Geoff Anders:
Yeah, go ahead.

Anna Salamon:
Even though we have until 1:30.

So the thing I want to say about that - which I'm not sure how many people would agree with me - but my own narrative is something like that there were a bunch of different people working on and having strands that were kind of related, and we were trying abstractly to think well with each other, because we all knew it was good to be cooperative or something, but we didn't have that much actual flesh on our cooperation.

And then my narrative is that Geoff or Leverage or somebody was like, "You know what would help? I think it would help if we all spent a week stuck in a house together."

Geoff Anders:
Yeah.

Anna Salamon:
And it did, and then we emerged all friends somehow. Like, not exactly friends, but all actually feeling like there was a thing that was in common, that we might call the EA thing, and that it was worth talking to each other. And a bunch of people got the AI risk thing, who came in skeptical about the AI risk thing. And I don't know, I was very self-centered about the AI risk thing. I'm not sure how many...

 

sowgone: I was at 2014, not 2013

 

Anna Salamon:
No, no, no, it wasn't... Oh, you were at 2014. Yeah, yeah. There was a previous one. The 2014 one was much larger, and more disparate. I liked it too, but it was like we were... Sorry, somebody in the chat was mentioning 2014.

Geoff Anders:
Well, we did-

Anna Salamon:
It was-

Geoff Anders:
If I could just jump in with one fact.

Anna Salamon:
Okay.

Geoff Anders:
We did both the week-long thing at a retreat place, and then we did a two-day event, like the Summit, so the retreat was like-

Anna Salamon:
In 2014.

Geoff Anders:
Yeah, so we did both in 2014.

Anna Salamon:
Yeah, but even the week-long thing to me seemed, in some ways, less good than the week-long thing in 2013, because it was spread out instead of being in one house.

Geoff Anders:
I agree. Yeah.

Anna Salamon:
Which was maybe necessary because there were more people or whatever.

But anyway, from my perception, there was a lot of camaraderie in the early days, even like 2008, 2007, up through 2013, but the camaraderie sort of congealed and became more of a thing across more of the people that week in the house. And there were always tones of skepticism, but the tones of skepticism were maybe at a local low point right after that week from my perception.

And then we have gradually, not just Leverage and the rationality community - whatever that is, sorry CFAR, LessWrong, I don't know, MIRI -

Geoff Anders:
I... think there's a community.

Anna Salamon:
What? There is a community, but it's a community that's... Yeah, there's a community, but there's also several different partial centers of the community or something.

Geoff Anders:
Agreed, yeah.

Anna Salamon:
So I wasn't quite sure which one I was trying to talk about.

Anyway, so Leverage and various strands of the rationality community were pretty harmonious there in my perception, and were a little less so before, and much less so gradually after.

But also like, I don't know - CFAR and CEA, for example. In my head, there was more of the congealment there. Like, we were all friends back then.

And now we're all... maybe friends, certainly trying to be harmonious - many of us, maybe not with Leverage, maybe Leverage is our official "figure out whether we're supposed to ostracize this month" target, we can talk about that...

But I guess I just wanted to situate the puzzle of how rationality and Leverage drifted apart, with a puzzle of: I think that in general, most of the organizations think that most of the other organizations' work isn't that good, and usually do try to be cordial and harmonious, but I think that things somehow started out much more "everybody believing in cooperation with each other", and are much less that way now, even though there's quite a bit of cordialness still.

And I guess I want to situate the splitting of Leverage with CFAR or whatever, in that larger context where I think most things split from most things. And I'm kind of curious about what was up with the larger context.

Geoff Anders:
Yeah. I mean, I agree with this broad picture. I wouldn't put the start of EA to the EA Summit, though I think people could argue about that. I think EA started before. But apart from that, I agree.

I think there is something like most of the groups have become more disparate. It's really hard to figure out how to say this without sounding biased in my own favor, and also while there's lots of concerns, and a bunch of sort of justifiable things, it's really hard to say things like this, but in 2012 and 2013 and 2014, we were working hard to try to get the different groups to be able to work together.

Like, 2013 we did the Summit. 2012, we did THINK, The High Impact Network. Where we wanted to have CFAR, The Life You Can Save, 80,000 Hours, and Giving What We Can and Leverage all collaborate. And that didn't end up working. But then the EA Summit did.

Yeah. And then, I feel like the cordial relations and the "more on the same team" thing extended through 2014.

For my own story, just to add a part to it, I think that I stopped wanting to... like, stopped engaging as much with CFAR after CFAR stopped doing as much research? Like, CFAR was doing more research early on, and then the research diminished over the course of time. That was something that mattered for me, personally.

But I mean even, I think in 2018 we did a Paradigm workshop specifically for CFAR people to try to, like, something. And I talked to the people about, "Is there room for some sort of alliance or working together?" And that didn't go anywhere, but who knows?

Anna Salamon:
Huh. I don't remember this. Was I invited?

Geoff Anders:
No, no, this was the-

Anna Salamon:
Is it specifically for CFAR people...?

Geoff Anders:
I think at the time you were... I haven't always tracked when you're in charge of CFAR. Like, there's at some times...

Anna Salamon:
But I've been working at CFAR continually.

Geoff Anders:
Yeah. So-

Anna Salamon:
I wasn't ED, but I was totally working there.

Geoff Anders:
Okay. Yeah. So I-

Anna Salamon:
I've been in a leadership role.

Geoff Anders:
Okay. Yeah. So this was a reach out to Eli Tyre et al., I think.

Anna Salamon:
Okay. Yeah, yeah.

This is not very charitable of me at all, but I think I had a perception that you were trying specifically to recruit our junior staff and that you weren't that interested in talking to people like - by "recruit", sorry, I don't mean to pull them away from CFAR, but I do mean to sort of ideologically recruit them - but that you weren't that interested in me because you were less likely to get that much of my mind or soul or something. Confirm or deny?

Geoff Anders:
Yeah, no, it's a good pointed question.

How do we draw a distinction between "get you ideologically" and "have constructive conversations, wherein both sides update"?

Anna Salamon:
Um. I don't know. I do think there's... Well, what do you think there... I think these are quite different. I think they're both great things, depending on context.

Geoff Anders:
Sure.

Anna Salamon:
Especially the second one.

Geoff Anders:
Yeah, yeah.
 

anna_salamon: Anja, would love to hear if your perceptions match/differ, what you remember

fiddlemath: Anja: matches up so far

fiddlemath: Anja is carrying the baby, but -- she has a pet theory that a lot of the friction / tension traces back to Leverage overstepping boundaries with their recruiting practices.

AgileCaveman: The vibe here is two people talking about a failed relationship, except it's two large orgs.

habrykas: I can confirm memories of tension that was associated with Leverage's recruiting practices

habrykas: From the 2014-2016 era


Anna Salamon:
I often learned from conversations with you, though. I remember going over to your house a bunch of times with Nate, it seemed good. You seemed less interested.

Geoff Anders:
What year was that?

Anna Salamon:
That was, I'm going to make this up, my made-up year is 2016 or 2017.

Geoff Anders:
Yeah, I-

Anna Salamon:
Remember?

Geoff Anders:
I was talking - [crosstalk] - I was talking to somebody who jogged my memory on that. And I do think that was less important to me at the time. Like, the...

Anna Salamon:
Okay, so my, yeah, jerk story as to why we drifted apart - I can tell jerk stories that emphasize me as the bad guy, I'm going to emphasize you as the bad guy -

Geoff Anders:
Well, y'know...

Anna Salamon:
- but you're welcome to turnabout or whatever.

Geoff Anders:
Yep.

Anna Salamon:
My jerk story is, you were busy trying to build an empire for reasons, that you thought would be able to build toward good things, or whatever. And we didn't look like good ways for you to do that. Partly because we related differently to power than you do, or power, closure, how much of your stuff to keep inside in what way. And partly because, as you say, we were not doing that much of the kind of psychological research that you guys were trying to do, so we weren't particularly project-aligned in that way. And I think it's cool that you guys were trying to do that kind of thing. And I think in CFAR, a lot of us were curious as to what your stuff was and whether it was cool. So I think that was one of the things.

Anyway, I think you were trying to build a thing with power where you could control it, and I think alternate perspectives were not that interesting to you on a visceral level for that reason. And this is part of why we drifted apart.

Geoff Anders:
Okay.

Anna Salamon:
And I think I looked more like a... kind-of-like-resource-kind-of-like-obstacle, whereas some of the junior staff looked more like people you could recruit.

Geoff Anders:
Yeah. But then, you said "recruit". So, if I could ask some questions about this. So, you said "recruit, but not hire".

Anna Salamon:
Well, maybe hire, but "hire" isn't the important part.

Geoff Anders:
Okay. Right. So then I guess my question is something like: I do end up forming views on who I'll be able to work with, more and less and so forth. And then, I mean... there's, like, a nefarious-themed-ness to the thing you're saying. But how does the...

Anna Salamon:
Should I try to actually spell out what the thing is that I think is bad?

Geoff Anders:
Yes! [crosstalk] That would be great. Because I can tell the same story without the nefarious theme. Yeah.

Anna Salamon:
Yeah. Sorry for accidentally having it in the connotation.

Geoff Anders:
Oh, it's okay.

 

2. (2:13:22) Narrative addiction and EA leaders being unable to talk to each other

Anna Salamon:
Okay. I'll take a couple steps back, explain a bit of worldview that I think contextualizes why a thing I think might be bad might have been occurring. And then say the thing, sorry.

So, I guess I've been playing around a lot lately, in the last six months or something, with the concept of narrative addiction. Where in my head... I don't know, I'm gonna talk about stuff I don't know anything about, nobody should believe me if you think it's false. If you guys could just figure out which parts are which, that'll be great.

I tend to think of, for example, anorexia. Which I know nothing about, I'm just going to terribly say bad things about it as a metaphor while violating all rules. I tend to think of anorexia as being something maybe sort of like narrative addiction.

Like, it's addictive, but it's not addictive the way heroin is addictive. It's not physical, but there's a sense of control or something. And you want to be in control. There's a narrative, or something, and somehow something will get better in the narrative. In extreme cases, people cling to it even at the expense of life. That's probably horribly wrong for anorexia, and I apologize to everybody for using it as a metaphor.

But the thing I actually want to talk about is I think, for me, I spent a whole bunch of the last five years or something in what I would call narrative addiction.


Geoff Anders:
Hm!


Anna Salamon:
Yeah, sorry. Accuse me of a thing and then I'll go accuse you of the thing. And then you can tell me that it's false about you.

Where according to me, there was a thing that I wanted, which was something like "to make the world a lot better." And a story that I had in which I was doing a thing like that. And then various things would happen that were kind of uncomfortable, or didn't feel good, or whatever. And I would sort of retreat to perching on this narrative and to doing things that would locally reinforce the narrative for me.

And I think basically this was pretty costly and not very good for my happiness or my effects in the world, or my ability to have actual conversations where I was listening to the other party.

I think a thing that happens to a lot of people in positions of relative power and influence in a social graph - similar to the one that I was in, and similar also to the one that I believe you were in - is somehow only doing the things that reinforce the narrative, and having much more trouble than other people have hearing perspectives that aren't their own.

So, like, in terms of this fragmentation of the initial EA community, it seems... Like, I had an interesting conversation with some people several years ago where they were like, "Yeah, I feel like if we stuck..." I can't remember who they actually named, but something like, "I feel like if you stuck, for example, Anna and Geoff, or Geoff and Eliezer, or Eliezer and Toby Ord, or whatever, in a room, it would actually be like relatively..." I don't know about Toby. Maybe Toby's better. "... It would be relatively difficult for most of these pairs of people to have a real conversation. But if you take random people from the organizations, they do fine. What's up with this?"

Geoff Anders:
Right.

Anna Salamon:
We were like, "Yeah, that's kind of interesting."

 

spiracularity: I like the way this is going, this feels both interesting and I think it hits something valuable.

larissa24joy: +1

AgileCaveman: oh yeah super valuable,

larissa24joy: something like the narrative addiction concept rhymes with some of my EA experience so this is an interesting and useful concept (thanks Anna)

fiddlemath: Geoff, for later: Some of the narrative addiction effect may come from organizations growing larger and more of their members navigating reality via narratives and social cues


Geoff Anders:
I definitely just affirm - I definitely think there's a leader's thing. In different fights and so forth that happened at different times, my observation was, the leaders were more opposed. The people were like, "Can we get along?"

Anna Salamon:
And they could, and the leaders couldn't; and the leaders also couldn't talk, which is more the interesting thing. Not just couldn't get along.

Geoff Anders:
Yeah, yeah.

Anna Salamon:
Anyway, so I believe you... So, look, the negative thing that may or may not have been the case, is that you wanted people who were "small" enough that you could - and I think these people were great and were on great growth trajectories, but nevertheless, to say terrible things - you wanted people who were "small" enough that you could intellectually dominate them and they would follow your thing, rather than being willing to loosen your grip on narrative addiction or something enough to be able to hear alternate perspectives. That would be the-

 

3. (2:17:14) Early Geoff cooperativeness

Geoff Anders:
Okay. I definitely think that there's at least... Like, I definitely think I didn't appreciate really at all the work-life distinction and the value of doing things that weren't for the mission and so forth. So I think that's definitely there. But if I could ask a question about the thing, what was the narrative that-

Anna Salamon:
Mine?

Geoff Anders:
No, no, for-

Anna Salamon:
Yours?

Geoff Anders:
Yeah, because from my perspective... I mean, you may recall this and this may be another source of tension. Initially, I wanted to just merge the organizations, right? I tried to be hired as research director at MIRI. I don't know if you recall this, but-

Anna Salamon:
I remember you tried to get hired at MIRI, at the very beginning, yeah.

Geoff Anders:
Yep. Yep. But that was like, Leverage was super small then. And I'm like, well, we have some differences, but there's a... There's the inside-Leverage thing, where there's like the different projects working in parallel and helping each other. I also thought about that with regard to external organizations. I talked to Will MacAskill in... 2012, I guess. Was it 2012? I think 2012. And I was like, "We should maybe merge Leverage and Giving What We Can." And he was like, "Yeah, good idea," or something like that. And then it didn't happen. And I don't know if he was... I'm not sure what his attitude was on that.

But, like, even take something like Giving What We Can, which is focused on global poverty, or was focused on global poverty. I always had the perception that it just would be valuable for the other projects to build up a really big Giving What We Can thing. I mean, if you have somebody build up Giving What We Can in a very large, substantial way, that just will in fact pick up more people who are interested in animals and building EA and AI and Leverage's style of research, and et cetera. So there's a thing that I'd like you to try to square with-

Anna Salamon:
[crosstalk] Rationality is the common interest of many causes. Yeah, we had this in the beginning, then it became rational-

Geoff Anders:
Right. But, so square that with the idea of needing people to be in my thing.

Whoops, sorry. Hold on, my phone is beeping-

Anna Salamon:
Well, okay. So my story about you and me, and many people, is that we started out not having as much of the narrative addiction and ended up more in the narrative addiction. So I agree that at the beginning you didn't seem that much like you were doing this, compared to my stories about you later on.

Geoff Anders:
Okay.

Anna Salamon:
I have the same story about myself and several other people.
 

4. (2:20:00) Possible causes for EAs becoming more narrative-addicted

Geoff Anders:
Do you have any idea what would have caused us to become more narratively addicted? Because I find this to be really an interesting hypothesis. I think there's some ways that I may not have had as much perspective on things, especially near the end before dissolving Leverage. And so I'm really interested in this hypothesis, but I haven't been able to... It's really hard to see how you lose perspective.

Anna Salamon:
Yeah, it is hard to see how you lose perspective. You almost... Yeah, you need to look from inside and from outside at once, or something.

Geoff Anders:
Yeah. But maybe you have leads on... or, what...

Anna Salamon:
Yeah. I mean, I don't know. One story - this is a very, very generic story, I think it would also be nice to try to track this one with Leverage and CFAR specifically.

Geoff Anders:
Sure. Yeah.

Anna Salamon:
But sort of as a case study in this larger thing, almost. In addition to being a thing in itself.

But, yeah, I have a story that's something like... Many, many actions run partly on promissory notes. (Borrowing a lot of this, by the way, from Bryce Hidysmith, so if anybody likes these ideas, I think they should give some credit to him.)

Many, many actions run partly on a promissory note. I make myself tea, and this is because I can visualize the tea which is located in the future, and it's calling to me, and I believe that if I put my cup in the microwave I'm going to successfully get some tea, and it's going to be great. Right?

But there's the question of how much of the thing is a promissory note that hasn't been cashed yet? Like, what the ratio is of "expectation of the future" compared to "realized stuff right now". And what ratios those have in pulling our actions.

Geoff Anders:
Right.

Anna Salamon:
The EA movement was very, very high in "we're going to have a whole bunch of glorious stuff in the future, it's going to be amazing", compared to how much we had actually done so far in terms of lives saved, or whatever.

Geoff Anders:
Right.

Anna Salamon:
One story I have is that we got together, and initially we all hoped to do the best thing. And the best thing was basically, we were all going to cooperate and it was going to be great. And then various bits of data came in, and the data was not super coherent with the best thing that we were hoping for. And so we retreated farther and farther and farther into smaller and smaller scope of what we were taking in kind of narrative addictions. And less and less willingness and ability to listen and bridge. But I mean, that's an abstract story and then we can tell a more mundane story.

 

habrykas: Huh, at the time, it felt like EA was almost only focused on doing things right now

larissa24joy: From my own experience of something that might be like "Narrative addiction," I feel like I lost some perspective out of fear/responsibility. I felt hugely responsible for EA (more than makes sense on reflection) and I feel like that led me to tunnel vision focus on EA and trying to make that go well.

habrykas: Like, the movement was primarily about global poverty things and high-standards of evidence

 

5. (2:22:34) Conflict causing group insularity

Geoff Anders:
No, but that's interesting, because I think the... I've been tracking a different aspect that maybe fits with the narrative... I want to think about the narrative addiction part. I definitely think there's truth to that, so that seems like a good part, and I want to think about it.

The thing that I've been tracking is the ways that groups become more insular as a result of conflict. Where, so, really simple examples like: Leverage had a lot of information on its website back in 2011. And then we got a bunch of critique, and we weren't getting a lot of positive engagement. And so then eventually, by 2015 I think, we were down to a splash page.

But then there's something similar. So right now I think one of the potentially really negative, I would imagine unintended, side-effects of the current thing happening right now, is when there's fighting it sort of scorches the earth between organizations. Where it's something like... Yeah, go ahead.

Anna Salamon:
I guess... sorry. In my opinion... No, no, I had a thing to say, but maybe I should hear the end of your sentence.

 

spiracularity: Someone also appears to have petitioned to get the old Leverage website taken down from Internet Archive, too.

spiracularity: ...why did you guys feel like you needed to do that, anyway?

habrykas: My memory of the reasoning (when I asked people about this many years ago) was something like: "People keep digging up a bunch of stuff I said and keep taking it out of context. Like, that one time when I said 'Connection Theory is a complete theory of psychology', by which I meant 'Connection Theory is trying to be a bold stick-out-your-neck type theory that aims to be complete, but look, it obviously doesn't explain everything, but this is the thing it's aiming for'"

habrykas: "And people kept accusing us of saying wrong things in annoying ways that seemed pretty confused, and it seemed like overall when people looked at the archives, they got more confused than enlightened"

habrykas: But someone should feel free to correct me

larissa24joy: Something like Habryka's account seems right to me but that's based on similar conversations with people about their recollection, not from any knowledge at the time

larissa24joy: (so people around at the time should totally fill in gaps instead)


Geoff Anders:
I mean, I think that there are... Basically, I think there's lots of forces that produce insularity. And I think something like continued unresolved-but-still-real conflict is one of them. I'm not sure that that's true, but it's like, if you have like... Let me just add, it'll just take a second. You've got red team and blue team on different sides, right? And each has a negative thing about the other that's not being expressed, but it's being held on to. And I'm not saying it's not - it could both be real. It's not meant to not be real. But I think if that happens, then it's really natural for the things to sort of curl in and not... Yeah, basically become more insular. But yeah, go ahead. You were going to say a thing.

 

6. (2:24:51) Anna on narrative businesses, narrative pyramid schemes, and disagreements

Anna Salamon:
Yeah. I was going to say, I think there's different kinds of... Sorry. A different word that I want to introduce besides narrative addiction is "narrative pyramid schemes".

Geoff Anders:
Okay.

Anna Salamon:
Where you get a bunch of people buying into a narrative or something, and this is how you're able to... Sort of a group-level version of something like narrative addiction. But you get a bunch of people buying into a narrative. Because they bought into the narrative, they'll do different things depending on what you say. So they'll work for CFAR, they'll work for Leverage, they'll go take a bunch of actions in the world that you want.

Geoff Anders:
Yeah.

Anna Salamon:
But they'll do it because they're expecting this narrative to pay off in some fashion. And the thing... yeah, I could just say "narrative business" or something rather than "narrative pyramid scheme". The thing that makes it a narrative pyramid scheme is if it only works insofar as the narrative is able to get other people to buy into the narrative, is able to get other people to buy into the narrative, and in this way-

Geoff Anders:
Sure. Well, yeah. I mean-

Anna Salamon:
- whatever, the movement can grow, even though the movement doesn't really have an actual business model that produces value in the sort of... Anyway, you could agree or disagree with that, but I want to connect it back to what you were just saying.

Geoff Anders:
Sure.

Anna Salamon:
And then you can reply. It seems to me that there's lots of different ways that groups A and B could be in conflict. One way we could be in conflict is by, I don't know... well, maybe there aren't. Maybe we just have different research agendas and we're just trying to do our different research agendas. And we just think each other's research agendas is going to be dumb. And so we're most [inaudible] to do the research agendas. And in the end it'll go to the glory of group A, I mean of group B. Okay, that's a kind of conflict. It's not a very active kind, because we're not really disputing what to do with a resource. But anyway.

Geoff Anders:
I do think that can be an actually real thing. Different research projects are based on different assumptions about the world. And I think you can-

Anna Salamon:
You can have pretty important disagreements that way. I'm not sure whether they're conflicts, exactly.

Geoff Anders:
There's going to be some reason why they're not being expressed or being discussed fruitfully, right?

Anna Salamon:
Well, you could have... in particular, I could imagine a set of people who have really interesting intellectual disagreements while disagreeing about which research are going to be fruitful or whatever, and it's just really fun. I don't know if that's a conflict, it's like conflict...

Geoff Anders:
Oh yeah, no, no. Yeah. I agree, you can have it without the conflicty part. Yeah, basically.

Anna Salamon:
But the interesting thing to me is that if you have two groups that are each pulling on their own people via narrative, and if the narrative is in some important sense false - I don't mean it's 100% false, but it's one of those things that could be destroyed by the truth, in some sense...

Geoff Anders:
Yeah.

Anna Salamon:
Maybe by pointed, carefully selected truths designed to destroy it, or whatever. But it's not robustly "just go notice this stuff". It's, like, a narrative.

Geoff Anders:
Right.

Anna Salamon:
Then conflict between the two groups I think tends to create dead zones of non-conversation. I don't know, because... sorry, maybe you could just actually flesh this out into a model that I'm not articulating yet.

 

7. (2:27:51) Geoff on narratives, morale, and the epistemic sweet spot

Geoff Anders:
No, that's super interesting. I want to add a piece, though, that I think relates to maybe disagreement, but I think is important, and also I think relates to the rationality/Leverage divide.

I think there's a bunch of dangers associated with narrative. The narrative addiction thing, I think, is real. I want to think about it and understand it better, et cetera, but I've noticed some things that are similar - the thing I've been thinking of is in terms of "fantasies" basically, and people getting trapped in them. That's my own language.

But there's a way in which narratives are really, really valuable. Like, narratives can help people to maintain morale in circumstances that are really bad and really hard. And then I think this is one of the flash points between Leverage and rationality, where I think of it as a very careful navigation. If narratives are misleading and wrong, people become epistemically less good; but I also think that if your morale gets too low, you also become epistemically less good.

And so I think the epistemic sweet spot is - the best thing is "you know how to do things and you can accomplish them, and you can see all the truths and everything works". But then, as a person's ability to accomplish things, or the quality of their position goes down, then I think frequently narratives are useful, and sometimes necessary, to maintain morale. And that if you graphed narrative versus epistemics, it's not "epistemics are the best when narratives are the least". I think that there's frequently - like I said, morale. Morale sometimes helps people to look for paths for hope. You can have a person who just doesn't think about how things could be better. So I think narrative is dangerous, but also useful. And so it has to be used correctly, roughly.

 

spiracularity: You know what else is often surprisingly adaptive to negative environments, then becomes maladaptive but hard to deconstruct once you're in other circumstances? A lot of trauma responses!

329zwydswr: what's morale, other than a belief that a plan will work?

spiracularity: (The tone is not intended to be as bad as it sounds, though. I just think they're both STICKY.)

WaistcoatDave: @spiracularity exactly. and as everyone is human, they come into these situations with preexisting identities

329zwydswr: that seems to conflict with the thing about morale

WaistcoatDave: identities, experiences and often trauma

larissa24joy: Can you elaborate @spiracularity ? (I feel like there might be an important thing here but I don't quite grokk it yet)



8. (2:30:08) Anna on trying to block out things that would weaken the narrative, and on external criticism of Leverage

Anna Salamon:
Yeah, so I guess the thing I would say is narratives... are great. Like, even something like "I'm going to put the tea in the microwave and then it'll be warm and I can add a tea bag and then I'll have tea" is sort of a narrative. It's a carefully selected small portion of a thing that I can use as a storyline. Larger-scale narratives are great too...

Narrative addiction I would like to distinguish from narratives, and to say that it has to do with the specific motion whereby I'm trying to block out things that would otherwise weaken the narrative.

And I am sure there are local benefits that can be obtained by blocking out things that will weaken the narrative. But my current position is that we just shouldn't do that. Like, it can help locally, but it creates something like technical debt. Let's not do it.

Geoff Anders:
I roughly agree. I roughly agree. I mean, I think there can be cases, but roughly I agree with that.

Anna Salamon:
Okay, I... don't believe you that you agree.

Um. Or, sorry, I'm sorry, I believe - I'm sure you tried to tell the truth and so on. But I would like to bring up things that I think you also think, that I think are in tension with what you just said, and see what you say about it. I mean...

Geoff Anders:
Great. Yeah, sure. Yeah, yeah. Great.

Anna Salamon:
So, I think... So, look, I - mm. It's hard to say all the things in all the orders at once. I'm going to say a different thing and then I'll [inaudible], sorry.

So, once upon a time I heard from a couple junior staff members at CFAR that you were saying bad things to them about me and CFAR.

Geoff Anders:
[I] believe it.

Anna Salamon:
I forget. They weren't particularly false things. So that I don't accidentally [inaudible]-

Geoff Anders:
Okay.

Anna Salamon:
Whatever. Personal view is that it was probably partly silly that I was upset about this. It was not my view at the time; we can talk about it. But anyway, the reason I was upset about it is basically because I did not hold the attitude... I don't know. Sorry, I could say...

I think it was partly that it didn't reach me very naturally. I think that was more of a legitimate objection. But, whatever, we can talk about it, I probably am getting the facts of the situation wrong while accidentally maligning you on Twitch TV.

Geoff Anders:
I'm sorry if I... did an incorrect thing there. So I'll just let you know that-

Anna Salamon:
Well, let me-

Geoff Anders:
Go ahead. Go ahead. Go ahead.

Anna Salamon:
Thanks. I appreciate it. I should skip you for the story. Once upon a time there were various past contexts where I would be upset by people seeming to me to weaken CFAR's narratives in its staff. I currently think that that was a mistake on my part. And that it violated this heuristic that I currently think we should have, where you don't use narratives to try to block stuff out.

I think there were also past situations where you, Geoff, were upset at people saying bad things about Leverage in ways that made it harder for your staff members to have, as you would put it, morale. And I'm not sure how that's consistent with the thing you just said about how you basically agree that one shouldn't use narratives to try to block things out.

Geoff Anders:
Well, do you have cases in mind? That'd be helpful.

Anna Salamon:
I have only generic cases, but I bet you can fill in the specifics.

Geoff Anders:
Yeah. Try a generic case.

Anna Salamon:
I think that you're all, like... I don't know, the straw Geoff in my head is all like, "Man, the rationality community was mean to us, because they kept saying bad things about Leverage in ways that made it harder for our staff members to have morale. I wish they would stop it. They kept saying it specifically to staff members and so on, and I don't like it. They shouldn't-"

Geoff Anders:
I mean, I do think that that's true. I do think that Leverage - this is one thing that I think is a really unappreciated aspect of this whole situation. It's like, a whole bunch of Leverage staff just got tons of negative stuff from people. And it was, like, as soon as you joined.

Anna Salamon:
I know, I know. I would be afraid... just to put it out there, I would be afraid to say positive true things about Leverage because then people would get mad at me.

Geoff Anders:
Right. So I-

Anna Salamon:
And then I would say fewer positive true things about Leverage than I thought, even though... Like, whatever, I've thought various true bad things and various true good things. Well, of course I think my things were true. I thought various good and bad things. But like-

Geoff Anders:
But it's super nice to hear you say that. I mean, wow. Okay. All right. Well, good. I feel like we're having a good conversation, at least from my side. Hopefully this will turn out good for you also. Okay. I...

 

AgileCaveman: it was somewhat surprising that some really vocal critics of leverage later applied to join. There was certainly an under-current of envy

habrykas: @AgileCaveman : Huh, do you have an example?

habrykas: None come to mind for me, but I am of course not omniscient

z1zek100: I think there are cases the other way. People wanted to work at Leverage, didn't for whatever reason and then were very upset.

AgileCaveman: yes, i have a couple people in mind, but i'd rather not name for privacy sake

habrykas: Yeah, seems sensible

AgileCaveman: z1zek100 yes, true as well

spiracularity: To be fair, you're taking Rationalists who love "ferreting out weird rare elusive dangerous truths," and then do stuff like taking down your website. Some of them being envious and then joining doesn't surprise me. Some of them assuming you had skeletons doesn't surprise me, but I agree they could be pretty brutal sometimes.

 

9. (2:34:24) More on early Geoff cooperativeness

Geoff Anders:
Definitely one thing that was a practice, a part of Leverage culture, at least as I understood it - I'm going to say a pro-Leverage thing, but I'm going to say then a negative thing after that. The pro-Leverage thing is: As I understood it and as I experienced it, we had a practice of trying quite hard to understand the value of different projects as part of assessing potential collaboration. And so it was really important to us, something like "how much original research was happening at CFAR". And the more original research, the more it would be like, "Okay, good," because that's something we could conceivably learn from. And also, at least in my... It's just better to have groups that are... If they're research groups doing more research, then that's good.

I think that that's the positive thing. I mean, there is just a lot of "try to say the positives and negatives", and then a way that this may have... One of the things that's been coming to my attention as I've been thinking about it, is the way that something like culture is not always transmitted. Like, some of the things about narratives, some of the things that have been said I was really surprised to hear that people were using narratives in that way. But it's quite possible that people were using the relevant negatives [sic] - something like, trying to paint an overly negative story. But, I mean... Yeah, I don't want to, again, be overly pro my own thing because there's obvious biases and so forth. But it's like, if, I mean... yeah, it's... anyway.

Anna Salamon:
Maybe best to just display your own thing with obvious biases mixed in, and then anyone can complain about them.

Geoff Anders:
Yeah. I mean, okay. Because invited, and in the name of transparency:

I think we tried hard to cooperate with the other groups. I think we reached out. I think we ran events. I think we did, like, a whole bunch of things. We, like... see, look, now I'm going back into the "We did many things wrong," which I do think is true, but... I think that it just would be nice to not be excluded or cut out of the histories, or et cetera. So it's like, you look at tellings of... Look at the EA Global Wikipedia page and see what you think about whether or not we're all, like, being included and so forth.

And so my big update from 2012 was: I had done a whole bunch of reaching out and trying to cause collaboration, and people just weren't interested.

And so, I thought about it like: I took a hard lesson. Like, the hard lesson was "the other people don't want to collaborate."

And then I had all these theories - like, one of the theories was that it had to do with planning horizons, where it seemed like people were very focused on what would be useful for their organization in the short run. And I know I get critiqued to death for having, like, really long-term plans with all of these parts. One of the things that happens if you have that is you just see ways that other people can fit in.

And so it's like, huh, if there's like... I mean, this is part of the EA Summit "all the groups sort of aid each other in various ways" thing. And THINK was meant to be a collaboration. So from my - maybe this is biased, but if I score it, I just give us great marks on trying, and I don't give the other groups great marks on trying.

And so, yeah, people can be mad about that; it does seem... Yeah. Okay. Let me just say one other thing. I think this is important.

I think there's definitely a way in which I and other people at Leverage have been beaten down by the constant narrative sort-of opposition.

 

fiddlemath: Anja says: Geoff -- I think the people in the Rationality community who say negative things about Leverage, are/were in an arms race with an intention on Leverage's part to withhold details that could be negatively construed.

spiracularity: @fiddlemath EXACTLY.

fiddlemath: yep

spiracularity: Add an extra layer of "you guys were not optimizing for outside legibility", so the straight words didn't really convey things very well.

 

Geoff Anders:
And part of why I wanted to have a Twitch channel was just to stop doing that. And this is pointing to an area where I think I-

Anna Salamon:
Sorry, stop doing which?

Geoff Anders:
To stop, like... The thing I want to say is something like, "I think we valued collaboration and so forth." Whereas, the thing that I should somehow be able to say was: We tried to collaborate. We offered help. If other people had wanted to collaborate, it would have happened. That it counterfactually, causally depends on the other people's not wanting to collaborate. If you, Anna, had come to me and said, "Geoff, let's figure out how to do a joint psychology research program and have a bunch of workshops," I would have said "Yes," or I would have said "Yes, if," and then worked out some complex thing.

I'm sorry that that's coming across as, like, directive or determinative, or I don't know - the way I'm saying it. But I'm, like, pushing through-

Anna Salamon:
I appreciate that you're putting your narrative out there in a clearer fashion. It seems nice. It seems good for conversation, to me.

Geoff Anders:
Okay. Well, it's - yeah. I'm just expecting to get critiqued to death on this because-

Anna Salamon:
Well, I mean, I think that is one of the things that happens if you say something clear - is people can respond. And that seems good too.

Geoff Anders:
It is notable, also: I'm usually okay being critiqued to death, or whatever, but, like - Okay. Useful. Useful. There's my narrative. I want updates. I... anyway. Yeah, you know. Okay, go ahead, say something.

 

LuliePlays: One thing I'm finding odd about this conversation about narratives is that there were accusations of demon seances -- I'm curious what @anna_salamon thinks about that stuff, or if it seems irrelevant to CFAR's relationship with Leverage

329zwydswr: hm. sounds like geoff is describing a sort of progressive escalation of narrative warfare. like a feedback loop of perceiving a slight narrative tilt -> tilt right back, to correct the scales -> other people push back -> repeat

Turbowandray: +1

habrykas: My current model is we are talking about Leverage from 2013-2017

habrykas: Which did not involve any demon seances

Turbowandray: (to clearer position/etc)

habrykas: By the time demon seances were happening, I think relationships between CFAR and Leverage had mostly come to a stop

spiracularity: Maybe your narrative was protecting you from feeling critiqued to death, but had costs elsewhere? (speculative, though)

DaystarEld: "One thing I'm finding odd about this conversation about narratives is that there were accusations of demon seances -- I'm curious what @anna_salamon thinks about that stuff, or if it seems irrelevant to CFAR's relationship with Leverage" +1

z1zek100: +1



Anna Salamon:
Yeah... Um, sorry, somehow I'm now wanting to read the sentence that has my name in it from the chat.

Geoff Anders:
Yeah, it's really hard to navigate when people are saying interesting things.

Anna Salamon:
Oh, yeah. They want us to talk about different stuff from what we're talking about, but I think I should reply to you first. So I'm going to do that, and then we can go there, or not.

Yeah, so... I agree... Like, my perception... I agree that I remember a bunch of concrete things from the early years, like 2012 through 2015 or something - 2014, I don't know - in which Leverage was doing a bunch of things that I would call trying to collaborate. Especially things in the vicinity of "trying to collaborate on concrete projects that were meant to bring resources to both groups", or whatever.

Geoff Anders:
Right. Yeah. Yeah.

Anna Salamon:
I recall Leverage as being extraordinarily, by local standards, open to and initiative-y in proposing things of this sort.

Geoff Anders:
Right.

Anna Salamon:
And I believe you, that you would have been receptive to such initiative from other groups [inaudible].

Geoff Anders:
Really?! Wow! Really? Okay.

Anna Salamon:
My brain really thinks I should be adding some sort of "but" afterward, and I keep looking to see what the content of the "but" is, and my brain's like, "But... I don't like the rearrangement of egos that happens when I say that sentence!" And I'm like, "Great, brain. Can you give me something with more content?" And my brain's like, "I think there's probably something to say. Didn't Leverage..."

And now I'm going to try it, because there's sort of "epistemic status: kind-of-like rationalization".

 

10. (2:41:45) "Stealing donors", Leverage's vibe, keeping weird things at arm's length, and "writing off" on philosophical grounds

Anna Salamon:
There was that incident at the beginning where everybody was very upset because Leverage was "trying to steal our donors" or something - like, going down some sort of donor list.

Geoff Anders:
Yeah-

Anna Salamon:
What was that? I don't remember it.

Geoff Anders:
Yeah, yeah. So it's-

Anna Salamon:
Anja or somebody brought it upthread, is why I remembered it at all.

Geoff Anders:
Yes.

Anna Salamon:
But once she did, I was like, "Yeah, yeah, yeah! That's how we knew Leverage was bad and out to get us!"

So, I actually don't think... Sorry. You can reply to that in a second. I'm just going to keep babbling for a second, though.

Geoff Anders:
Yeah.

Anna Salamon:
Actually I think there's something different, though. I think Leverage came - and by "Leverage" in the beginning, I mean you, Geoff - came in with a different pattern of something that I think a lot of people had an immune response to.

Geoff Anders:
Yeah. I agree.

Anna Salamon:
And a different pattern of something - it was partly the particular way that you weren't into a particular kind of materialism. The "pole through the head" thing - I can say this more slowly for people to follow it. It was partly that you... I don't know, you told me a story when we first...

Sorry. I guess I'm inclined to just say all the things, but I-

 

WaistcoatDave: I think there's a toxic narrative that critiquing is entirely fact based and therefore no time or focus should be put on HOW the points are raised. It's clear the consequences of that has put Leverage staff on edge about entering conversations

habrykas: Content I remember: "Leverage reached out to a bunch of people on the top MIRI donor list, and some people were upset about that."

habrykas: I think the people who were upset about that were kind of wrong. But I thought they were right for a while

sowgone: Cartesians vs Dennettheads

WaistcoatDave: and ultimately open conversations in the face of that fear and where that's supported by the community, is how things are moved forward healthily.



Geoff Anders:
I have interjections, but you know, it seems good. The thing I, you know-

Anna Salamon:
You told me a story when we first met, I can't even remember how it went, but it led me to think that you sometimes were, like, sort of more towards manic than most people. It was something about high school. You can fill in the details if you want, or ask me to, or we can skip it.

Geoff Anders:
Definitely energetic.

Anna Salamon:
It wasn't - It wasn't particularly [inaudible]. 

Geoff Anders:
People have accused me of being hypomanic at times.

Anna Salamon:
Yeahh... I was like, "Okay, this guy, I feel like he's..." Anyway, whatever.

That, plus the materialism... I think you came in with a thing that a bunch of us - not so much me, although I went along with it-

Geoff Anders:
Can I?

Anna Salamon:
But a bunch of us wanted- Mm-hm?

Geoff Anders:
Well, so, interestingly - I mean, we're getting to a bunch of things. So, my experience was that literally within thirty seconds of our meeting, you had written me off on philosophical grounds because I... I visited SingInst, I got off the plane, I came out, I got picked up by you, and we started talking.

And maybe it wasn't thirty seconds. Maybe it was sixty seconds. But literally the first thing that happened was that I was subjected to, in my perspective, a philosophical litmus test. I failed. And you were like, "Okay, not that one."

And so that... You know. What do you think about that?

Anna Salamon:
What do I think about that? I feel-

Geoff Anders:
Do you remember it?

Anna Salamon:
I don't remember it, but I believe you.

Geoff Anders:
Okay.

Anna Salamon:
Um. Yeah. If you want to fill in more details, it might help me remember it. But probably [inaudible] gotten you from the airport.

Geoff Anders:
No, no. It's- No, I don't remember.

Anna Salamon:
Are you sure it was me, for that matter?

Geoff Anders:
Yeah, absolutely. Yes.

Anna Salamon:
Great. So, I don't feel skepticism about it. So I do have feelings about it, despite not remembering it.

Geoff Anders:
Yeah.

Anna Salamon:
I remember later on, in what I assume was that same visit, in my head, we were sitting.

But what year was this? In my head, we were sitting in the house that I was living in in 2011, 2012 in Berkeley.

Geoff Anders:
I do remember talking in that house a couple times. This was 2011. Yeah, this was in March 2011.

 

sowgone: I think I (Rob Bensinger) wrote you off immediately on philosophical grounds. (basically 'Geoff is Cartesian + Geoff is confident')



Anna Salamon:
Yeah, but Rob Bensinger was not who got you from the airport, even though he's saying that he remembers something similar.

Geoff Anders:
No, I don't remember Rob Bensinger being at SingInst at the time.

Anna Salamon:
Rob wasn't even around yet.

Geoff Anders:
No, this was right before Luke Muehlhauser came in as the ED of SingInst, which turned into MIRI.

 

habrykas: I think I docked Geoff a lot of points on philosophical grounds

DaystarEld: "By the time demon seances were happening, I think relationships between CFAR and Leverage had mostly come to a stop" - Ah, this makes sense, but is the point of the conversation then to try to clear the air about Leverage, or to try to help people understand why it diverged from the rest of the community? (I only joined the call in the last 30 minutes)

habrykas: I still engaged with Leverage a lot though, so definitely not "writing off"

spiracularity: On demons: Look, I think Leverage's internal phrasing around it is toxic BS that is kinda bad for some people. I personally have never had anyone flip-their-shit when I explain social contagion, so feel free to reach out for phrasing-for-normals help if you want.

habrykas: But I think I also stand by that docking of points

sowgone: I wasn't involved then, I'm just saying I am a MIRI person who wrote you off later

habrykas: Oh, that's my docking of points

habrykas: Yeah, that's what I said too

spiracularity: (I also think it's bad and panic-inducing in ways that aren't true; analogies with Roko warranted.)



Anna Salamon:
Yeah. Sorry, I'm just going to randomly respond to parts of the chat without having read [inaudible]-

Geoff Anders:
Fine, fine, all right, well.

Anna Salamon:
Ollie Habryka is all like, "But I think I stand by that docking of points." I haven't even read his context, so I'm just responding. Personally, I stand by docking of points or something? Maybe?

What I don't stand by is a "having written off". And the reason I don't stand by the "having written off" is to my mind, it resembles this thing I'm calling narrative addiction.

Geoff Anders:
Yeah.

Anna Salamon:
So, like, to respond to Geoff's question from a minute ago: Geoff, you were all like, "How do you feel about it now?" How I feel about it now is I try to turn my mental eyes toward this memory, which is somehow located in my chest - maybe this is too much phenomenological detail-

Geoff Anders:
Yep-

Anna Salamon:
And I feel a cringe. I feel a mental cringe, or something, as I look. And I think the reason for it is that to my mind, there was an element of something like cowardice mixed into my response.

Geoff Anders:
Really?

Anna Salamon:
Where it wasn't just my own authentic response to you seeming to me to be philosophically wrong about a thing. Sorry, and I don't remember the event that you're talking about-

Geoff Anders:
No, no, that's fine. That's fine.

Anna Salamon:
So I'm thinking of other events that are sort of in the same reference class. But to my mind, it was partly that I didn't want the group to reject me. And that this was sort of part-

Geoff Anders:
You mean, your group?

Anna Salamon:
My group. I didn't want my group to reject me.

Geoff Anders:
Okay.

Anna Salamon:
And so I was participating in having more of a "that's weird and I need to keep it at arm's length" reaction than I would have had in my own person. As opposed to a "That's interesting! What's with a smart person like you seeming to believe something dumb! Let's have a conversation!"

Geoff Anders:
Right.

Anna Salamon:
Which is an entirely different reaction that doesn't have the dissociating arm's-length this-guy's-a-weirdo component.

Geoff Anders:
Okay. That's-

Anna Salamon:
Like, when you say "written off", it's more like that, I think [inaudible]-

Geoff Anders:
Yeah. No, that's super interesting because, I mean... Some of the people in the chats are talking about docking points or writing people off on philosophical grounds. I think there's something like it. I think it can make sense to dock people points. Maybe in some cases writing off? It's, like, a bit extreme, but I could... But there's something like... There's, like, a broader... I'm sort of responding to people in the chat also, but it's like... There's just this large corpus of philosophical works and texts from the history... You know, people are going to be writing me off for dualism. Note, I'm not even a dualist. But it's something like, I think it's usually not correct to see something like a point-sized view or something like that, and then write something off.

I also think there can be trends, but it's something like... it would be good to crystallize what exactly is wrong. Like, if you could make a strong argument from... It's like when people say, "Oh, well, so-and-so believes in God, therefore they're bad intellectually or something." And you're like, "What about Newton?" And they're like, I don't know, something.

It's like, if you can draw a strong link between "believes X" and "has various other features" then, like, sure. But I think that empirically that's frequently not the case.

And even if there are general trends, you can recognize when there's something different happening with a person and look into it more. So. Yeah.

But I appreciate the thing you said. I mean, I, yeah. It's... Why did the group need to write off, like whatever, I mean, we got...

Anna Salamon:
Yeah-

Geoff Anders:
Robby Bensinger said "Cartesian", so, like, Cartesianism is really bad. And like, I know dualism was a really big deal. Like... why?

I mean, it is cool sort of in a way to have communities differ on the basis of really basic philosophical propositions, because, like, at least aesthetically you're like, "Oh, you know, there's deep philosophical commitments." But I feel like we should be able to go meta and notice them and be like, "Okay, deep, different philosophical commitments-"

Anna Salamon:
Well-

Geoff Anders:
Yeah, go ahead.

 

11. (2:49:42) The value of looking at historical details, and narrative addiction collapse

fiddlemath: Anja notes a significant incident at the EA summit, 2014, where Geoff and Anna got into a ... pretty heated? ... argument about dualism. Seemed very emotional to both parties, given the degree of abstraction of the object-level conversation.

habrykas: Yeah, that was a big deal

habrykas: It caused me to have nightmares, and was a big component of me distancing myself from Leverage

habrykas: (Nightmares because I was interning at Leverage at the time, and it made me feel very alienated from my environment)

habrykas: (And felt like some kind of common ground was pulled out from under me)

DaystarEld: "(Nightmares because I was interning at Leverage at the time, and it made me feel very alienated from my environment)" - /hug

LuliePlays: "On demons: Look, I think Leverage's internal phrasing around it is toxic BS that is kinda bad for some people. I personally have never had anyone flip-their-shit when I explain social contagion, so feel free to reach out for phrasing-for-normals help if you want." - yeah I love Leverage's psych tech from what I've experienced of it (workshops), it just seems like it would be odd if those weird-sounding things were not explicitly addressed + I'm curious how Anna views the surrounding context of that post

z1zek100: A bunch of people were obsessed with that conversation for whatever reason. Will MacAskill repeatedly cited it as his most definitive critique of Leverage, for example.

z1zek100: I was there and I really never followed the strength of the response.

habrykas: I think that's kind of fair. I do think the conversation exposed some really big philosophical holes.

z1zek100: (This is Kerry btw)

larissa24joy: I'd find it really helpful to understand the emotional charge that seems to be associated with that (dualism) conversation for people?

habrykas: I think naturalistic reductionism is a really major foundation for my epistemology, and I do think it implicitly supports a lot of other common assumptions, and whenever I dig into a bunch of my beliefs and plans, I do think a lot of them are pretty based on naturalistic reductionism

LuliePlays: and I'm finding myself wondering why they're talking about the past stuff (which I find interesting) when there's questions about the recent stuff and what CFAR's relationship to Leverage is now

fiddlemath: Anja says she'd like to try to explain...

habrykas: And I think it's pretty sensible. I do indeed think that an epistemology without naturalistic reductionism feels a lot less grounded, and I feel a lot more scared of having a group of people who "take ideas seriously" without being grounded in that way

larissa24joy: I'd love to hear from Anja on that if she didn't mind and we had time?

sowgone: I think Cartesianism is kind of a hard-to-escape trap with lots of implications, like a dangerous meme that affects a lot of other stuff. I don't think it's morally bad to be Cartesian (you got trapped!), but I think there are so few Cartesians + it's so bad, that I tend to write people off. Like a smart very devout Christian

AgileCaveman: imo, people tending to write each other off is a common pattern. I feel like i have been written off because of my theories of social contagion. I also wrote people off because of a rejection of biological substrate

habrykas: Agree that people trying and tending to write each other off is a common and kind of bad pattern

fiddlemath: The way I would have known not to reveal myself socially as a cartesian/dualist has to do with the tone and volume of Eliezer's writing on the topic

sowgone: (BTW as a *separate* thing, I did the 'Leverage is low-status so I want to avoid them', which I don't endorse and is bad.)

fiddlemath: Plus I remember something in the sequences about epistemic spotchecks being good in particular on philosophical topics

zwydswr: "I think naturalistic reductionism is a really major foundation for my epistemology, and I do think it implicitly supports a lot of other common assumptions, and whenever I did into a bunch of my beliefs and plans, I do think a lot of them are pretty based on naturalistic reductionism" - how is it a foundation for your epistemology?

z1zek100: @habrykas: That makes sense to me. I think I had a reaction that was something like "that seems wrong to me, but I don't understand why Geoff thinks it and there's probably an interesting story there at least" which involved not being very threatened about it in some way. I guess I generally expected people to have that reaction, but it could be that I don't have the same emotional valence on this topic or something.

 

Anna Salamon:
Yeah. I would also... Sorry, Larissa and the chat, yeah, I'm also pretty interested if anyone on the chat wants to explain about the emotional charge that it had for you personally - or, sorry, not for you personally but for the person in the chat who might explain.

Yeah, people are wondering about the current stuff. Lulie, I think I'm talking about the past stuff partly because Geoff mentioned in pre-call wanting to focus on the past stuff and partly because it's genuinely easier.

Geoff Anders:
Oh, we also, the thing is...

Anna Salamon:
Right?

Geoff Anders:
We also - we're at, like - we're at the 1:30 mark. I mean, so we did say we'd go sort of this long. I'm willing to go a little bit longer, but maybe we should shift gears and be like, "What's the way forward?" Because - is there a way forward, you know...

Anna Salamon:
I... I'm... yeah. I think we're all hurtling forward in time. So there's clearly some ways forward-

Geoff Anders:
Love it.

Anna Salamon:
I don't know toward where. I mean, personally, I'm really into history and conversation and sharing all the details. And I think once we do that, we'll find a real way forward to somewhere good. And if we don't do that, I don't trust our ability to reason or get anywhere.

I also think sharing all the details is the opposite of narrative addictions. Or, like, it'll help dismantle them. And I think it's in fact compatible with morale or with the good parts of morale, in the long run. I may be wrong.

I could be wrong, but I don't think I'm wrong.

Geoff Anders:
I think I agree with like 80% or 90% of that or something. It's like, I think there are some things... It's not like everything needs to be hashed out. I think there are problems other than ones addressed by the history. But I do think history is really undervalued, to put it that way. And I've gotten value from looking into some of the things. Yeah, I...

Anna Salamon:
I mean, I think we should-

Geoff Anders:
Go ahead.

Anna Salamon:
Yeah. I guess I'm of the hopeful-to-my-mind opinion that many of the things I've been calling "narrative addictions" and "narrative pyramid schemes" and so on are sort of on the point of collapse, or have already begun to collapse. And that maybe we're heading into a place where we can have real conversations across people, again.

Geoff Anders:
I'd love that. I think that'd be great.

Anna Salamon:
Time based more in a grounded thing and less in an "I'll support your crazy scheme if you support mine" kind of deal, which I don't think is how we build, in the real world, for the actual situation.

 

12. (2:52:23) Geoff wants out of the rationality community; PR and associations; and disruptive narratives

Geoff Anders:
Yeah... Something like that seems generally good. But I do want to say something like, there's a way that I feel like I'm... You know, I sort of want to say, like, "How do I get out of the rationality community?"

Like, I've never considered myself a member. I don't read LessWrong. People are like, "Did you see the whatever?" and the answer is like, "No, no I didn't." And I'm running an organization and we're going to do all sorts of other stuff. And so it's not that I don't see value. There is potential value. But I... it's like the whole thing is sort of a nightmare in a certain way.

Like, how many people inside the rationality community - like, sum up, do an integral or something, on their attitudes towards me; and it's like, that doesn't give them a right to actually interact... Like, the fact that someone's mad at me doesn't by itself cause it to make sense for me to engage.

I do think some of the things are my fault. Absolutely. I - theories about various things... Like, sounds like I said some bad stuff about you in the past in a way that you didn't find toward. Like, I...

Anna Salamon:
Well, but sorry. I was mentioning that partly as an example of ways that I used to be doing a thing that I think was not that great.

Geoff Anders:
Well-

Anna Salamon:
I had mixed feelings about it.

Geoff Anders:
No, but I guess what I mean to say is, like, you have a claim or a stake. And then maybe the rationality community has some stake.

Like, definitely one thing I've read a little bit about was, seemed like people were doing a memory consolidation sort of event. Like, I think that's great. I would love the rationality community to do more of X, whatever that is.

And then I feel like it makes total sense for you to want to go into all the details, because you're much more closely tied to the rationality community. You're part of it, a leader, et cetera. I don't know if you would go for leader, but "leader", definitely.

How do I set up my relationship properly so that, like... I don't want people thinking that I'm going to, like... Like, there was one comment somebody directed me to - actually, I think, was it Spiracular? I think Spiracular's here in the chat.

Anyway, there was some comment that was like, the person's like, "I'm going to make a stand. There's all this fear, and somebody needs to stand up. And so it might as well be me." And I'm like, "Well, that's good." You know, I like that. But it's really different than my reality.

And so I don't really know how to navigate this sort of circumstance. Do you have any thoughts?

 

spiracularity: If you critique the rationalists, you are one of them. :P

spiracularity: It's how we roll.

sowgone: BTW I'd love to have a LW thread/post where we can argue about Cartesian (esp. 'trust first-person stuff a lot') stuff

AgileCaveman: you used rationality to critique rationality, it's inside of you now. A classic blunder

sowgone: ('write off' for me doesn't mean 'war' / 'don't talk', it means they're a friend I don't expect to do amazing things)

benitopace: "you used rationality to critique rationality, it's inside of you now. A classic blunder" - hahahaha

sowgone: :o

z1zek100: "you used rationality to critique rationality, it's inside of you now. A classic blunder" - NotLikeThis



Anna Salamon:
Sorry. So I think you just said, "How do I get the rationalists to be bored by me, or to otherwise leave me alone?"

Geoff Anders:
It's not "be bored by me". It's like - I think Spiracularity just said, "If you critique the rationalists, you're one of them"! Yeah. That's a viewpoint. Not one I endorse...

Anna Salamon:
Yeah, to my mind you seem very obviously part of this broader community. Partly because, yes, you critique. Partly because you recruit from. Partly because-

Geoff Anders:
Well, "critique or recruit from" - like, but when? Like, what years? Like, not Leverage more recently, right?

Anna Salamon:
Yeahh.

Geoff Anders:
And like- Yeah.

Anna Salamon:
I don't have a good solution for you here.
 

DaystarEld: To me "Why is dualism bad?" can be answered pretty easily with something like "It can reliably lead to people doing things like demonic seances." I'm not trying to be fighty by saying that, it just seems like a pretty straightforward relationship in my experience/observations of distinguishing philosophies that are more or less likely to go down dangerous paths

DaystarEld: (I get that "demons" can just be a language handle on psychology stuff, but what was described is much more than just a semantic handle)

z1zek100: "To me 'Why is dualism bad?' can be answered pretty easily with something like 'It can reliably lead to people doing things like demonic seances.' I'm not trying to be fighty by saying that, it just seems like a pretty straightforward relationship in my experience/observations of distinguishing philosophies that are more or less likely to go down dangerous paths" - That kind of implies that once you believe in dualism all bets are off or something and I don't quite see why that ought to be the case.

habrykas: "I think naturalistic reductionism is a really major foundation for my epistemology, and I do think it implicitly supports a lot of other common assumptions, and whenever I did into a bunch of my beliefs and plans, I do think a lot of them are pretty based on naturalistic reductionism" - My guess is this would take a bit longer to explain. Short answer is something like "It's what allows me to have a shared foundation for all the different methods of knowledge gathering that I use, from emotions to scientific experiments to expert deference, and is something like the foundation of how I sanity-check different parts of my map against each other"

329zwydswr: how was it different than a semantic handle and how is that related to dualism?

AgileCaveman: i am SUPER interested in learning more about demonic seances. I am not at all turned off by the label "demons." I am interested in "did they work?"

habrykas: Also, like, a bunch of your staff was like working at CEA until 2017

larissa24joy: 2 of the 4 staff (me and Kerry) were at CEA until very early 2019.

spiracularity: I was making a pun off of the fact that we use the term for both the philosophy stance, and the community, though. (But also the community kinda loves its best critics.)

habrykas: So at the very least you shouldn't be surprised to be understood as part of the community if you are literally sharing staff with one of the central community organizations

habrykas: Sorry, meant 2019

spiracularity: Like, the fact that these are different didn't escape me.

spiracularity: (end explaining the joke)

larissa24joy: So if you're ever involved in EA or Rationality you can never leave?


Geoff Anders:
Okay. Well, I beseech the powers that be inside of rationality - Anna Salamon, Oliver Habryka, others - Robby Bensinger, et cetera, Ben Pace: I'd like some sort of reasonable exit strategy. I think you guys don't, like... I just get crazy messages where it's like, people don't want me anywhere around and then they don't want me to leave. That's not cool. That's, you know-

Anna Salamon:
The same people, or different people?

Geoff Anders:
I, you know, I'm not even in it enough to know. Okay. And so-

Anna Salamon:
Well, sorry, did you get a message from some person Bob who's both saying "stay around" and "get out", or do you get a message from Bob saying "stay around" and Carol saying "get out" or whatever?

Geoff Anders:
I think that... I mean, I'm loath to point to particular individuals...

Anna Salamon:
You don't have to name them.

Geoff Anders:
... But yeah, I think I'm getting mixed messages from at least one person! And it's something like... There should just be - I mean, this gets to like a thing that I think is a bigger issue with the rationality community. Like, there's all this stuff about Leverage managing its PR - like, PR matters in the actual world. Like, we want to do projects with people. They don't want to - you know, it diminishes the likelihood of their wanting to be associated with you if-

Anna Salamon:
Yeah, I... have an important disagreement with you here that would take-

Geoff Anders:
No, I agree it would take a while, but something like: How do we... I think there's some way that people want, like... I want something a bit more concrete. I want somebody to figure out what I'm supposed to do, and I want to have something I can interact with.

 

sowgone: I'd love to see a Geoff-reply to Anna's https://www.lesswrong.com/posts/SWxnP5LZeJzuT3ccd/pr-is-corrosive-reputation-is-not [LW · GW]

benitopace: "I'd love to see a Geoff-reply to Anna's https://www.lesswrong.com/posts/SWxnP5LZeJzuT3ccd/pr-is-corrosive-reputation-is-not" [LW · GW] - +1

habrykas: As long as you have power over people, they will want you to be held accountable somehow, and want you to play by their norms. And you've had large amounts of power over people for many years.

z1zek100: Over Rationalists specifically?

habrykas: And EAs

z1zek100: That could be true, but it would be a bit surprising

spiracularity: "As long as you have power over people, they will want you to be held accountable somehow, and want you to play by their norms. And you've had large amounts of power over people for many years." - +1

habrykas: I mean, Leverage shared staff with CEA for like 4 years, and was intimately involved with almost all EA Global conferences.

habrykas: So why would it be surprising?


Geoff Anders:
Look, if it's a mob, if there's no way to stop it, right, like, people, I would appreciate a little bit of clarity on that. We could have you and Habryka and other people just get together and issue an official formal statement that says the rationality community is going to continue doing this until they happen to personally feel satisfied. And then I can let my people I work with know that I've got this rival/enemy that is planning to constantly interact with me and there's really not a way out. And then I can at least be transparent to them about that.

Anna Salamon:
Ah. Yeah, sorry, I feel like there's so many threads and-

Geoff Anders:
Yeah, no, I mean it's - and I don't mean to say that I wouldn't put in time to addressing problems or concerns and so forth. But there's some people for whom the rationality community, like, is the world, and, like, that's the thing they care about. And I think that's fine. I think that's, like - people have communities. Communities seem fine.

Anna Salamon:
Yeah, sorry, you're adding more and more to your thing. [crosstalk]

Geoff Anders:
But - Okay, cool. So it's too hard, alright, alright. Well, it seems like there's - so, we didn't manage-

Anna Salamon:
Oh, I'm going to try to reply anyway, I was just [crosstalk]-

Geoff Anders:
Go ahead. Sure, alright.

Anna Salamon:
-how it's too many threads to reply to you and you were like, "Here's some more threads!" And I'm like, "Okay, great."

Geoff Anders:
Alright.

Anna Salamon:
"Here my new preface again." "Here's some more threads!" And I'm like "Gaahhh!"

Geoff Anders:
I'm happy to let you say a thing.

Anna Salamon:
I don't know. It seems to me like there's a bunch of different things going on. So, personally I feel like Zoe writing that public essay is sort of an invitation for me to take an interest in what happened to that part of Leverage, and an invitation to a bunch of other people.

Geoff Anders:
Yeah.

Anna Salamon:
An invitation from Zoe. Like, otherwise I'd be like, "Oh, well, that's not exactly my business." Like, I don't know whether I would or wouldn't be that way; but I feel like somebody who was in Leverage in 2018, 2019, sort of invited me and a bunch of other onlookers in, in my opinion.

Geoff Anders:
I agree with that. I accept that.

Anna Salamon:
So that's part of it. Then there's a different part that is something like: Everybody has the right to free speech and free thought. Particularly LessWrongers. And I'm not going to be trying to rein them in, and probably neither is anybody. And if they do, I'm probably going to oppose them. And that might get in the way of your PR, as you put it. And I actually feel bad about that, but I don't think it's my problem. Or, like, I don't plan to take any interest in it. (Like, I'm interested. I get to watch!)

Geoff Anders:
No, I think that's fine. The-

Anna Salamon:
And then there's a different thing, which is... Okay, here's my most controversial belief on this topic. Mostly not vis-à-vis you.

So let's talk about Eliezer for a second. Eliezer has huge communities of people who hang out complaining about him all the time. Not really, that exaggerates slightly - but, like, Sneer Club and so on.

Geoff Anders:
I was thinking about this. Yeah. How does Eliezer get out if he wants to?

Anna Salamon:
He can't! Sorry, he could, but he won't be able to. Here's my story about why it's happening to Eliezer.

Geoff Anders:
Okay.

Anna Salamon:
As I said, this is my own - I don't know, I haven't put this take out there before. I don't know if it's right.

Like, it seems to me that Eliezer... Let's talk about narratives. Lots and lots of people have personal narratives that are something like, "I'm going to be a school teacher and then I'll help a bunch of kids and that's good. And that makes me a good person and means I did something useful with my life." Or whatever. People have all kinds of, like, little local narratives.

Eliezer put these sequences out there, and they're just, like, a massive vortex that disrupts a huge number of people's local narratives.

Geoff Anders:
Right.

Anna Salamon:
And then some of those people hang out and get really upset.

Geoff Anders:
Yes.

Anna Salamon:
And this is the sense in which... And there's a co-dynamic here... I don't know. Look, I'm probably going to go to various kinds of Said Something Wrong Out Loud On The Internet In Front Of Fifty People Hell for this remark. But, like, it seems to me that Eliezer's narratives... I, sorry, I love and admire Eliezer more than almost - like, he's one of my favorite people in the whole world. But also, I imagine when I read the sequences that they've got kind of an edge, and that it's not just a presentation of facts, that there's sort of a reinforcement of his own narrative in there; I could be wrong about this.

Geoff Anders:
Yeah.

Anna Salamon:
And I think the reinforcement of his own narrative is sort of part of why it erodes other people's narratives so much.

Geoff Anders:
Yeah.

Anna Salamon:
And it's part of why a bunch of them then try to get vengeance.

Geoff Anders:
Yeah. Yeah, yeah. Totally agree.

Anna Salamon:
I think there's something a little like that about you and the rationality community that's happening on a somewhat smaller scale. And this is part of what's up with it. Although I think the thing about "Did Leverage actually harm some people, and were we invited into that room to try to help sort it out and make sure that there's, like-"

Geoff Anders:
Yeah, yeah. Right.

Anna Salamon:
"-restoration of things that those people might want or need," is a different and pretty legitimate point.

So those are the three things that I have.

Geoff Anders:
Okay. Yeah. Well, so, I agree with the harms and being brought in-

Anna Salamon:
Oh wait, sorry.

Geoff Anders:
Oh, go ahead.

Anna Salamon:
There's a fourth thing I really want to say. I'm sorry. Is that okay?

Geoff Anders:
Yeah.

Anna Salamon:
The fourth thing is: Rationality was never trying to be an organization in the way that Leverage is trying to be an organization. At least, I don't think so. Like, I think if I had been like, "Hey, Geoff, your Leverage is messing up my life. How do I get out?" You would have tried to be the responsible person at the front who could have had an answer for me.

Geoff Anders:
Right.

Anna Salamon:
Rationality is really not trying to be the kind of thing that can have a responsible person at the front who has an answer for people.
 

DaystarEld: "how was it different than a semantic handle and how is that related to dualism?" - From my understanding of Zoe's post... if people treat semantic handles for psychological phenomenon as real enough to be affected by using crystals, it starts to leave the realm of semantic handles and enter the realm of irrationality

sowgone: motte: 'Geoff, stop doing abusive actions X, Y, Z.' bailey: 'Geoff, stop being weird. And given that you've ever been weird, stop trying to improve the world.' ?

329zwydswr: "It's what allows me to have a shared foundation for all the different methods of knowledge gathering that I use" - ah ok this makes sense. hard to discuss here, but i want to note that this i think is actually compatible with some kinds of dualism. eg saying "beliefs aren't material things" (or maybe easier to agree on would be, "algorithms aren't material things") is compatible with "everything that happens, happens in accord with physical law"

habrykas: "ah ok this makes sense. hard to discuss here, but i want to note that this i think is actually compatible with some kinds of dualism. eg saying 'beliefs aren't material things' (or maybe easier to agree on would be, 'algorithms aren't material things') is compatible with 'everything that happens, happens in accord with physical law'" - Yeah, agree, though with many many caveats. I have a tiny steelman in my mind that's trying to map between Tegmark 4, Platonic Realism and Cartesianism. Though I've never gotten it to actually fully typecheck and make work.

329zwydswr: "as real enough to be affected by using crystals." - well... this is clearly a mistake, but does this really argue against they (the psychological phenomena) being really real?

329zwydswr: like, is talking about "having beliefs" a form of dualism? seems fairly dualistic, in that beliefs aren't material things. doesn't have to be irrational / nonsensical / crazy

DaystarEld: "well... this is clearly a mistake, but does this really argue against they (the psychological phenomena) being really real?" - I think there's a certain type of thinking that *leads* to those mistakes, and we should know that by now

329zwydswr: "Though I've never gotten it to actually fully typecheck and make work." - this problematic seems to be something in the air, there's maybe an idea here who's time is coming.

AgileCaveman: One thing that's somewhat frustrating about the question of "did leverage harm people" is the misunderstanding of "harm base rates". I have a couple friends severely harmed by rationality or working at normie tech companies. there is a high baseline of people being harmed in the world and the question of whether Leverage is above this baseline is harder than "did it harm people"

habrykas: Agree with this. I think establishing baselines is super important.

329zwydswr: "a certain type of thinking agreed" - i'm curious wat it is, and it doesn't seem like dualism fits, but maybe i'm missing the point.

DaystarEld:  "One thing that's somewhat frustrating about the question of "did leverage harm people" is the misunderstanding of 'harm base rates'. I have a couple friends severly harmed by rationality or working at normie tech companies. there is a high baseline of people being harmed in the world and the question of whether Leverage is above this baseline is harder than 'did it harm peopl'e" - Agree with baselines too. But one of the reasons (for example, sorry to use the C word) cults are bad is that they can do targeted harm in excess to baseline organizations to a few people beyond what a randomly selected control group might

DaystarEld: Even as most people have positive or neutral experiences


Geoff Anders:
Okay. I've got some thoughts. We still have viewers, so we're still going.

So, first I should say, I do think that there is a legitimate public interest in the things that Zoe said. I think that those be pursued. I think the people that want to pursue them are doing the right thing, or doing an acceptable thing. And so I'm not trying to push back on that at all.

On the "Eliezer created the narrative and is still reinforcing it, and then the people are stuck inside it and it erodes their narratives thing," if there's a thing that's similar that I'm doing, I want to find it and stop. I hereby offer that I will spend time doing that. And I would love to have one or more interlocutors who could help to try to crystallize what it is, so as to make this process more effective. I don't know if Eliezer believes that something like that's happening. I find it plausible that something like this is happening, even though the interactions with me are obviously less thoroughgoing than with Eliezer from the rationality community or from rationalists.

So, but I hereby offer to try to resolve that. And you know, I'm not sure that it's there, but if it's... I'll look, I'll do this myself. I don't need other - and I would like help from relevant other people who maybe have a bit more insight into how people are thinking about it, because one of the things that's harder for me is I don't in fact track the hive-mind of LessWrong, if it is a hive mind, or the non-hive-mind of many, many individuals who... something, like, I don't know.

But, yeah, so that sounds good. And then - you said a fourth thing that seemed super important. What was the fourth thing?

Anna Salamon:
Not trying to be that kind of thing, that Leverage I think was trying to be, where there's somebody at the front who can take calls.

Geoff Anders:
Ah, so here's-

Anna Salamon:
CFAR maybe is, LessWrong maybe is, but, like, the rationality community, no.

Geoff Anders:
Well, but this is where I think we have a disagreement, but one that's not like a deep philosophical disagreement about Cartesianism or something like that. It's a disagreement about social technology.

I think that the correct form for communities involves community leaders and involves walls of particular types. I think that that's actually from the nature of a community. I think communities - I'm not sure about this, but I think communities come together over shared strengths and shared weaknesses. The community offers an opportunity for the people to, like, collect and grow the shared strengths, and to help protect the people from the shared weaknesses.

And then I think that that's - like, the reason you need walls is because there is the shared weakness and the walls help to protect the community. And then, in terms of leaders, I think that communities, because there are certain types of shared weaknesses and blind spots, they're really subject to predators.

And so I think something like "Communities need to have leaders," and the challenge of the leader is to do their best to try to handle the predators.

But this is a claim about the nature of communities. And as a plug for my Twitch stream, I'm going to be talking about coordination research, which includes looking at different forms of social technologies, including communities. So I'm going to be exploring this.

But I think that there's actually, like, a positive claim here, not a normative claim, not a philosophical-whatever claim, there's a claim about a particular social form and its correct thing. And so maybe there's a fruitful line of discussion to have there.

Anna Salamon:
Yeah. I'm pretty into this conversation.

Geoff Anders:
Great. All right. Well, I feel like we had a really natural breaking point. All right. It took a little bit longer, and I apologize to everybody for the technical difficulties. Those will get better.

Okay. I think this was fine, right? All right. Good. Okay! Awesome. Thanks, everybody.

I will see... Maybe people will be less interested in my philosophical whatever. I don't know how we're gonna do better. This was good. Anyway, whatever. [crosstalk] I'll send out a tweet. All right. I'll-

Anna Salamon:
I appreciate you chatting.

Geoff Anders:
Great. All right. I'll see you later-

Anna Salamon:
And all the people in the text.

 

z1zek100: This seemed good

sowgone: this conversation was amazing, thank you

z1zek100: Love you both

329zwydswr: thanks

habrykas: I appreciate this conversation happening.

larissa24joy: Thank you for doing this Anna and Geoff!

LuliePlays: Thanks Geoff and everyone!

Turbowandray: bye

habrykas: Sorry for not being able to fix the audio things

linacalabria: +1

Turbowandray: ty!

WaistcoatDave: see you a

z1zek100: <3

DaystarEld: Thanks everyone

WaistcoatDave: see you all



Geoff Anders:
All right-

Anna Salamon:
Bye.

Geoff Anders:
Bye everybody.

97 comments

Comments sorted by top scores.

comment by romeostevensit · 2021-11-08T18:49:31.574Z · LW(p) · GW(p)

A dark pattern that I and many others unintentionally instantiate is overloading people's working memory with more considerations than the other person can keep track of, at which point they tend to become deferential on the topic in question and, with repetition, become deferential in general. This is a terrible dynamic that will reinforce itself if efforts are not made to push against it, IME. In practice what happens is that most people bounce off the person doing it, a few stick to them, and since the people who might critique it have left, the people sticking around now have an environment that agrees that deferring to this person makes sense.

This feels related to what is mentioned about leaders not talking and the prior art of 'Gurus tend to be allergic to one another.'

I'd also just like to note that I am overwhelmingly in favor of more public philosophy discussions.

Replies from: pktechgirl, RobbBB, SaidAchmiz, Benito
comment by Elizabeth (pktechgirl) · 2021-11-10T01:55:02.388Z · LW(p) · GW(p)

This is great and deserves a full post (ideally one incorporating the fact that reality is often more complicated than can fit in working memory).

comment by Rob Bensinger (RobbBB) · 2021-11-08T19:31:34.503Z · LW(p) · GW(p)

I'm reminded of a passage on 'teaching information versus arguing claims' in Paul Veyne's "Did the Greeks Believe Their Myths?":

Myth is information. There are informed people who have alighted, not on a revelation, but simply on some vague information they have chanced upon. If they are poets, it will be the Muses, their appointed informants, who will tell them what is known and said. For all that, myth is not a revelation from above, nor is it arcane knowledge. The Muse only repeats to them what is known—which, like a natural resource, is available to all who seek it.

Myth is not a specific mode of thought. It is nothing more than knowledge obtained through information, which is applied to realms that for us would pertain to argument or experiment. As Oswald Ducrot writes in Dire et ne pas dire, information is an illocution that can be completed only if the receiver recognizes the speaker's competence and honesty beforehand, so that, from the very outset, a piece of information is situated beyond the alternative between truth and falsehood. To see this mode of knowledge function, we need only read the admirable Father Huc's account of how he converted the Tibetans a century and a half ago:

"We had adopted a completely historical mode of instruction, taking care to exclude anything that suggested argument and the split of contention; proper names and very precise dates made much more of an impression on them than the most logical reasoning. When they knew the names Jesus, Jerusalem, and Pontius Pilate and the date 4000 years after Creation, they no longer doubted the mystery of the Redemption and the preaching of the Gospel. Furthermore, we never noticed that mysteries or miracles gave them the slightest difficulty. We are convinced that it is through teaching and not the method of argument that one can work efficaciously toward the conversion of the infidel."

Similarly, in Greece there existed a domain, the supernatural, where everything was to be learned from people who knew. It was composed of events, not abstract truths against which the listener could oppose his own reason. The facts were specific: heroes' names and patronyms were always indicated, and the location of the action was equally precise (Pelion, Cithaeron, Titaresius . . . place names have a music in Greek mythology). This state of affairs may have lasted more than a thousand years. It did not change because the Greeks discovered reason or invented democracy but because the map of the field of knowledge was turned upside down by the creation of new powers of affirmation (historical investigation and speculative physics) that competed with myth and, unlike it, expressly offered the alternative between true and false.

comment by Said Achmiz (SaidAchmiz) · 2021-11-10T00:13:27.239Z · LW(p) · GW(p)

This [LW(p) · GW(p)] seems directly related.

Replies from: Duncan_Sabien, romeostevensit
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T02:47:51.715Z · LW(p) · GW(p)

I think there's a big difference between spoken word and written word.

I agree that both can be unwieldy or overwhelming, but in my culture it's much worse to overload someone in spoken word; in written word at least there's a record and at least they could, in theory, take their time to address every point.

Or from another perspective: if we agree this is dark artsy (and I more agree than disagree), it's much less dark artsy in writing.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T05:35:50.896Z · LW(p) · GW(p)

Yes, that is true.

comment by romeostevensit · 2021-11-10T08:37:45.099Z · LW(p) · GW(p)

Strongly agree with the trends mentioned and think (ironically given the subject matter) that there is much more fruit here in figuring out what is happening when attempts at technical explanation fail to render better predictions but instead have selection pressure for side effects in social reality.

comment by Ben Pace (Benito) · 2021-11-10T00:16:07.690Z · LW(p) · GW(p)

+1 to this pattern being obnoxious

comment by AnonymousCoward02 · 2021-11-12T21:15:34.396Z · LW(p) · GW(p)

A few things.

  1. I'm a high-karma LW member and I created an anonymous account to say this for reasons given below. Trust me on that or don't, my arguments should stand on their own.
  2. Way too much of this kind of self-obsessed community gossip has dominated LW in recent weeks. This stuff demands highly disproportionate attention and has turned LW into a net negative place for me to spend time on.
  3. This Leverage drama is not important to anyone except a small group of people and does not belong on LW. Perhaps the relatively small group of Bay Area rationalists who are always at the center of these things need to create a separate forum for their own drama. Nobody outside of Berkeley needs to hear about this. This sort of thing gets upvoted because tribal instincts are being activated, not because this is good and ought to be here.
  4. I have a much lower opinion about literally everybody even tangentially involved in this whole thing, even Anna Salamon for making the extremely bad PR choice of getting herself and her organization sucked into a completely avoidable vortex of bad publicity. At this point I am not sure that CFAR has created any value at all in recent years, all I know is that there are some vague and impossible to pin down connections to some extremely terrible-sounding people and situations. This is intended mostly as a statement, from an uninvolved bystander, about how bad the optics are here, and how much it's negatively impacted my own subjective impression of CFAR and the Bay Area rationality community at large.
  5. If you disagree with the above and really really feel like you need to post a top-level post about some kind of community drama, then please at least try to do a good job on it. Separately from the frequency of these posts is the issue of quality and volume. Duncan Sabien's multiple recent posts and this incredibly long and time-consuming transcript are extremely low-effort and low-quality; the former is badly written and the latter is just a transcript. If you felt like you needed to post this you could have at least provided a short summary of major points so people could determine whether they needed to read it.
  6. You might say "Nobody is making you read it." That's true but misses the fact that gossip activates tribal reflexes that are very hard to fight. And anything with 100+ upvotes demands attention. I can't tell a priori that those upvotes are more about tribal solidarity than about quality and importance. I created an anonymous account because I want to just say this and then be allowed to stop thinking about it, and not get roped into the whole tribal signaling dynamic, which I resent. I know that there are other people like me because I have had this conversation with several other rationalists in person and we are uniformly annoyed yet "nerd-sniped" by this situation, yet none of us want to say anything because we don't want to get involved at all. Again, trust me or don't.
  7. This community is much bigger and more important than 15 or so high-drama, high-disagreeability people who live in the Bay Area, and at this point I feel like those people need to spend less time posting about their social group and more time posting about rationality and stuff.
Replies from: grant-demaree, Benito, habryka4, RobbBB, Duncan_Sabien
comment by Grant Demaree (grant-demaree) · 2021-11-13T23:41:27.342Z · LW(p) · GW(p)

I don’t agree with the characterization of this topic as self-obsessed community gossip. For context, I’m quite new and don’t have a dog in the fight. But I drew memorable conclusions from this that I couldn’t have gotten from more traditional posts

First, experimenting with our own psychology is tempting and really dangerous. Next time, I’d turn up the caution dial way higher than Leverage did

Second, a lot of us (probably including me) have an exploitable weakness brought on by high scrupulosity combined with openness to crazy-sounding ideas. Next time, I’d be more cautious (but not too cautious!) about proposals like joining Leverage

Third, if we ever need to maintain the public’s goodwill, I’ll try not to use words like “demonic seance”… even if I don’t mean it literally

In short, this is the sort of mistake worth learning about, including for those not personally affected, because it’s the kind of mistake we could plausibly make again. I think it’s useful to have here, and the right attitude for the investigation is “what do these events teach us about how rationalist groups can go wrong?” I also don’t think posting a summary would’ve been sufficient. It was necessary to hear Geoff and Anna’s exact words

Replies from: grant-demaree
comment by Grant Demaree (grant-demaree) · 2021-11-14T01:02:16.489Z · LW(p) · GW(p)

In fact, what I’d really like to see from this is Leverage and CFAR’s actual research, including negative results

What experiments did they try? Is there anything true and surprising that came out of this? What dead ends did they discover (plus the evidence that these are truly dead ends)?

It’d be especially interesting if someone annotated Geoff’s giant agenda flowchart with what they were thinking at the time and what, if anything, they actually tried

Also interested in the root causes of the harms that came to Zoe et al. Is this an inevitable consequence of Leverage’s beliefs? Or do the particular beliefs not really matter, and it’s really about the social dynamics in their group house?

Replies from: Viliam
comment by Viliam · 2021-12-19T21:58:46.301Z · LW(p) · GW(p)

Probably not what you wanted, but you can read CFAR's handbook and updates (where they also reflect on some screwups). I am not aware of Leverage having anything equivalent publicly available.

comment by Ben Pace (Benito) · 2021-11-13T00:15:12.872Z · LW(p) · GW(p)

I appreciate you sharing your perspective. A lot of this is uninteresting and irrelevant to perhaps the majority of readers (though I think that as you weight users by karma you’d start to find that for more and more of them this is directly about the social dynamics around them).

I’m pretty pro this discussion happening somehow for the communities involved, and think it’s been pretty helpful in some important ways for it to happen as it has in public.

I wonder if there’s a natural way for it to be less emphasized for the majority for whom it is uninteresting. Perhaps it should only be accessible to logged-in accounts at the time of posting and then public 6 months later, or perhaps it should be relegated to a part of the site that isn’t the frontpage (note we aren’t frontpaging it, which means at least logged out users aren’t seeing it).

If there’s a good suggestion here I’d be into that.

comment by habryka (habryka4) · 2021-11-12T21:59:55.069Z · LW(p) · GW(p)

I think some of these are pretty reasonable points, but I am kind of confused by the following: 

This Leverage drama is not important to anyone except a small group of people and does not belong on LW. Perhaps the relatively small group of Bay Area rationalists who are always at the center of these things need to create a separate forum for their own drama. Nobody outside of Berkeley needs to hear about this. This sort of thing gets upvoted because tribal instincts are being activated, not because this is good and ought to be here.

It seems to me that Leverage had a large and broad effect on the Effective Altruism and Rationality communities worldwide, with having organized the 2013-2014 EA Summits, and having provided a substantial fraction of the strategic direction for EAG 2015 and EAG 2016, and then shared multiple staff with the Centre For Effective Altruism until 2019. 

This suggests to me that what happened at Leverage clearly had effects that are much broader reaching than "some relatively small group of Bay Area rationalists". Indeed, I think the Bay Area rationality community wasn't that affected by all the stuff happening at Leverage, and the effects seemed much more distributed. 

Maybe you also think all the EA Summit and EA Global conferences didn't matter? Which seems like a fine take. Or maybe you think how CEA leadership worked also didn't matter, which also seems fine. But I do think these both aren't obvious takes, and I think I disagree with both of them. 

Replies from: steven0461, Kenoubi, AnonymousCoward02
comment by steven0461 · 2021-11-12T22:57:31.103Z · LW(p) · GW(p)

"Problematic dynamics happened at Leverage" and "Leverage influenced EA Summit/Global" don't imply "Problematic dynamics at Leverage influenced EA Summit/Global" if EA Summit/Global had their own filters against problematic influences. (If such filters failed, it should be possible to point out where.)

comment by Kenoubi · 2021-11-13T22:12:53.917Z · LW(p) · GW(p)

I donate a meaningful amount to CFAR and MIRI (without being overly specific, >1% of my income to those two orgs), and check LW weekly-ish, and I had never even heard of Leverage until the recent kerfuffle. Anecdote isn't data but I sort of agree with this comment's grandparent here.

comment by AnonymousCoward02 · 2021-11-12T23:12:10.807Z · LW(p) · GW(p)

It seems to me that Leverage had a large and broad effect on the Effective Altruism and Rationality communities worldwide, with having organized the 2013-2014 EA Summits, and having provided a substantial fraction of the strategic direction for EAG 2015 and EAG 2016, and then shared multiple staff with the Centre For Effective Altruism until 2019. 

For me personally this still rounds off to "not very important." Especially in the sense that there is nothing I, or the vast majority of people on this site, could possibly do with this information. I was already never going to join Leverage, or give any money to Geoff Anders. I have a lot of rationalist friends, both IRL and online, and none of us had ever heard about Geoff Anders prior to this recent drama.

Think about it in terms of cost-benefit. The benefit of this kind of content to the vast majority of people on LW is zero. The cost is pretty high, because ~everybody who sees a big juicy drama fest is going to want to rubberneck and throw in their two cents. So on net posting content like this to the main LW feed is strongly net negative in aggregate. A post which is simply dumb/wrong but otherwise un-dramatic can at least be simply ignored.

I think that if it were, say, Yudkowsky being accused of auditing people's thetans and having seances, I would find that relevant, because it would have implications for my future decisions.

comment by Rob Bensinger (RobbBB) · 2021-11-14T00:41:50.366Z · LW(p) · GW(p)

What do you think of Anna's https://www.lesswrong.com/posts/SWxnP5LZeJzuT3ccd/pr-is-corrosive-reputation-is-not [LW · GW] ? (I don't know that I fully understand her view in that post, but it seems like a fruitful place to look for cruxes, given how much you talk about "PR" and "optics" here.)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-13T03:51:31.720Z · LW(p) · GW(p)

=P

Replies from: AnonymousCoward02
comment by AnonymousCoward02 · 2021-11-13T19:03:55.975Z · LW(p) · GW(p)

<3

Sorry. I was in a really shitty mood. That wasn't nice of me.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-13T19:10:40.585Z · LW(p) · GW(p)

<3 

I will note that I think it's completely valid to hold each of the following views:

  • My recent stuff is badly written
  • My recent stuff is on a topic we should spend less time on
  • My recent stuff made things worse

... I, like, hope those things aren't true, but they are worthwhile hypotheses.

comment by Annulus · 2021-11-09T04:57:53.506Z · LW(p) · GW(p)

This was an incredible read. Thank you for the transcription, Rob.

Habryka said: "As long as you have power over people, they will want you to be held accountable somehow, and want you to play by their norms. And you've had large amounts of power over people for many years." I think this is exactly correct. Geoff is trying to find some way to not be accountable for the ideas he put into the world, the organization that he built, and the outcomes that they had on the people he had direct power over. It's telling that he wishes to be absolved by a representative of the rationalist community, rather than the people who were harmed by him.

comment by Dojan · 2021-11-08T10:52:38.891Z · LW(p) · GW(p)

Thank you for uploading this. 

Please do upload any further conversations that take place (you or anyone). 

This feels like a good start, but there are many subjects left untouched. In fact, this feels like context rather than addressing the core issues brought up by Zoe Curzi and Jessicata and others.

Replies from: ChristianKl
comment by ChristianKl · 2021-11-08T11:30:52.445Z · LW(p) · GW(p)

https://www.twitch.tv/videos/1197855853 contains more related discussion.

comment by Annulus · 2021-11-09T05:00:57.638Z · LW(p) · GW(p)

Also, like some of the participants in chat, I am pretty curious about the significance of the "demons" and "objects" from Zoe's post. Is it just social contagion? The elevated language makes it seem like they developed some unique theories that a lot of intelligent people found persuasive (and harmful — at least in the social context of what sounds like a toxic workplace). Has any light been shed on this somewhere?

comment by Rob Bensinger (RobbBB) · 2021-11-10T17:14:07.016Z · LW(p) · GW(p)

The chat log above mentions stuff like:

fiddlemath: Anja notes a significant incident at the EA summit, 2014, where Geoff and Anna got into a ... pretty heated? ... argument about dualism. Seemed very emotional to both parties, given the degree of abstraction of the object-level conversation.

habrykas: Yeah, that was a big deal

habrykas: It caused me to have nightmares, and was a big component of me distancing myself from Leverage

And the transcript mentions:

Anna Salamon:
Actually I think there's something different, though. I think Leverage came - and by "Leverage" in the beginning, I mean you, Geoff - came in with a different pattern of something that I think a lot of people had an immune response to.

Geoff Anders:
Yeah. I agree.

Anna Salamon:
And a different pattern of something - it was partly the particular way that you weren't into a particular kind of materialism. The "pole through the head" thing - I can say this more slowly for people to follow it.

This is referring to the same incident, where (at the 2014 EA Summit, which was much larger and more public than previous Leverage-hosted EA events) Anna and Geoff were on a scheduled panel discussion. My recollection from being in the audience was that Anna unexpectedly asked Geoff if he believed that shoving a pole through someone's brain could change their beliefs (other than via sensory perception), and Geoff reluctantly said 'no'. I don't think he elaborated on why, but I took his view to be that various things about the mind are extraphysical and don't depend on the brain's state.

I had a couple of conversations with Geoff in 2014 about the hard problem of consciousness, where I endorsed "eliminativism about phenomenal consciousness" or "phenomenal anti-realism" (nowadays I'd use the more specific term "illusionism", following Keith Frankish), as opposed to "phenomenal reductionism" (phenomenal consciousness exists, and "every positive fact about experience is logically entailed by the physical facts") and "phenomenal fundamentalism" (it exists, and "some positive facts about experience aren't logically entailed by the physical facts").

Geoff never told me his full view, but he said he thought phenomenal fundamentalism was true, and he said he was completely certain that phenomenal anti-realism is false.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-10T17:39:17.855Z · LW(p) · GW(p)

(cw: things I consider epistemic traps and mistaken ways of thinking about experience)

I'm the person in the chat who admitted to 'writing Geoff off on philosophical grounds' pretty early on. To quote a pair of emails I wrote Geoff after the Twitch stream, elaborating on what I meant by 'writing off' and why I 'wrote him off' (in that sense) in 2014:

[...]

  • My impression was that you put extremely high (perhaps maximal?) confidence on 'your epistemic access to your own experiences', and that this led you to be confident in some version of 'consciousness is fundamental'. I didn't fully understand your view, but it seemed pretty out-there to me, based on the 'destroying someone's brain wouldn't change their beliefs' thing from your and Anna's panel discussion at the 2014 EA Summit. This is the vague thing I had in mind when I said 'Cartesian'; there are other versions of 'being a Cartesian' that wouldn't make me "write someone off".
     
  • By "I wrote Geoff off", I didn't mean that I thought you were doing something especially unvirtuous or stupid, and I didn't mean 'I won't talk to Geoff', 'I won't be a friendly colleague of Geoff', etc. Rather, I meant that as a shorthand for 'I'm pretty confident Geoff won't do stuff that's crucial for securing the light cone, and I think there's not much potential for growth/change on that front'.

    [...]
     
  • I think you're a sharper, faster thinker than me, and I'd guess you know more history-of-philosophy facts and have spent more time thinking about the topics we disagree about. When I think about our philosophical disagreement, I don't think of it as 'I'm smarter than Geoff' or 'I was born with more epistemic virtue than Geoff'; I think of it as:

    Cartesianism / putting-high-confidence-in-introspection (and following the implications of that wherever they lead, with ~maximal confidence) is incredibly intuitive, and it's sort of amazing from my perspective that so few rationalists have fallen into that trap (and indeed I suspect many of them have insufficiently good inside-view reasons to reject Cartesian reasoning heuristics). 

    I'm very grateful that they haven't, and I think their (relatively outside-view-ish) reasons for doubting Cartesianism are correct, but I also think they haven't fully grokked the power of the opposing view. 

    Basically I think this is sort of the intuitively-hardest philosophy test anyone has to face, and I mostly endorse not subjecting rationalists and EAs to that and helping them skill up in other ways; but I do think it's a way to get trapped, especially if you aren't heavily grounded in Bayesianism/probabilism and the thermodynamic conception of reasoning.
     
  • So I don't think your reasoning chain (to the extent I understand it) was unusually epistemically unvirtuous -- I just think you happened on a reasoning style that doesn't empirically/historically work (our brains just aren't set up to do that, at least with much confidence/fidelity), but that is 'self-endorsing' and has a hard time updating away from itself. Hence I think of it as a trap / a risky memetic virus, not a sin.

    And: A large important part of why I did the 'writing off' update, in spite of your reasoning chain not (IMO) being super epistemically unvirtuous, is my rough sense of your confidence + environment. (This is the main thing I wish I'd been able to go into in the Twitch chat.)

    If I'd modeled you as 'surrounded by tons of heavyweight philosophers who will argue with you constantly about Cartesianism stuff', I would not have written you off (or would have only done so weakly). E.g., if I thought you had all the same beliefs but your day job was working side-by-side with Nick Bostrom and Will MacAskill and butting heads a bunch on these topics, I'd have been much more optimistic. My model instead was that Leverage wasn't heavyweight enough, and was too deferential to you; so insofar as Cartesian views have implications (and I think they ought to have many implications, including more broadly updating you to weird views on tons of other things), I expected you to drag Leverage down more than it could drag you up.

    I also had a sense that you weren't Bayes-y enough? I definitely wouldn't have written you off if you'd said 'I'm 65-85% confident in the various components of Cartesianism, depending on time of day and depending on which one you're asking about; and last year I was significantly less confident, though for the three years prior I was more confident'. (In fact, I think I'd have been sort of impressed that you were able to take such strange views so seriously without having extremal confidence about them.)

    What I'm gesturing at with this bullet point is that I modeled you as having a very extreme prior probability (so it would be hard to update), and as endorsing reasoning patterns that make updating harder in this kind of case, and as embedded in a social context that would not do enough to counter this effect.

    (If your views on this did change a lot since we last talked about this in Feb 2017 [when I sent you a follow-up email and you reiterated your view on phenomenal consciousness], then I lose Bayes points here.)

And:

Elaborating on the kind of reasoning chain that makes me think Cartesian-ish beliefs lead to lots of wild false views about the world:

1. It seems like the one thing I can know for sure is that I'm having these experiences. The external world is inferred, and an evil demon could trick me about it; but it can't produce an illusion of 'I'm experiencing the color red', since the "illusion" would just amount to it producing the color red in my visual field, which is no illusion at all. (It could produce a delusion, but I don't just believe I'm experiencing red; I'm actually looking at it as we speak.)

2. The hard problem of consciousness shows that these experiences like red aren't fully reducible to any purely third-person account, like physics. So consciousness must be fundamental, or reducible to some other sort of thing than physics.

3. Ah, but how did I just type all that [LW · GW] if consciousness isn't part of physics? My keystrokes were physical events. It would be too great a coincidence for my fingers to get all this right without the thing-I'm-right-about causing them to get it right. So my consciousness has to be somehow moving my fingers in different patterns. Therefore:

3a. The laws of physics are wrong, and human minds have extra-physical powers to influence things. This is a large update in favor of some psychic phenomena being real. It also suggests that there's plausibly some conspiracy on the part of physicists to keep this secret, since it's implausible they'd have picked up no evidence by now of minds' special powers. Sean Carroll's claim that "the laws of physics underlying the phenomena of everyday life are completely known" is not just false -- it is suspiciously false, and doesn't seem like the kind of error you could make by accident. (In which case, what else might there be a scientific conspiracy about? And what's the scientists' agenda here? What does this suggest about the overall world order?)

3b. OR ALTERNATIVELY: phenomenal consciousness doesn't directly causally move matter. In order for my beliefs about consciousness to not be correct entirely by coincidence, then, it seems like some form of occasionalism or pre-established harmony must be true: something outside the universe specifically designed (or is designing) our physical brains in such a way that they will have true beliefs about consciousness. So it seems like our souls are indeed separate from our bodies, and it seems like there's some sort of optimizer outside the universe that cares a lot about whether we're aware that we have souls -- whence we need to update a lot in favor of historical religious claims having merit.

Whether you end up going down path 3a or path 3b, I think these ideas are quite false, and have the potential to leak out and cause more and more of one's world-view to be wrong. I think the culprit is the very first step, even though it sounded reasonable as stated.

Replies from: kerry-vaughan, SaidAchmiz
comment by Kerry Vaughan (kerry-vaughan) · 2021-11-10T20:59:29.173Z · LW(p) · GW(p)

Rob: Where does the reasoning chain from 1 to 3a/3b go wrong in your view? I get that you think it goes wrong in that the conclusions aren't true, but what is your view about which premise is wrong or why the conclusion doesn't follow from the premises?

In particular, I'd be really interested in an argument against the claim "It seems like the one thing I can know for sure is that I'm having these experiences."

Replies from: RobbBB, TAG
comment by Rob Bensinger (RobbBB) · 2021-11-10T22:21:19.678Z · LW(p) · GW(p)

I think that the place the reasoning goes wrong is at 1 ("It seems like the one thing I can know for sure is that I'm having these experiences."). I think this is an incredibly intuitive view, and a cornerstone of a large portion of philosophical thought going back centuries. But I think it's wrong.

(At least, it's wrong -- and traplike -- when it's articulated as "know for sure". I have no objection to having a rather high prior probability that one's experiences are real, as long as a reasonably large pile of evidence to the contrary could change your mind. But from a Descartes-ish perspective, 'my experiences might not be real' is just as absurd as 'my experiences aren't real'; the whole point is that we're supposed to have certainty in our experiences.)

Here's how I would try to motivate 'illusionism is at least possibly true' today, and more generally 'there's no way for a brain to (rationally) know with certainty that any of its faculties are infallible':

_________________________________________________

 

First, to be clear: I share the visceral impression that my own consciousness is infallibly manifest to me, that I couldn't possibly not be having this experience.

Even if all my beliefs are unreliable, the orange quale itself is no belief, and can't be 'wrong'. Sure, it could bear no resemblance to the external world -- it could be a hallucination. But the existence of hallucinations can't be a hallucination, trivially. If it merely 'seems to me', perceptually, as though I'm seeing orange -- well, that perceptual seeming is the orange quale!

In some sense, it feels as though there's no 'gap' between the 'knower' and the 'known'. It feels as though I'm seeing the qualia, not some stand-in representation for qualia that could be mistaken.

All of that feels right to me, even after 10+ years of being an illusionist. But when I poke at it sufficiently, I think it doesn't actually make sense.

 

Intuition pump 1: How would my physical brain, hands, etc. know any of this? For a brain to accurately represent some complex, logically contingent fact, it has to causally interact (at least indirectly / at some remove) with that fact. (Cf. The Second Law of Thermodynamics, and Engines of Cognition [LW · GW].)

Somehow I must have just written this comment. So some causal chain began in one part of my physical brain, which changed things about other parts of my brain, which changed things about how I moved my fingers and hands, which changed things about the contents of this comment.

What, even in principle, would it look like for one part of a brain to have infallible, "direct" epistemic access to a thing, and to then transmit this fact to some other part of the brain?

It's easy to see how this works with, e.g., 'my brain has (fallible, indirect) knowledge of how loud my refrigerator is'. We could build that causal model, showing how the refrigerator's workings change things about the air in just the right way, to change things about my ears in just the right way, to change things about my brain in just the right way, to let me output accurate statements about the fridge's loudness.

It's even easy to see how this works with a lot of introspective facts, as long as we don't demand infallibility or 'directness'. One part of my brain can detect whether another part of my brain is in some state.

But what would it look like, even in principle, for one set of neurons that 'has immediate infallible epistemic access to X' to transmit that fact to another set of neurons in the brain? What would it look like to infallibly transmit it, such that a gamma ray couldn't randomly strike your brain to make things go differently (since if it's epistemically possible that a gamma ray could do that, you can't retain certainty across transmissions-between-parts-of-your-brain)? What would it look like to not only infallibly transmit X, but infallibly transmit the (true, justified) knowledge of that very infallibility?
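
A toy way to make that concrete (the error rate and hop count below are illustrative assumptions, nothing more): if every handoff between physical parts has any nonzero chance of corruption, then confidence in the transmitted content has to end up strictly below 1.

p_corruption_per_hop = 1e-15   # illustrative, absurdly optimistic per-handoff error rate
hops = 4                       # e.g. brain region -> region -> motor cortex -> hands

# Probability that the content survives every hop uncorrupted.
p_intact = (1 - p_corruption_per_hop) ** hops
print(p_intact < 1.0)  # True: certainty can't survive a physically noisy channel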

This is an impossible enough problem, AFAICT, but it's just a warm-up for:

 

Intuition pump 2: What would it look like for even one part of a brain to have 'infallible' 'direct' access to something 'manifest'?

If we accepted, from intuition pump 1, that you can't transmit 'infallible manifestness' across different parts of the brain (even potentially quite small parts), we would still maybe be able to say:

'I am not my brain. I am a sufficiently small part of my brain that is experiencing this thing. I may be helpless to transmit any of that to my hands, or even to any other portion of my brain. But that doesn't change the fact that I have this knowledge -- I, the momentarily-existing locked-in entity with no causal ability to transmit this knowledge to the verbal loop thinking these thoughts, the hands writing these sentences, or to my memory, or even to my own future self a millisecond from now.'

OK, let's grant all that.

... But how could even that work?

Like, how do you build a part of a brain, or a part of a computer, to have infallible access to its own state and to rationally know that it's infallible in this regard? How would you design a part of an AI to satisfy that property, such that it's logically impossible for a gamma ray (or whatever) to make that-part-of-the-AI wrong? What would the gears and neural spike patterns underlying that knowing/perceiving/manifestness look like?

It's one thing to say 'there's something it's like to be that algorithm'; it's quite another to say 'there's something it's like to be that algorithm, and the algorithm has knowably infallible epistemic access to that what-it's-like'. How do you design an algorithm like that, even in principle?

I think this is the big argument. I want to see a diagram of what this 'manifestness' thing could look like, in real life. I think there's no good substitute for the process of actually trying to diagram it out.

 

Intuition Pump 3: The reliability of an organism's introspection vs. its sensory observation is a contingent empirical fact.

We can imagine building a DescartesBot that has incredibly unreliable access to its external environment, but has really quite accurate (though maybe not infallible) access to its internal state. E.g., its sensors suck, but its brain is able to represent tons of facts about its own brain with high reliability (though perhaps not infallibility), and to form valid reasoning chains incorporating those facts. If humans are like DescartesBot, then we should at least be extremely wary of letting our scientific knowledge trump our phenomenological knowledge, when the two seem to conflict.
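
A toy sketch of that kind of agent (the class name, numbers, and state variables are illustrative assumptions only):

import random

class DescartesBot:
    """An agent whose external sensors are very noisy, but whose
    introspection of its own internal state is highly (not infallibly)
    reliable."""

    def __init__(self):
        self.internal_state = {"hunger": 0.7}

    def sense_temperature(self, true_temperature):
        # Unreliable exteroception: large random error.
        return true_temperature + random.gauss(0.0, 20.0)

    def introspect(self, key):
        # Reliable but not infallible introspection: rare random failure.
        if random.random() < 1e-6:
            return random.random()
        return self.internal_state[key]

bot = DescartesBot()
print(bot.sense_temperature(21.0))  # often wildly off
print(bot.introspect("hunger"))     # almost always exactly 0.7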

But humanity's track record is the opposite of DescartesBot's -- we seem way better at sensing properties of our external environment, and drawing valid inferences about those properties, than at doing the same for our own introspected mental states. E.g., people are frequently wrong about their own motives and the causes of their behavior, but they're rarely wrong about how big a given chair is.

This isn't a knock-down argument, but it's a sort of 'take a step back' argument that asks whether we should expect that we'd be the sorts of evolved organisms that have anything remotely approaching introspective certitude about various states of our brain. Does that seem like the genre-savvy view, the view that rhymes more with the history of science to date, the view that matches the apparent character of the rest of our knowledge of the world?

I think some sort of 'taste for what's genre-savvy' is a surprisingly important component of how LW has avoided this epistemic trap. Even when folks here don't know how to articulate their intuitions or turn them into explicit arguments, they've picked up on some important things about how this stuff tends to work.

Replies from: RobbBB, kerry-vaughan
comment by Rob Bensinger (RobbBB) · 2021-11-10T22:23:12.020Z · LW(p) · GW(p)

If you want something that's more philosopher-ish, and a bit further from how I think about the topic today, here's what I said to Geoff in 2014 (in part):

[...]

Phenomenal realism [i.e., the belief that we are phenomenally conscious] has lots of prima facie plausibility, and standard reductionism looks easily refuted by the hard problem. But my experience is that the more one shifts from a big-picture 'is reductionism tenable?' to a detailed assessment of the non-physicalist options, the more problems arise -- for interactionism and epiphenomenalism alike, for panpsychism and emergent dualism alike, for property and substance and 'aspect' dualism alike, for standard fundamentalism and 'reductionism-to-nonphysical-properties' alike.

All of the options look bad, and I take that as a strong hint that there's something mistaken at a very deep level about introspection, and/or about our concept of 'phenomenal consciousness'. We're clearly conscious in some sense -- we have access consciousness, 'awake' consciousness, and something functionally similar to phenomenal consciousness (we might call it 'functional consciousness,' or zombie consciousness)  that's causally responsible for all the papers our fingers write about the hard problem. But the least incredible of the available options is that there's an error at the root of our intuitions (or, I'd argue, our perception-like introspection). It's not as though we have evolutionary or neuroscientific reasons to expect brains to be as good at introspection or phenomenological metaphysics as they are at perceiving and manipulating ordinary objects.

[...]

Eliminativism is definitely counter-intuitive, and I went through many views of consciousness before arriving at it. It's especially intuitions-subverting to those raised on Descartes and the phenomenological tradition. There are several ways I motivate and make sense of eliminativism:

(I'll assume, for the moment, that the physical world is causally closed; if you disagree in a way that importantly undermines one of my arguments, let me know.)

1. Make an extremely strong case against both reductionism and fundamentalism. Then, though eliminativism still seems bizarre -- we might even be tempted to endorse mysterianism here -- we at least have strong negative grounds to suspect that it's on the right track.

2. Oversimplifying somewhat: reductionism is conceptually absurd, fundamentalism is metaphysically absurd (for the reasons I gave in my last e-mail), and eliminativism is introspectively absurd. There are fairly good reasons to expect evolution to have selected for brains that are good at manipulating concepts (so we can predict the future, infer causality, relate instances to generalizations, ...), and good reasons to expect evolution to have selected for brains that are good at metaphysics (so we can model reality, have useful priors, usefully update them, ...). So, from an outside perspective, we should penalize reductionism and fundamentalism heavily for violating our intuitions about, respectively, the implications of our concepts and the nature of reality.

The selective benefits of introspection, on the other hand, are less obvious. There are clear advantages to knowing some things about our brains -- to noticing when we're hungry, to reflecting upon similarities between a nasty smell and past nasty smells, to verbally communicating our desires. But it's a lot less obvious that the character of phenomenal consciousness is something our ancestral environment would have punished people for misinterpreting. As long as you can notice the similarity-relations between experiences, their spatial and temporal structure, etc. -- all their functional properties -- it shouldn't matter to evolution whether or not you can veridically introspect their nonfunctional properties, since (ex hypothesi) it makes no difference whatsoever which nonfunctional properties you instantiate.

And just as there's no obvious evolutionary reason for you to be able to tell which quale you're instantiating, there's also no obvious evolutionary reason for you to be able to tell that you're instantiating qualia at all.

Our cognition about P-consciousness looks plausibly like an evolutionary spandrel, a side-effect shaped by chance neural processes and genetic drift. Can we claim a large enough confidence in this process, all things considered, to refute mainstream physics?

3. The word 'consciousness' has theoretical content. It's not, for instance, a completely bare demonstrative act -- like saying 'something is going on, and whatever it is, I dub it [foo]', or 'that, whatever it is, is [foo]'. If 'I'm conscious' were as theory-neutral as all that, then absolutely anything could count equally well as a candidate referent -- a hat, the entire physical universe, etc.

Instead, implicitly embedded within the idea of 'consciousness'  are ideas about what could or couldn't qualify as a referent. As soon as we build in those expectations, we leave the charmed circle of the cogito and can turn out to be mistaken.

4. I'll be more specific. When I say 'I'm experiencing a red quale', I think there are at least two key ideas we're embedding in our concept 'red quale'. One is subjectivity or inwardness: P-consciousness, unlike a conventional physical system, is structured like a vantage point plus some object-of-awareness. A second is what we might call phenomenal richness: the redness I'm experiencing is that specific hue, even though it seems like a different color (qualia inversion, alien qualia) or none at all (selective blindsight) would have sufficed.

I think our experiences' apparent inwardness is what undergirds the zombie argument. Experiences and spacetime regions seem to be structured differently, and the association between the two seems contingent, because we have fundamentally different mental modules for modeling physical v. mental facts. You can always entertain the possibility that something is a zombie, and you can always entertain the possibility that something (e.g., a rock, or a starfish) has a conscious inner life, without thereby imagining altering its physical makeup. Imagining that a rock could be on fire without changing its physical makeup seems absurd, because fire and rocks are in the same magisterium; and imagining that an experience of disgust could include painfulness without changing its phenomenal character seems absurd, because disgust and pain are in the same magisterium; but when you cross magisteria, anything goes, at least in terms of what our brains allow us to posit in thought experiments.

Conceptually, mind and matter operate like non-overlapping magisteria; but an agent could have a conceptual division like that without actually being P-conscious or actually having an 'inside' irreducibly distinct from its physical 'outside'. You could design an AI like that, much like Chalmers imagines designing an AI that spontaneously outputs 'I think therefore I am' and 'my experiences aren't fully reducible to any physical state'.

5. Phenomenal richness, I think, is a lot more difficult to make sense of (for physicalists) than inwardness. Chalmers gestures toward some explanations, but it still seems hard to tell an evolutionary/cognitive story here. The main reframe I find useful here is to recognize that introspected experiences aren't atoms; they have complicated parts, structures, and dynamics. In particular, we can peek under the hood by treating them as metacognitive representations of lower-order neural states. (E.g., the experience of pain perhaps represents somatic damage, but it also represents the nociceptors carrying pain signals to my brain.)

With representation comes the possibility of misrepresentation. Sentence-shaped representations ('beliefs') can misrepresent, when people err or are deluded; and visual-field-shaped representations ('visual perceptions') can misrepresent, when people are subject to optical illusions or hallucinations. The metacognitive representations (of beliefs, visual impressions, etc.) we call 'conscious experiences', then, can also misrepresent what features are actually present in first-order experiences.

Dennett makes a point like this, but he treats the relevant metarepresentations as sentence-shaped 'judgments' or 'hunches'. I would instead say that the relevant metarepresentations look like environmental perceptions, not like beliefs.

When conscious experience is treated like a real object 'grasped' by a subject, it's hard to imagine how you could be wrong about your experience -- after all, it's right there! But when I try to come up with a neural mechanism for my phenomenal judgments, or a neural correlate for my experience of phenomenal 'manifestness', I run into the fact that consciousness is a representation like any other, and can have representational content that isn't necessarily there.

In other words, it is not philosophically or scientifically obligatory to treat the introspectible contents of my visual field as real objects I grasp; one can instead treat them as intentional objects, promissory notes that may or may not be fulfilled. It is a live possibility that human introspection : a painting of a unicorn :: phenomenal redness : a unicorn, even though the more natural metaphor is to think of phenomenal redness as the painting's 'paint'. More exactly, the analogy is to a painting of a painting, where the first painting mostly depicts the second accurately, but gets a specific detail (e.g., its saturation level or size) systematically wrong.

One nice feature of this perspective shift is that treating phenomenal redness as an intentional object doesn't prove that it isn't present; but it allows us to leave the possibility of absence open at the outset, and evaluate the strengths and weaknesses of eliminativism, reductionism, and fundamentalism without assuming the truth or falsity of any one at the outset.

comment by Kerry Vaughan (kerry-vaughan) · 2021-11-10T23:42:16.682Z · LW(p) · GW(p)

It seems to me that you're arguing against a view in the family of claims that include "It seems like the one thing I can know for sure is that I'm having these experiences," but I'm having trouble determining the precise claim you are refuting. I think this is because I'm not sure which claims are meant precisely and which are meant rhetorically or directionally.

Since this is a complex topic with lots of potential distinctions to be made, it might be useful to go through a few different claims in the family of "It seems like the one thing I can know for sure is that I'm having these experiences" to determine where the disagreement lies.

Below are some claims in this family. Can you pinpoint which you think are fallible and which you think are infallible (if any)? Assuming that many or most of them are fallible, can you give me a sense of something like "how susceptible to fallibility" you think they are? (Also, if you don't mind, it might be useful to distinguish your views from what your-model-of-Geoff thinks, to help pinpoint disagreements.) Feel free to add additional claims if they seem like they would do a better job of pinpointing the disagreement.

  1. I am, I exist (i.e., the Cartesian cogito).
  2. I am thinking.
  3. I am having an experience.
  4. I am experiencing X.
  5. I experienced X.
  6. I am experiencing X because there is an X-producing thing in the world.
  7. I believe X.
  8. I am having the experience of believing X.

Edit: Wrote this before seeing this comment [LW(p) · GW(p)], so apologies if this doesn't interact with the content there.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-11T00:32:15.863Z · LW(p) · GW(p)

I don't think people should be certain of anything; see How to Convince Me That 2 + 2 = 3 [LW · GW]; Infinite Certainty [? · GW]; and 0 and 1 Are Not Probabilities [? · GW].

We can build software agents that live in virtual environments we've constructed, and we can program the agents to never make certain kinds of mistakes (e.g., never make an invalid reasoning step, or never misperceive the state of tiles they're near). So in that sense, there's nothing wrong with positing 'faculties that always get the right answer in practice', though I expect these to be much harder to evolve than to design.

But a software agent in that environment shouldn't be able to arrive at 100% certainty that one of its faculties is infallible, if it's a smart Bayesian. Even we, the programmers, can't be 100% certain that we programmed the agent correctly. Even an automated proof of correctness won't get us to 100% certainty, because the theorem-prover's source code could always have some error (or the hardware it's running on could have been struck by a stray gamma ray, etc.).
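
A toy sketch of that last point (the prior and likelihoods are illustrative assumptions): a Bayesian agent watching one of its faculties perform flawlessly should see its credence in "this faculty is infallible" climb toward 1 but never reach it, so long as a merely very reliable faculty could have produced the same track record.

prior_infallible = 0.99          # illustrative starting credence
p_success_if_infallible = 1.0
p_success_if_fallible = 0.999    # a fallible faculty can still look perfect for a long time

posterior = prior_infallible
for _ in range(1000):            # a thousand observed flawless outputs
    num = posterior * p_success_if_infallible
    posterior = num / (num + (1 - posterior) * p_success_if_fallible)

print(posterior)  # climbs toward 1 but stays strictly below it (roughly 0.996 here)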

1. I am, I exist (i.e., the Cartesian cogito).

It's not clear what "I" means here, but it seems fine to say that there's some persistent psychological entity roughly corresponding to the phrase "Rob Bensinger". :)

I'm likewise happy to say that "thinking", "experience", etc. can be interpreted in (possibly non-joint-carving) ways that will make them pick out real things.

Replies from: kerry-vaughan, FeepingCreature, kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-11-11T01:56:11.705Z · LW(p) · GW(p)

It's not clear what "I" means here . . .

Oh, sorry, this was a quote from Descartes; it's the closest thing in the Meditations to "I think therefore I am" (which doesn't expressly appear there).

Descartes's idea doesn't rely on any claims about persistent psychological entities (that would require the supposition of memory, which Descartes isn't ready to accept yet!). Instead, he postulates an all-powerful entity that is specifically designed to deceive him and tries to determine whether anything at all can be known given that circumstance. He concludes that he can know that he exists because something has to do the thinking. Here is the relevant quote from the Second Meditation:

I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind.

I find this pretty convincing personally. I'm interested in whether you think Descartes gets it wrong even here or whether you think his philosophical system gains its flaws later.


More generally, I'm still not quite sure what precise claims or what type of claim you predict you and Geoff would disagree about. My-model-of-Geoff suggests that he would agree with "it seems fine to say that there's some persistent psychological entity roughly corresponding to the phrase "Rob Bensinger"." and that "thinking", "experience", etc." pick out "real" things (depending on what we mean by "real").

Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-11T05:43:32.634Z · LW(p) · GW(p)

'Can a deceiver trick a thinker into falsely believing they're a thinker?' has relevantly the same structure as 'Can you pick up a box that's not a box?' -- it deductively follows that 'no', because the thinker's belief in this case wouldn't be false.

(Though we've already established that I don't believe in infinite certainty. I forgive Descartes for living 60 years before the birth of Thomas Bayes, however. :) And Bayes didn't figure all this out either.)

Because the logical structure is trivial -- Descartes might just as well have asked 'could a deceiver make 2 + 2 not equal 4?' -- I have to worry that Descartes is sneaking in more content than is in fact deducible here. For example, 'a thought exists, therefore a thinker exists' may not be deductively true, depending on what is meant by 'thought' and 'thinker'. A lot of philosophers have commented that Descartes should have limited his conclusion to 'a thought exists' (or 'a mental event exists'), rather than 'a thinker exists'.

Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?

'Phenomenal consciousness exists'.

I'd guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-11-11T20:32:32.707Z · LW(p) · GW(p)

'Phenomenal consciousness exists'.

Sorry if this comes off as pedantic, but I don't know what this means. The philosopher in me keeps saying "I think we're playing a language game," so I'd like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely? 

Because the logical structure is trivial -- Descartes might just as well have asked 'could a deceiver make 2 + 2 not equal 4?'

[...]

I'd guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!

I don't know Geoff's view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful). That's not the same as "treating them as probabilistic statements," but I think it's functionally the same from your perspective. 

The project of the Meditations is that Descartes starts by refusing to accept anything which can be doubted and then he tries to nevertheless build a system of knowledge from there. I don't think Descartes would assign infinite certainty to any claim except, perhaps, the cogito.

Replies from: dxu, RobbBB, TAG
comment by dxu · 2021-11-11T21:17:13.225Z · LW(p) · GW(p)

My view of Descartes' cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don't apply, but also it becomes less clear that the cogito is actually a thing which can be "believed" in a meaningful sense to begin with.

I currently think (B) is much closer to being the case than (A). When I try to imagine grounding and/or operationalizing the cogito by e.g. designing a computer program that makes the same claim for the same psychological reasons, I run into a dead end fairly quickly, which in my experience is strong evidence that the initial concept was confused and/or incoherent. Here's a quick sketch of my reasoning:

Suppose I have a computer program that, when run, prints "I exist" onto the screen. Moreover, suppose this computer program accomplishes this via means of a simple print statement; there is no internal logic, no if-then conditional structure, that modulates the execution of the print statement, merely the naked statement, which is executed every time the program runs. Then I ask: is there a meaningful sense in which the text the program outputs is correct?
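
Concretely, the whole program would be something like this (the language choice is arbitrary):

# The entire program: one bare print statement, with no internal
# logic, no conditionals, and no checks of any kind.
print("I exist")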

It seems to me, on the one hand, that the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely, if the program's output were to be interpreted as having meaning, then it seems obvious that the statement in question ("I exist") is correct, since the program does in fact exist and was run.

But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a "meaningful" statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me as though it captures the spirit of Descartes' cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.

But when I try to query my intuition, to ask it "Then what reasons are those, exactly?", I find that I come up blank. It's a qualitatively similar experience to asking what the truth-condition is for a tautology, e.g. 2 + 2 = 4, except even worse than that, since I could at the very least imagine a world in which 2 + 2 != 4 [LW · GW], whereas I cannot even imagine an if-then conditional statement that would capture the (supposed) truth-condition of Descartes' cogito. The closest (flawed) thing my intuition outputs looks like this:

if (I AM ACTUALLY BEING RUN RIGHT NOW):
	print("I exist")
else if (I AM NOT BEING RUN, ONLY DISCUSSED HYPOTHETICALLY):
	print("I don't exist")

Which is obvious nonsense. Obviously. (Though it does inspire an amusing idea for a mathematical horror story about an impossible computer program whose behavior when investigated using static analysis completely differs from its behavior when actually run, because at the beginning of the program is a metaphysical conditional statement that executes different code depending on whether it detects itself to be in static analysis versus actual execution.)

Anyway, the upshot of all this is that I don't think Descartes' statement is actually meaningful. I'm not particularly surprised by this; to me, it dovetails strongly with the heuristic "If you're a dealing with a claim that seems to ignore the usual rules, it's probably not a 'claim' in the usual sense", which would have immediately flagged Descartes for the whole infinite certainty thing, without having to go through the whole "How would I write a computer program that exhibits this behavior for the same reason humans exhibit it?" song-and-dance.

(And for the record: there obviously is a reason humans find Descartes' argument so intuitively compelling, just as there is a reason humans find the idea of qualia so intuitively compelling. I just think that, as with qualia, the actual psychological reason--of the kind that can be implemented in a real computer program, not a program with weird impossible metaphysical conditional statements--is going to look very different from humans' stated justifications for the claims in question.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-16T04:08:56.470Z · LW(p) · GW(p)

I think this is quite a wrongheaded way to think about Descartes’ cogito. Consider this, for instance:

My view of Descartes’ cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don’t apply, but also it becomes less clear that the cogito is actually a thing which can be “believed” in a meaningful sense to begin with.

But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on, and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else. He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.

To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:

“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”

Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.


Separately from all that, what you say about the hypothetical computer program (with the print statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute the print statement, it prints something. A program that runs, definitionally exists; its existence claim is satisfied thereby.

Replies from: dxu
comment by dxu · 2021-11-16T05:25:40.633Z · LW(p) · GW(p)

But precisely the point is that Descartes has set aside “all the usual rules”, has set aside “philosophical scaffolding”, epistemological paradigms, and so on,

I initially wanted to preface my response here with something like "to put it delicately", but then I realized that Descartes is dead and cannot take offense to anything I say here, and so I will be indelicate in my response:

I trust "the usual rules" far more than I trust the output of Descartes' brain, especially when the brain in question has chosen to deliberately "set aside" those rules. The rules governing correct cognition are clear, comprehensible, and causally justifiable [LW · GW]; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather... less so. This is true in general, but especially true in this case, since I can see that Descartes' statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.

and has started with (as much as possible) the bare minimum that he could manage: naive notions of perception and knowledge, and pretty much nothing else.

Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren't parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more "primal" in some sense, but I see no reason more "primal" forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective "naive" suggests otherwise.

He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.

In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he's thought about stuff real hard while ignoring the rules, and the "stuff" in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.

To put it another way: consider what Descartes might say, if you put your criticisms to him. He might say something like:

“Whoa, now, hold on. Rules about infinite certainty? Probability theory? Philosophical commitments about the nature of beliefs and claims? You’re getting ahead of me, friend. We haven’t gotten there yet. I don’t know about any of those things; or maybe I did, but then I started doubting them all. The only thing I know right now is, I exist. I don’t even know that you exist! I certainly do not propose to assent to all these ‘rules’ and ‘standards’ you’re talking about—at least, not yet. Maybe after I’ve built my epistemology up, we’ll get back to all that stuff. But for now, I don’t find any of the things you’re saying to have any power to convince me of anything, and I decline to acknowledge the validity of your analysis. Build it all up for me, from the cogito on up, and then we’ll talk.”

Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes' trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.

Descartes, in other words, was doing something very basic, philosophically speaking—something that is very much prior to talking about “the usual rules” about infinite certainty and all that sort of thing.

At risk of hammering in the point too many times: "prior" does not correspond to "better". Indeed, it is hard to see why one would take this attitude (that "prior" knowledge is somehow more trustworthy than models built on actual reasoning) with respect to a certain subset of questions classed as "philosophical" questions, when virtually every other human endeavor has shown the opposite to be the case: learning more, and knowing more, causes one to make fewer mistakes in one's reasoning and conclusions. If Descartes wants to discount a certain class of reasoning in his quest for truth, I submit that he has chosen to discount the wrong class.

Separately from all that, what you say about the hypothetical computer program (with the print statement) isn’t true. There is a check that’s being run: namely, the ability of the program to execute. Conditional on successfully being able to execute the print statement, it prints something. A program that runs, definitionally exists; its existence claim is satisfied thereby.

A key difference here: what you describe is not a check that is being run by the program, which is important because it is the program that finds itself in an analogous situation to Descartes.

What you say is, of course, true to any outside observer; I, seeing the program execute, can certainly be assured of its existence. But then, I can also say the same of Descartes: if I were to run into him in the street, I would not hesitate to conclude that he exists, and he need not even assert his existence aloud for me to conclude this. Moreover, since I (unlike Descartes) am not interested in the project of "doubting everything", I can quite confidently proclaim that this is good enough for me.

Ironically enough, it is Descartes himself who considers this insufficient. He does not consider it satisfactory for a program to merely execute; he wants the program to know that it is being executed. For this it is not sufficient to simply assert "The program is being run; that is itself the check on its existence"; what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.

And of course, what is sauce for the goose is sauce for the gander; if a program cannot run such a check even in principle, then what reason do I have to believe that Descartes' brain is running some analogous check when he asserts his famous "Cogito, ergo sum"? Far more reasonable, I claim, to suspect that his brain is not running any such check, and that his resulting statement is meaningless at best, and incoherent at worst.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-16T20:21:22.021Z · LW(p) · GW(p)

I trust “the usual rules” far more than I trust the output of Descartes’ brain, especially when the brain in question has chosen to deliberately “set aside” those rules.

But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?

And there the answer is not so obvious. After all, it’s your own brain that stores the rules, your own brain that implements them, your own brain that was convinced of their validity in the first place…

What Descartes is doing, then, is seeing if he can re-generate “the usual rules”, with his own brain (and how else?), having first set them aside. In other words, he is attempting to check whether said rules are “truly part of him”, or whether they are, so to speak, foreign agents who have sneaked into his brain illicitly (through unexamined habit, indoctrination, deception, etc.).

Thus, when you say:

The rules governing correct cognition are clear, comprehensible, and causally justifiable [LW · GW]; the output of human brains that get tangled up in their own thoughts while chasing (potentially) imaginary distinctions is rather… less so. This is true in general, but especially true in this case, since I can see that Descartes’ statement resists precisely the type of causal breakdown that would convince me he was, in fact, emitting (non-confused, coherent) facts entangled with reality.

… Descartes may answer:

“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”

Taking this approach with respect to e.g. optical illusions would result in the idea that parallel lines sometimes aren’t parallel. Our knowledge of basic geometry and logic leads us to reject this notion, and for good reason; we hold (and are justified in holding) greater confidence in our grasp of geometry and logic, than we do in the pure, naked perception we have of the optical illusion in question. The latter may be more “primal” in some sense, but I see no reason more “primal” forms of brain-fumbling should be granted privileged epistemic status; indeed, the very use of the adjective “naive” suggests otherwise.

Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)

And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.

He doubts everything, but ends up realizing that he can’t seem to coherently doubt his own existence, because whoever or whatever he is, he can at least define himself as “whoever’s thinking these thoughts”—and that someone is thinking those thoughts is self-demonstrating.

In short, Descartes has convinced himself of a statement that may or may not be meaningful (but which resists third-person analysis in a way that should be highly suspicious to anyone familiar with the usual rules governing belief structure), and his defense against the charge that he is ignoring the rules is that he’s thought about stuff real hard while ignoring the rules, and the “stuff” in question seems to check out. I consider it quite reasonable to be unimpressed by this justification.

Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t. I consider it quite reasonable to be more impressed with his approach than with yours. If you object, merely consider that someone had to come up with “the usual rules” in the first place—and they did not have said rules to help them.

Certainly. And just as Descartes may feel from his vantage point that he is justified in ignoring the rules, I am justified in saying, from my vantage point, that he is only sabotaging his own efforts by doing so. The difference is that my trust in the rules comes from something explicable, whereas Descartes’ trust in his (naive, unconstrained) reasoning comes from something inexplicable; and I fail to see why the latter should be seen as anything but an indictment of Descartes.

Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?

The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”. At the very least, that closes off the possibility of discovering that the rules contain errors.

At risk of hammering in the point too many times: …

Now, in this paragraph I think you have some strange confusion. I am not quite sure what claim or point of mine you take this to be countering.

… what is needed is for the program to run an internal check that somehow manages to detect its metaphysical status (executing, or merely being subjected to static analysis?). That this is definitionally absurd goes without saying.

Hmm, I think it doesn’t go without saying, actually; I think it needs to be said, and then defended. I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not. I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).

But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…
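
For concreteness, here is a minimal sketch (hypothetical code, purely illustrative) of what such an internal check looks like from the inside, and of why the only answer that can ever actually be received is "yes":

```python
def am_i_running() -> bool:
    # If this body executes at all, the program is running; the "no" case is
    # never observable from the inside, so the check can never return False.
    return True


if __name__ == "__main__":
    # Any answer actually received here is necessarily True -- the anthropic
    # structure described above.
    print(am_i_running())
```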

Replies from: dxu
comment by dxu · 2021-11-18T05:36:34.953Z · LW(p) · GW(p)

But this is not the right question. The right question is, do you trust “the usual rules” more than you trust the output of your own brain (or, analogously, does Descartes trust “the usual rules” more than he trusts the output of his own brain)?

I certainly do! I have observed the fallibility of my own brain on numerous past occasions, and any temptation I might have had to consider myself a perfect reasoner has been well and truly quashed by those past observations. Indeed, the very project we call "rationality" is premised on the notion that our naive faculties are woefully inadequate; after all, one cannot have aspirations of "increasing" one's rationality without believing that one's initial starting point is one of imperfect rationality.

… Descartes may answer:

“Ah, but what is it that reasons thus? Is it not that very same fallible brain of yours? How sure are you that your vaunted rules are not, as you say, ‘imaginary distinctions’? Let us take away the rules, and see if you can build them up again. Or do you imagine that you can step outside yourself, and judge your own thoughts from without, as an impartial arbiter, free of all your biases and failings? None but the Almighty have such power!”

Indeed, I am fallible, and for this reason I cannot rule out the possibility that I have misapprehended the rules, and that my misapprehensions are perhaps fatal. However, regardless of however much my fallibility reduces my confidence in the rules, it inevitably reduces my confidence in my ability to perform without rules by an equal or greater amount; and this seems to me to be right, and good.

...Or, to put it another way: perhaps I am blind, and in my blindness I have fumbled my way to a set of (what seem to me to be) crutches. Should I then discard those crutches and attempt to make my way unassisted, on the grounds that I may be mistaken about whether they are, in fact, crutches? But surely I will do no better on my own, than I will by holding on to the crutches for the time being; for then at least the possibility exists that I am not mistaken, and the objects I hold are in fact crutches. Any argument that might lead me to make the opposite choice is quite wrongheaded indeed, in my view.

Now this is a curious example indeed! After all, if we take the “confidence in our grasp of geometry and logic” approach too far, then we will fail to discover that parallel lines are, in fact, sometimes not parallel. (Indeed, the oldest use case of geometry—the one that gave the discipline its name—is precisely an example of a scenario where the parallel postulate does not hold…)

And this is just the sort of thing we might discover if we make a habit of questioning what we think we know, even down to fundamental axioms.

It is perhaps worth noting that the sense in which "parallel lines are not parallel" which you cite is quite different from the sense in which our brains misinterpret the café wall illusion. And in light of this, it is perhaps also notable that the eventual development of non-Euclidean geometries was not spurred by this or similar optical illusions.

Which is to say: our understanding of things may be flawed or incomplete in certain ways. But we do not achieve a corrected understanding of those things by discarding our present tools wholesale (especially on such flimsy evidence as naive perception); we achieve a corrected understanding by poking and prodding at our current understanding, until such time as our efforts bear fruit.

(In the “crutch” analogy: perhaps there exists a better set of crutches, somewhere out there for us to find. This nonetheless does not imply that we ought to discard our current crutches in anticipation of the better set; we will stand a far better chance of making our way to the better crutches if we rely on the crutches we have in the meantime.)

Once again, you seem to be taking “the usual rules” as God-given, axiomatically immune to questioning, while Descartes… isn’t.

Certainly not; but fortunately this rather strong condition is not needed for me to distrust Descartes' reasoning. What is needed is simply that I trust "the usual rules" more than I trust Descartes; and for further clarification on this point you need merely re-read what I wrote above about "crutches".

Explicable to whom? To yourself, yes? But who or what is it that evaluates these explanations, and judges them to be persuasive, or not so? It’s your own brain, with all its failings… after all, surely you were not born knowing these rules you take to be so crucial? Surely you had to be convinced of their truth in the first place? On what did you rely to judge the rules (not having them to start with)?

The fact is that you can’t avoid using your own “naive, unconstrained” reasoning at some point. Either your mind is capable of telling right reasoning from wrong, or it is not; the recursion bottoms out somewhere. You can’t just defer to “the rules”. At the very least, that closes off the possibility of discovering that the rules contain errors.

I believe my above arguments suffice to answer this objection.

[...] I certainly don’t think it’s obviously true that a program can’t determine whether it’s running or not.

Suppose a program is not, in fact, running. How do you propose that the program in question detect this state of affairs?

I do think that any received answer to such a question can only be “yes” (because in the “no” case, the question is never asked, and thus no answer can be received).

But why is this a problem, any more than it’s a problem that, e.g., the physical laws that govern our universe are necessarily such that they permit our existence (else we would not be here to inquire about them)? This seems like a fairly straightforward case of anthropic reasoning, and we are all familiar with that sort of thing, around here…

If the only possible validation of Descartes' claim to exist is anthropic in nature, then this is tantamount to saying that his cogito is untenable. After all, "I think, therefore I am" is semantically quite different from "I assert that I am, and this assertion is anthropically valid because you will only hear me say it in worlds where it happens to be true."

In fact, I suspect that Descartes would agree with me on this point, and complain that—to the extent you are reducing his claim to a mere instance of anthropic reasoning—you are immeasurably weakening it. To quote from an earlier comment of mine:

It seems to me, on the one hand, that the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely, if the program's output were to be interpreted as having meaning, then it seems obvious that the statement in question ("I exist") is correct, since the program does in fact exist and was run.

But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a "meaningful" statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me as though it captures the spirit of Descartes' cogito; I suspect Descartes himself would be quite unsatisfied with the notion that such a program outputs the statement for the same reasons he does.
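
To make the "no truth-condition" point concrete, here is a minimal sketch (hypothetical, purely illustrative) of the kind of program under discussion:

```python
# The entire program: an unconditional assertion of its own existence.
# By hypothesis there is no internal logic, no conditional structure, and no
# check administered before the claim is emitted; nothing here could have
# come out differently.
print("I exist.")
```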

comment by Rob Bensinger (RobbBB) · 2021-11-11T21:08:17.609Z · LW(p) · GW(p)

Sorry if this comes off as pedantic, but I don't know what this means. The philosopher in me keeps saying "I think we're playing a language game," so I'd like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely? 

We're all philosophers here, this is a safe space for pedantry. :) 

Below, I'll use the words 'phenomenal property' and 'quale' interchangeably.

An example of a phenomenal property is the particular redness of a particular red thing in my visual field.

Geoff would say he's certain, while he's experiencing it, that this property is instantiated.

I would say that there's no such property, though there is a highly similar property that serves all the same behavioral/cognitive/functional roles (and just lacks that extra 'particular redness', and perhaps that extra 'inwardness / inner-light-ness / interiority / subjectivity / perspectivalness' -- basically, lacks whatever aspects make the hard problem seem vastly harder than the 'easy' problems of reducing other mental states to physical ones).

This, of course, is a crazy-sounding view on my part. It's weird that I even think Geoff and I have a meaningful, substantive disagreement. Like, if I don't think that Geoff's brain really instantiates qualia, then what do I think Geoff even means by 'qualia'? How does Geoff successfully refer to 'qualia', if he doesn't have them? Why not just say that 'qualia' refers to something functional?

Two reasons:

  • I think hard-problem intuitions are grounded in a quasi-perceptual illusion, not a free-floating delusion.

    If views like Geoff's and David Chalmers' were grounded in a free-floating delusion, then we would just say 'they have a false belief about their experiences' and stop there.

    If we're instead positing that there's something analogous to an optical illusion happening in people's basic perception of their own experiences, then it makes structural sense to draw some distinction between 'the thing that's really there' and 'the thing that's not really there, but seems to be there when we fall for the illusion'.

    I may not think that the latter concept really and truly has the full phenomenal richness that Geoff / Chalmers / etc. think it does (for the same reason it's hard to imagine a p-zombie having a full and correct conception of 'what red looks like'). But I'm still perfectly happy to use the word 'qualia' to refer to it, keeping in mind that I think our concept of 'qualia' is more like 'a promissory note for "the kind of thing we'd need to instantiate in order to justify hard-problem arguments"' -- it's a p-zombie's notion of qualia, though the p-zombie may not realize it.
     
  • I think the hard-problem reasoning is correct, in that if we instantiated properties like those we (illusorily) appear to have, then physicalism would be false, there would be 'further facts' over and above the physics facts (that aren't logically entailed/constrained by physics), etc.

Basically, I'm saying that a p-zombie's concept of 'phenomenal consciousness' (or we can call it 'blenomenal consciousness' or something, if we want to say that p-zombies lack the 'full' concept) is distinct from the p-zombie's concept of 'the closest functional/reducible analog of phenomenal consciousness'. I think this isn't a weird view. The crazy part is when I take the further step of asserting that we're p-zombies. :)

I don't know Geoff's view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful).

Interesting!

comment by TAG · 2021-11-11T23:03:36.570Z · LW(p) · GW(p)

‘Phenomenal consciousness exists’.

Sorry if this comes off as pedantic, but I don’t know what this means

It doesn't have to mean anything strange or remarkable. It's basically ordinary waking consciousness. If you are walking around noticing sounds, colours, and smells, that's phenomenal consciousness. As opposed to things that actually are strange, like blindsight or sleepwalking.

But it can be overloaded with other, more controversial, ideas, such as the idea that it is incorrigible (how we got on to the subject), or necessarily non-physical.

comment by FeepingCreature · 2021-11-16T04:02:58.467Z · LW(p) · GW(p)

I think it can be reasonable to have 100% confidence in beliefs where the negation of the belief would invalidate the ability to reason, or to benefit from reason. Though with humans, I think it always makes sense to leave an epsilon for errors of reason.

comment by Kerry Vaughan (kerry-vaughan) · 2021-11-11T01:31:23.695Z · LW(p) · GW(p)

I don't think people should be certain of anything

What about this claim itself?

Replies from: dxu
comment by dxu · 2021-11-11T02:13:50.436Z · LW(p) · GW(p)

[Disclaimer: not Rob, may not share Rob's views, etc. The reason I'm writing this comment nonetheless is that I think I share enough of Rob's relevant views here (not least because I think Rob's views on this topic are mostly consonant with the LW "canon" view) to explain. Depending on how much you care about Rob's view specifically versus the LW "canon" view, you can choose to regard or disregard this comment as you see fit.]

I don't think people should be certain of anything

What about this claim itself?

I don't think this is the gotcha [I think] you think it is. I think it is consistent to hold that (1) people should not place infinite certainty in any beliefs, including meta-beliefs about the normative best way to construct beliefs, and that (2) since (1) is itself a meta-belief, it too should not be afforded infinite certainty.

Of course, this conjunction has the interesting quality of feeling somewhat paradoxical, but I think this feeling doesn't stand up to scrutiny. There doesn't seem to me to be any actual contradiction you can derive from the conjunction of (1) and (2); the first seems simply to be a statement of a paradigm that one currently believes to be normative, and the second is a note that, just because one currently believes a paradigm to be normative, does not necessarily mean that that paradigm is normative. The fact that this second note can be construed as coming from the paradigm itself does not undermine it in my eyes; I think it is perfectly fine for paradigms to exist that fail to assert their own correctness.

I think, incidentally, that there are many people who [implicitly?] hold the negation of the above claim, i.e. they hold that (3) a valid paradigm must be one that has faith in its own validity. The paradigm may still turn out to be false, but this ought not be a possibility that is endorsed from inside the paradigm; just as individuals cannot consistently assert themselves to be mistaken about something (even if they are in fact mistaken), the inside of a paradigm ought not be the kind of thing that can undermine itself. If you hold something like (3) to be the case, then and only then does your quoted question become a gotcha.

Naturally, I think (3) is mistaken. Moreover, I not only think (3) is mistaken, I think it is unreasonable, i.e. I think there is no good reason to want (3) to be the case. I think the relevant paradox here is not Moore's, but the lottery paradox, which I assert is not a paradox at all (though admittedly counterintuitive if one is not used to thinking in probabilities rather than certainties).
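
For anyone who wants the lottery paradox made concrete, here is a minimal numeric sketch (the fair, one-winner, 1,000,000-ticket lottery is an arbitrary illustrative assumption):

```python
n = 1_000_000  # assumed number of tickets in a fair lottery with exactly one winner

# Credence that any particular ticket loses: very high, but deliberately not 1.
p_single_ticket_loses = 1 - 1 / n      # 0.999999

# Credence that every ticket loses: exactly 0, since some ticket must win.
p_every_ticket_loses = 0.0

print(f"{p_single_ticket_loses:.6f}")  # 0.999999
print(p_every_ticket_loses)            # 0.0
```

Nothing contradictory is going on once "believe" is read as "assign high probability to" rather than "be certain of"; that is the same move that lets (1) and (2) above coexist.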

[There is also a resemblance here to Gödel's (second) incompleteness theorem, which asserts that sufficiently powerful formal systems cannot prove their own consistency unless they are actually inconsistent. I think this resemblance is more surface-level than deep, but it may provide at least an intuition that (1) there exist at least some "belief systems" that cannot "trust" themselves, and that (2) this is okay.]

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-11-11T03:12:55.695Z · LW(p) · GW(p)

On reflection, it seems right to me that there may not be a contradiction here. I'll post something later if I conclude otherwise.

(I think I got a bit too excited about a chance to use the old philosopher's move of "what about that claim itself.")

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-11T05:34:22.450Z · LW(p) · GW(p)

:) Yeah, it is an interesting case but I'm perfectly happy to say I'm not-maximally-certain about this.

comment by TAG · 2021-11-11T01:35:20.796Z · LW(p) · GW(p)

for panpsychism and emergent dualism alike, for property and substance and ‘aspect’ dualism alike

If you want to claim some definitive disproof of aspect dualism, a minimal requirement would be to engage with it. I've tried talking to you about it several times, and each time you cut off the conversation at your end.

comment by Said Achmiz (SaidAchmiz) · 2021-11-10T18:04:45.329Z · LW(p) · GW(p)

I don’t know to what extent you still endorse the quoted reasoning (as an accurate model of the mistakes being made by the sorts of people you describe), but: it seems clear to me that the big error is in step 2… and it also seems to me that step 2 is a “rookie-level” error, an error that a careful thinker shouldn’t ever make (and, indeed, that people like e.g. David Chalmers do not in fact make).

That is, the Hard Problem shouldn’t lead us to conclude that consciousness isn’t reducible to physics—only that we haven’t reduced it, and that in fact there remains an open (and hard!) problem to solve. But reasoning from the Hard Problem to a positive belief in extra-physical phenomena is surely a mistake…

Replies from: RobbBB, TAG
comment by Rob Bensinger (RobbBB) · 2021-11-10T19:11:20.095Z · LW(p) · GW(p)

? Chalmers is a panpsychist. He totally thinks phenomenal consciousness isn't fully reducible to third-person descriptions.

(I also think you're just wrong, but maybe poking at the Chalmers part will clarify things.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T19:14:10.955Z · LW(p) · GW(p)

Now, hold on: your phrasing seems to suggest that panpsychism either is the same thing as, or entails, thinking that “phenomenal consciousness isn’t fully reducible to third-person descriptions”. But… that’s not the case, as far as I can tell. Did I misunderstand you?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-10T19:23:03.444Z · LW(p) · GW(p)

He's the kind of panpsychist who holds that view because he thinks consciousness isn't fully reducible / third-person-describable. I think this is by far the best reason to be a panpsychist, and it's the only type of panpsychism I've heard endorsed by analytic philosophers working in academia.

I think Brian Tomasik endorses a different kind of panpsychism, which asserts that phenomenal consciousness is eliminable rather than fundamental? So I wouldn't assume that arbitrary rationalist panpsychists are in the Chalmers camp; but Chalmers certainly is!

Replies from: SaidAchmiz, TAG
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T20:21:38.319Z · LW(p) · GW(p)

Hmm. Ok, I think I sort-of see in what direction to head to resolve the disagreement/confusion we’ve got here (and I am very unsure whether I am more confused, of the two of us, or you are, though maybe we both are)… but I don’t think that I can devote the time / mental effort to this discussion at this time. Perhaps we can come back to it another time? (Or not; it’s not terribly important, I don’t think…)

comment by TAG · 2021-11-10T22:15:39.247Z · LW(p) · GW(p)

He’s the kind of panpsychist who holds that view because he thinks consciousness isn’t fully reducible / third-person-describable.

He’s a property dualist because he thinks consciousness isn’t fully reducible / third-person-describable. He also has a commitment to the idea that phenomemal consciousness supervenes on information processing and to the idea that human and biological information processing are not privileged , which all add up to something like panpsychism.

comment by TAG · 2021-11-10T22:02:04.345Z · LW(p) · GW(p)

That is, the Hard Problem shouldn’t lead us to conclude that consciousness isn’t reducible to physics—only that we haven’t reduced it, and that in fact there remains an open (and hard!) problem to solve. But reasoning from the Hard Problem to a positive belief in extra-physical phenomena is surely a mistake

Don't say "surely", prove it.

It's not unreasonable to say that a problem that has remained unsolved for an extended period of time, is insoluble...but it's not necessarily the case either. Your opponents are making a subjective judgement call, and so are you.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T22:19:30.771Z · LW(p) · GW(p)

It’s not unreasonable to say that a problem that has remained unsolved for an extended period of time, is insoluble

No, I’d say it’s pretty unreasonable, actually.

Replies from: TAG
comment by TAG · 2021-11-10T22:31:08.771Z · LW(p) · GW(p)

Don't say it's unreasonable, prove it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T22:40:08.415Z · LW(p) · GW(p)

Prove that a problem is not insoluble? Why don’t you prove that it is insoluble?

The only reasonable stance in this situation is “we don’t have any very good basis for either stance”.

Replies from: TAG
comment by TAG · 2021-11-10T23:34:09.974Z · LW(p) · GW(p)

So both stances are reasonable, which is what I said, but not what you said.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T23:51:51.118Z · LW(p) · GW(p)

Nnnno, I think you're missing Said's point.

Saying that two extremes are both unreasonable is not the same as saying that those extremes are both reasonable.

Said (if I am reading him right) is saying that it is unreasonable (i.e. unjustified) to claim that just because a problem hasn't been solved for an extended period of time, it is therefore insoluble.

To which you (seemed to me to) reply "don't just declare that [the original claim] is unreasonable.  Prove that [the original claim] is unreasonable."

To which Said (seemed to me to) answer "no, I think that there's a strong prior here that the extreme statement isn't one worth making."

My own stance: a problem remaining unsolved for a long time is weak evidence that it's fundamentally insoluble, but you really need a model of why it's insoluble before making a strong claim there.
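
To put a toy number on "weak evidence" (every figure below is a made-up illustrative assumption, not a claim about the actual problem):

```python
# Toy Bayesian update: how much should "still unsolved after a long time" move
# us toward "fundamentally insoluble"?  All numbers are illustrative assumptions.

prior_insoluble = 0.10              # assumed prior that the problem is insoluble
p_unsolved_if_insoluble = 1.00      # insoluble problems stay unsolved
p_unsolved_if_soluble = 0.50        # assumed: hard-but-soluble problems often stay unsolved too

numerator = p_unsolved_if_insoluble * prior_insoluble
denominator = numerator + p_unsolved_if_soluble * (1 - prior_insoluble)
posterior_insoluble = numerator / denominator

print(round(posterior_insoluble, 3))  # 0.182: a real update, but nowhere near certainty
```

A 2:1 likelihood ratio moves the needle a little; it doesn't come close to licensing "definitely insoluble" on its own, which is why the model of why it's insoluble has to do the heavy lifting.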

Replies from: SaidAchmiz, TAG
comment by Said Achmiz (SaidAchmiz) · 2021-11-11T01:39:05.496Z · LW(p) · GW(p)

This is a reasonably accurate reading of my comments, yes.

comment by TAG · 2021-11-11T00:07:54.167Z · LW(p) · GW(p)

Said (if I am reading him right) is saying that it is unreasonable (i.e. unjustified) to claim that just because a problem hasn’t been solved for an extended period of time, it is therefore insoluble.

Which would be true if "reasonable" and "justified" were synonyms, but they are not.

"no, I think that there's a strong prior here that the extreme statement isn't one worth making."

Which statement is the one that is extreme? Is it not extreme to claim an unsolved problem will definitely be solved?

My own stance: a problem remaining unsolved for a long time is weak evidence that it’s fundamentally insoluble,

It's weak evidence, in that it's not justification, but it's some evidence, in that it's reasonable. Who are you disagreeing with?

comment by Vladimir_Nesov · 2021-11-08T11:18:14.800Z · LW(p) · GW(p)

This shapes up as a case study on the dangers of doing very speculative and abstract theory about medium-term planning. (Which might include examples like figuring out what kind of understanding is necessary to actually apply hypothetical future alignment theory in practice...)

The problem is that common sense doesn't work or doesn't exist in these situations, but it's still possible to do actionable planning, and massage the plan into a specific enough form in time to meet reality, so that reality goes according to the plan that on the side of the present adapts to it, even as on the side of the medium-term future it devolves into theoretical epicycles with no common sense propping it up.

This doesn't go bad when it's not in contact with reality, because then reality isn't hurrying it into a form that doesn't fit the emerging intuition of what the theory wants to be. And so it has time to mature into its own thing, or fade away into obscurity, but in any case there is more sanity to it formed of internal integrity. Whereas with a theoretical medium-term plan reality continually butchers the plan, which warps the theory, and human intuition is not good enough to reconcile the desiderata in a sensible way fast enough.

Replies from: Spiracular
comment by Spiracular · 2021-11-08T17:47:21.079Z · LW(p) · GW(p)

On the one hand, I think this is borderline-unintelligible as currently phrased? On the other hand, I think you have a decent point underneath it all.

Let me know if I'm following, while I try to rephrase it.


When insulated from real-world or outer-world incentives, a project can build up a lot of internal-logic and inferential distance by building upon itself repeatedly.

The incentives of insulated projects can be almost artificially-simple? So one can basically Goodhart, or massage data and assessment-metrics, to an incredible degree. This is sometimes done unconsciously.

When such a project finally comes into contact with reality, this can topple things at the very bottom of the structure, which everything else was built upon.

So for some heavily-insulated, heavily-built, and not-very-well-grounded projects, finally coming into exposure with reality can trigger a lot of warping/worldview-collapse/fallout in the immediate term.

Replies from: Spiracular, Vladimir_Nesov
comment by Spiracular · 2021-11-08T18:27:13.878Z · LW(p) · GW(p)

Now to actually comment...

(Ugh, I think I ended up borderline-incoherent myself. I might revisit and clean it up later.)

I think it's worth keeping in mind that "common social reality" is itself sometimes one of these unstable/ungrounded top-heavy many-epicycles self-reinforcing collapses-when-reality-hits structures.

I am beyond-sick of the fights about whether something is "erroneous personal reality vs social reality" or "personal reality vs erroneous social reality," so I'm going to leave simulating that out as an exercise for the reader.

loud sigh

Jumping meta, and skipping to the end.

Almost every elaborate worldview is built on at least some fragile low-level components, and might also have a few robustly-grounded builds in there, if you're lucky.

"Some generalizable truth can be extracted" is more likely to occur, if there were incentives and pressure to generate robust builds.*

* (...God, I got a sudden wave of sympathy for anyone who views Capitalists and Rationalists as some form of creepy scavengers. There is a hint of truth in that lens. I hope we're more like vultures than dogs; vultures have a way better "nutrition to parasite" ratio.)


By pure evolutionary logic: whichever thing adhered closer to common properties of base-reality, and/or was better-trained to generalize or self-update, will usually hold up better when some of its circumstances change. This tends to be part of what boils up when worldview conflicts and cataclysms play out.

I do see "better survival of a worldview across a range of circumstances" as somewhat predictive of attributes that I consider good-to-have in a worldview.

I also think surviving worldviews aren't always the ones that make people the happiest, or allow people to thrive? Sometimes that sucks.

(If anyone wants to get into "everything is all equally-ungrounded social reality?" No. That doesn't actually follow, even from the true statement that "everything you perceive goes through a lens." I threw some quick commentary on that side-branch here [LW(p) · GW(p)], but I mostly think it's off-topic.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-11-08T19:39:58.604Z · LW(p) · GW(p)

I don't know, a lot of this is from discussion of Kuhn: new paradigms/worldviews are not necessarily incentivized to say new things or make sense of new things; even though they do, they just frame them in a particular way. And when something doesn't fit a paradigm, it's ignored. This is good and inevitable for theorizing on a human level, and doesn't inform usefulness or correctness of what's going on, as these things live inside the paradigm.

comment by Vladimir_Nesov · 2021-11-08T19:31:50.174Z · LW(p) · GW(p)

It's about lifecycle of theory development, confronted with incentives of medium-term planning. Humans are not very intelligent, and the way we can do abstract theory requires developing a lot of tools that enable fluency with it, including the actual intuitive fluency that uses the tools to think more rigorously, which is what I call common sense.

My anchor is math, which is the kind of theory I'm familiar with, but the topic of the theory could be things like social structures, research methodologies, or human rationality. So when common sense has an opportunity to form, we have a "post-rigorous" stage where rigid principles (gears) that make the theory lawful can be wielded intuitively. Without getting to this stage, the theory is blind or (potentially) insane. It is blind without intuition or insane when intuition is unmoored from rigor. (It can be somewhat sane when pre-rigorous intuition is grounded in something else, even if by informal analogy.)

If left alone, a theory tends to sanity. It develops principles to organize its intuitions, and develops intuitions to wield its principles. Eventually you get something real that can be seen and shaped with purpose.

But when it's not at that stage, forcing it to change will keep it unsettled longer. If the theory opines about how an organizational medium-term plan works, what it should be, yet it's unsettled, you'll get insane opinions about the plans that shape insane plans. And reality chasing the plan, forcing it to confront what actually happens at present, gives an incentive to keep changing the theory before it's ready, keeping it in this state of limbo.

comment by Linch · 2021-11-10T03:57:56.035Z · LW(p) · GW(p)

Anna Salamon:
So, I think... So, look, I - mm. It's hard to say all the things in all the orders at once. I'm going to say a different thing and then I'll [inaudible], sorry.

So, once upon a time I heard from a couple junior staff members at CFAR that you were saying bad things to them about me and CFAR.

Geoff Anders:
Believe it.

Anna Salamon:
I forget. They weren't particularly false things. So that I don't accidentally [inaudible]-

Typo?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-10T04:37:22.100Z · LW(p) · GW(p)

What's the typo?

Replies from: Benito, Linch
comment by Ben Pace (Benito) · 2021-11-10T04:41:12.568Z · LW(p) · GW(p)

Geoff’s reply sounds super aggressive. I suspect he said “I believe it” or “I can believe it”.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-10T05:20:55.771Z · LW(p) · GW(p)

Oh, I think he said 'Believe it' (in a joking tone of voice) as a shorthand for 'I believe it.'

Replies from: RobbBB, RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-10T08:50:44.262Z · LW(p) · GW(p)

Edited to "[I] believe it."

comment by Rob Bensinger (RobbBB) · 2021-11-10T05:21:14.666Z · LW(p) · GW(p)

Perils of transcript!

comment by Linch · 2021-11-10T05:35:25.700Z · LW(p) · GW(p)

Confused what "believe it" means.

comment by WalterL · 2021-11-09T20:13:27.523Z · LW(p) · GW(p)

This all 'sounds', I dunno, kind of routine?  Like, weird terminology aside, they talked to one another a bunch, then ran out of money and closed down, yeah?  And the Zoe stuff boils down to 'we hired an actress but we are not an acting troupe so after a while she didn't have anything to do, felt useless and bailed'.

I mean, did anything 'real' come out of Leverage?  I don't want to misunderstand here.  This was a bunch of talk about demons and energies and other gibberish, but ultimately it is just 'a bunch of people got together and burned through some money', right?

I dunno, good on em for getting someone to pay them in the first place, I guess.  Talking people into writing the big checks is a big deal skill.  Maybe coach that.

Replies from: Freyja
comment by Freyja · 2021-11-09T23:27:36.260Z · LW(p) · GW(p)

I feel like if you read Zoe’s medium post, read the parts where she described enduring cPTSD symptoms like panic attacks, flashbacks and paranoia consistently for two years after leaving Leverage, and then rounded that off to ‘she felt useless and bailed’ then, idk dude, we live in two different worlds.

Replies from: Viliam
comment by Viliam · 2021-11-10T10:22:16.767Z · LW(p) · GW(p)

That was quite insensitive, I agree, but I think that Walter is asking, from the perspective of Leverage's mission, what exactly they actually did in that direction. Like, only the productivity part, not the work environment part.

If you ignore the abuse and demons and whatever, and only consider "money spent" and "things produced"... that kind of perspective. (Imagine a parallel universe, where no abuse happened, you never heard about the demons, you just sent $10000 to support Leverage because you liked the sales pitch. Would you now consider it money well spent?)

Replies from: Freyja
comment by Freyja · 2021-11-10T18:07:51.810Z · LW(p) · GW(p)

I wouldn’t, but I also wouldn’t consider that to be the case for many of the speculative startups I’ve worked at, in hindsight.

I consider ‘wasting millions of dollars’ to be a shitty thing to do, but also unfortunately common. I think focusing on whether the money was wasted is a distraction from (and perhaps dismissive of) the stories being told.

This may be a crux; Walter may just value personal suffering versus use of economic resources differently to me.

comment by Viliam · 2021-11-08T22:43:00.583Z · LW(p) · GW(p)

To address Geoff's question about how to get out of the rationalist community...

What is it that made some people (including me) mistakenly assume that Geoff/Leverage was somehow connected to the rationalist community? Speaking for myself, I guess it was the fact that Geoff hangs out with famous rationalists, that Leverage when seen from outside kinda looks like the type of project that rationalists might make, and that Leverage actively recruits among rationalists.

Knowing this, and not much more (because of all the secrecy), what exactly should have made me assume that Geoff was not a rationalist, and that Leverage was actually more about fighting evil spirits using touch healing?

So I guess the answer to how to get out of the rationalist community is to reduce your involvement with rationalists, and/or make your epistemic differences a common knowledge. And you are doing quite great in this regard by the way! I assume that people who are now reading about the demons, are not going to make the same mistake again. Unless they mistakenly conclude that it is all just a metaphor for something.

And, to put it bluntly, I suspect that this confusion didn't happen by accident, but was a strategic decision on Leverage's side. It made recruitment among rationalists much easier. (Maybe it also helped with MIRI donors.) Why didn't you approach the CFAR junior staff openly, like: "hey guys, I think rationality is boring, wanna exorcise demons instead?" If they said "hell yeah!", that would be a win/win outcome for both CFAR and Leverage; no hard feelings would result. Instead, if I understand it correctly, even Habryka was shocked to find out that Geoff/Leverage was epistemically quite far from the rationalist community, after he was already working for Leverage. So, I don't think it would be fair to blame outsiders for getting this part wrong.

Now the situation has changed, and the rationalist community's interest in Leverage became a liability. Time to cut ties and find some new company. (Also, burn down the old website, and rename the organization.)

I would be happy to forget about Geoff/Leverage and wish them good luck in their future adventures, but I am still curious about the... uhm... negative experiences reported by some former Leverage employees. Because that topic was mostly ignored in this debate.

Replies from: kerry-vaughan, ChristianKl
comment by Kerry Vaughan (kerry-vaughan) · 2021-11-10T00:12:42.603Z · LW(p) · GW(p)

As of writing (November 9, 2021) this comment has 6 Karma across 11 votes. As a newbie to LessWrong with only a general understanding of LessWrong norms, I find it surprising that the comment is positive. I was wondering if those who voted on this comment (or who have an opinion on it) would be interested in explaining what Karma score this comment should have and why.

My view based on my own models of good discussion norms is that the comment is mildly toxic and should be hovering around zero karma or in slightly negative territory for the following reasons:

  • I would describe the tone as “sarcastic” in a way that makes it hard for me to distinguish between what the OP actually thinks and what they are saying or implying for effect.
  • The post doesn’t seem to engage with Geoff’s perspective in any serious way. Instead, I would describe it as casting aspersions on a straw model of Geoff.
  • The post seems more focused on generating applause lights [LW · GW] via condemnation of Geoff than on trying to explain why Geoff is part of the Rationality community despite his protestation to the contrary. (I could imagine the comment which tries to weigh the evidence about whether Geoff ought to be considered part of the Rationality community even today, but this comment isn't it).
  • The comment repeatedly implies that Leverage was devoted to activities like “fighting evil spirits,” “using touch healing,” “exorcising demons,” etc. even though (1) the post where that is described only covers 2017-2019; (2) doesn’t specify that this kind of activity was common or typical even of her sub-group or of her overall experience; and (3) specifically notes that most people at Leverage didn’t have this experience.

I don’t think the comment is more than mildly toxic because it does raise the valid consideration that Geoff does appear to have positioned himself as at least Rationalist-adjacent early on and because none of the offenses listed above are particularly heinous. I’m sure others disagree with my assessment and I’d be interested in understanding why.

[Context: I work at Leverage now, but didn’t during Leverage 1.0 although I knew many of the people involved. I haven’t been engaging with LessWrong recently because the discussion has seemed quite toxic to me, but Speaking of Stag Hunts [LW · GW] and in particular this comment [LW(p) · GW(p)] made me a little bit more optimistic so I thought I’d try to get a clearer picture of LessWrong’s norms.]


 

Replies from: David Hornbein, Duncan_Sabien, RobbBB
comment by David Hornbein · 2021-11-10T04:35:14.526Z · LW(p) · GW(p)

"6 Karma across 11 votes" is, like, not good. It's about what I'd expect from a comment that is "mildly toxic [but] does raise [a] valid consideration" and "none of the offenses ... are particularly heinous", as you put it. (For better or worse, comments here generally don't get downvoted into the negative unless they're pretty heinous; as I write this only one comment on this post has been voted to zero, and that comment's only response describes it as "borderline-unintelligible".) It sounds like you're interpreting the score as something like qualified approval because it's above zero, but taking into account the overall voting pattern I interpret the score more like "most people generally dislike the comment and want to push it to the back of the line, even if they don't want to actively silence the voice". This would explain Rob calibrating the strength of his downvote over time [LW(p) · GW(p)].

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-11-10T13:54:07.126Z · LW(p) · GW(p)

This is really helpful. Thanks!

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T02:45:21.586Z · LW(p) · GW(p)

I can't speak to either real-Viliam or the people upvoting or downvoting the comment, but here's my best attempt to rewrite the comment in accordance with Duncan-norms (which overlap with but are not the same as LessWrong norms).  Note that this is based off my best-guess interpretation of what real-Viliam was going for, which may not be what real-Viliam wanted or intended.  Also please note that my attempt to "improve" Viliam's comment should not be taken as a statement about whether or not it met some particular standard (even things that are already good enough can usually be made better).  I'm not exactly answering Kerry's question, just trying to be more clear about what I think good discourse norms are.

To address Geoff's question about how to get out of the rationalist community...

I got the impression from Geoff's commentary (particularly [quote] and [quote]) that he felt people were mistaken to typify him and Leverage as being substantially connected to, or straightforwardly a part of, the rationalist community.

This doesn't make much sense to me, given my current level of exposure to all this.  My understanding is that Geoff:

a) hangs out with lots of rationalist cultural leaders
b) regularly communicates using lots of words, concepts, and norms common to the rationalist community
c) actively recruits among rationalists, and 
d) runs Leverage, which seems very much typical of [the type of project a rationalist might launch].

People are free to set me straight on a, b, c, and d, if they think they're wrong, but note that any alternate explanation would need to account for the impression I formed from just kind of looking around; it won't be enough to just declare that I was wrong.

Given all that (and given that a lot of details known to Geoff and Leveragers will be opaque to the rest of us, thanks to the relatively closed nature of the project), I'm not sure how I or the median LWer was "supposed to know" that they weren't closely related to the rationalist community.

But anyway.  Setting that aside, and looking forward: if I were to offer advice, the advice would be to straightforwardly reduce Geoff's/Leverage's involvement with rationalists (move away from the Bay, change hiring practices) and/or to put some effort into injecting the epistemic and cultural differences into common knowledge.  A little ironic to e.g. write a big post about this and put it on LessWrong (like a FB post about how you're leaving FB), but that does seem like a start.

(This is not me being super charitable, but: it seems to me that the whole demons-and-crystals thing, which so far has not been refuted, to my knowledge, is also a start.  /snark)

I don't know how to soften the following, but in the spirit of disclosure:

It's my primary hypothesis that the confusion was not accidental.  In a set of 100 people making the claims Geoff is making, I think a substantial fraction of them are being at least somewhat untruthful, and in a set of 100 people who had intentionally ... parasitized? ... the rationalist community, I think more than half of them would say the sorts of things Geoff is saying now.

I recognize this hypothesis is rude, but its rudeness doesn't make it false.  I'm trying to be clear about the fact that I know it could be wrong, that things aren't always what they seem, etc.

But given what I know, there seem to be clear incentives to remaining close to the rationalist community in ways that match my impression of Geoff/Leverage's actual closeness.  e.g. it makes recruitment among rationalists much easier, makes it easier to find donors already willing to give to weird longtermist projects, etc.  And if the cultural divide were really sharp, the way (it seems to me that) Geoff is saying, and the inferential gaps genuinely wide, then I'm not sure how Leverage would have been successful at attracting the interest of e.g. multiple junior CFAR staff.  I'm reaching for a metaphor, here; what I've got is "I don't think people in seminary school often become rabbis or imams."

To be clear, I'm not saying that there isn't a big gap.  If I understand correctly, habryka was "shocked" to discover how far from central rationalist epistemics Leverage was, after already working there for a time [link].  I'm more saying "for there to be such a big gap and for it to have been so hard to spot at a casual glance is more likely to be explained by intent than by accident."

Or so it seems to me, at least.  Open to alternate explanations.  Just skeptical on priors.

(And given e.g. habryka's confusion, even with all of his local insider knowledge, it seems unreasonable to expect the median LWer or rationalist to have been less confused.)

In any event, the situation has changed.  I'm actually in support of Geoff's desire to part ways; personally I'd rather not spend much more time thinking about Leverage ever again.  But I think it requires some steps that my admittedly-sketchy model of Geoff is loath to take.  I think that "we get to divorce from the rationalists without leaving the Bay and changing the name of the org and changing our recruitment and donor pools and so on and so forth" might be a fabricated option [LW · GW].

Separately, but still pretty relevantly: this conversation didn't touch much on what seems to me to be the actual core issue, which is the experience of Zoe and others.  Understanding what happened, making sure it doesn't happen again, trying to achieve justice (or at least closure), etc.  I am curious, given that the conversation is largely here on LW, now, when LW can expect updates on all that.

Disclaimer: just as authors are not their characters, so too is "Duncan trying to show how X would be expressed under a particular set of norms" not the same as "Duncan asserting X."  I have not, in the above, represented my stance on all of this, just tried to meet Kerry's curiosity/hope about norms of discourse.

My apologies to Viliam for the presumption, especially if I somehow strawmanned or misrepresented Viliam's points.  Viliam is not (exactly) to blame for my own interpretations and projections based on reading the above comment.

Replies from: Viliam, kerry-vaughan
comment by Viliam · 2021-11-10T13:43:23.253Z · LW(p) · GW(p)

For the record, real-Viliam approves that this version mostly correctly (see below) captures the spirit of the original comment, with mixed opinion (slightly more positive than negative) on the style.

Nitpicking:

A little ironic to e.g. write a big post about this and put it on LessWrong (like a FB post about how you're leaving FB), but that does seem like a start.

This thought never crossed my mind. If LW comments on Leverage, it makes perfect sense for Leverage to post a response on LW.

I think that "we get to divorce from the rationalists without leaving the Bay and changing the name of the org and changing our recruitment and donor pools and so on and so forth" might be a fabricated option [LW · GW].

This might be true per se, but is not what I tried to say. By "also, burn down the old website, and rename the organization" I tried (and apparently failed [? · GW]) to say that in my opinion, actions of Geoff/Leverage make more sense when interpreted as "hide the evidence of past behavior" rather than "make it obvious that we are not rationalists".

In my opinion (sorry if this is too blunt), Geoff may be the kind of actor who creates good impressions in the short term and bad impressions in the long term, and some of his actions make sense as an attempt to disconnect his reputation from his past actions. (This could start another long debate. In general, I support the "right to be forgotten" when it refers to the distant past, or when there is good evidence that the person has changed substantially; but it can also be used as a too-convenient get-out-of-jail-free card. Humans gossip for a reason. Past behavior is the best predictor of future behavior.)

comment by Kerry Vaughan (kerry-vaughan) · 2021-11-10T14:10:03.745Z · LW(p) · GW(p)

Thanks a lot for taking the time to write this. The revised version makes it clearer to me what I disagree with and how I might go about responding.

An area of overlap that I notice between Duncan-norms and LW norms is sentences like this:
 

(This is not me being super charitable, but: it seems to me that the whole demons-and-crystals thing, which so far has not been refuted, to my knowledge, is also a start.  /snark)

Where the pattern is something like: "I know this is uncharitable/rude, but [uncharitable/rude thing]." Where I come from, the caveat isn't understood to do any work. If I say "I know this is rude, but [rude thing]", I expect the recipient to take offense to roughly the same degree as if there were no caveat at all, and I expect the rudeness to derail the recipient's ability to think about the topic to roughly the same degree.

If you're interested, I'd appreciate the brief argument for thinking that it's better to have norms that allow for saying the rude/uncharitable thing with a caveat instead of having norms that encourage making a similar point with non-rude/charitable comments.

 

Replies from: Duncan_Sabien, Viliam
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T19:52:59.821Z · LW(p) · GW(p)

Happy to try.

There are sort of two parts to this, but they overlap and I haven't really teased them apart, so sorry if this is a bit muddled.

I think there's a tension between information and adherence-to-norms.

Sometimes we have a rude thought.  Like, it's not just that its easiest expression is rude, it's that the thought itself is fundamentally rude.  The most central example imo is when you genuinely think that somebody is wrong about themselves/their own thought processes/engaging in self-deception/in the grips of a blind spot.  When your best hypothesis is that you actually understand them better than they understand themselves.

It's not really possible to say that in a way that doesn't contain the core sentiment "I think I know better than you," here.  You can do a lot of softening the blow, you can do a lot of hedging, but in the end, you're either going to share your rude information, or you are going to hide your rude information.

Both LW culture and Duncan culture have a strong, endorsed bias toward making as much information shareable as possible.

Duncan culture, at least (no longer speaking for LW) also has a strong bias toward doing things which preserve and strengthen the social fabric.

(Now we're into part two.)

If I express a fundamentally rude thought, but I do so in a super careful hedged and cautious way with all the right phrases and apologies, then what often happens is that the other person feels like they cannot be angry.

They've still been struck, but they were struck in a way that causes everyone else to think the striking was measured and reasonable, and so if they respond with hurt and defensiveness, they'll be the one to lose points.

Even though they were the one who was "attacked," so to speak.

A relevant snippet from another recent comment of mine:

Look, there's this thing where sometimes people try to tell each other that something is okay. Like, "it's okay if you get mad at me."

Which is really weird, if you interpret it as them trying to give the other person permission to be mad.

But I think that's usually not quite what's happening? Instead, I think the speaker is usually thinking something along the lines of:

Gosh, in this situation, anger feels pretty valid, but there's not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way. I don't want to do that, and I don't want them to be holding back, out of a fear that I will do that. So I'm going to signal in advance something like, "I will not resist or punish your anger." Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.

Similarly, yes, it was obvious that the comment was subjective experience. But there's nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience. It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity. It signals to them that you won't take that as an attack, or an attempt to delegitimize your contribution. It makes it easier to see and think clearly, and it gives the other person some handles to grab onto. "I'm not one of those people who's going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth."

So, as I see it, the value in "I admit this is bad but I'm going to do the bad thing" is sort of twofold.

One, it allows people to share information that they would otherwise be prevented from sharing, including "prevented by not having the available time and energy to do all of the careful softening and hedging."  Not everyone has the skill of modeling the audience and speaking diplomatically, and there's value in giving those people a path to saying their piece, but we don't want to abandon norms of politeness and so an accepting-of-the-costs and a taking-of-lumps is one way to allow that data in.

And two, it removes barriers in the way of appropriate pushback.  By acknowledging the rudeness up front, you embolden the people who were offended to be offended in a way that will tend to delegitimize them less.  You're sort of disentangling your action from the norms.  If you just say a rude thing and defend it because "whatev, it's true and justified," then you're also incrementally weakening a bunch of structures that are in place to protect people, and protect cooperation.  But if you say something like "I am going to say a thing that deserves punishment because it's important to say, but then also I will accept the punishment," you can do less damage to the idea that it's important to be polite and charitable in the first place.

Replies from: Vladimir_Nesov, kerry-vaughan
comment by Vladimir_Nesov · 2021-11-11T00:48:52.265Z · LW(p) · GW(p)

tension between information and adherence-to-norms

This mostly holds for information pertaining to norms. Math doesn't need controversial norms, there is no tension there. Beliefs/claims that influence transmission of norms are themselves targeted by norms, to ensure systematic transmission. This is what anti-epistemology is, it's doing valuable work in instilling norms, including norms for perpetuating anti-epistemology.

So the soft taboo on politics is about not getting into a subject matter that norms care about. And the same holds for interpersonal stuff.

comment by Kerry Vaughan (kerry-vaughan) · 2021-11-10T20:37:25.479Z · LW(p) · GW(p)

OK, excellent; this is also quite helpful.

For both my own thought and in high-trust conversations I have a norm that's something like "idea generation before content filter" which is designed to allow one to think uncomfortable thoughts (and sometimes say them) before filtering things out. I don't have this norm for "things I say on the public internet" (or any equivalent norm). I'll have to think a bit about what norms actually seem good to me here.

I think I can be on board with a norm where one is willing to say rude or uncomfortable things provided that (1) they're valuable to communicate and (2) one makes reasonable efforts to nevertheless protect the social fabric and render the statement receivable to the person to whom it is directed. My vague sense of comments of the form "I know this is uncharitable/rude, but [uncharitable/rude thing]" is that more than half of the time the caveat insulates the poster from criticism and does not meaningfully protect the social fabric or help the person to whom the comments are directed, but I haven't read such comments carefully.

In any case, I now think there is at least a good and valid version of this norm that should be distinguished from abuses of the norm.

comment by Viliam · 2021-11-10T22:59:02.739Z · LW(p) · GW(p)

If I tried to make it explicit, I guess the rudeness disclaimer means that the speaker believed there was a politeness-clarity tradeoff, and decided to sacrifice politeness in order to maximize clarity.

If the observer appreciates the extra clarity, and thinks the sacrifice was worth it, the rudeness disclaimer serves as a reminder that they might want to correspondingly reduce the penalty they typically assign for rudeness.

Depending on context, the actual observer may be the addressee and/or a third party. So, if the disclaimer has no effect on you, maybe you were not its intended audience. For example, people typically don't feel grateful for being attacked more clearly.

.

That said, my speech norms are not Duncan's speech norms. From my perspective, if the tone of the message is incongruent with its meaning, it feels like a form of lying. Strong emotions correspond to strong words; writing like a lawyer/diplomat is the equivalent of talking like a robot. (And I don't believe that talking like robots is the proper way for rationalists to communicate.) Gestures and tone of voice are also in theory not necessary to deliver the message.

From my perspective, Duncan-speech is more difficult to read; it feels like if I don't pay sufficient attention to some words between the numerous disclaimers, I may miss the entire point. It's like the text is "no no no no (yes), no no no no (yes), no no no no (yes)", and if you pay enough attention, you may decipher that the intended meaning is "(yes, yes, yes)", but if the repeated disclaimers make you doze off, you might skip the important parts and conclude that he was just saying "no no no no". But, dunno, perhaps if you practice this often, the encoding and decoding happen automatically. I mean, this is not just about Duncan; I also know other people who talk like this, and they seem to understand each other with no problem; it's just me who sometimes needs a translator.

I am trying to be more polite than my natural style, but it costs me some mental energy, and sometimes I am just like fuck this. I prefer to imagine that I am making a politeness-clarity tradeoff, but maybe I'm just rationalizing, and using a convenient excuse to indulge in my baser instincts. Upvote or downvote at your own discretion. I am not even arguing in favor of my style; perhaps I am wrong and shouldn't be doing this; I am probably defecting in some kind of Prisoner's Dilemma. I am just making it clear that not only do I not follow Duncan's speech norms, but I also disagree with them. (That is, I disagree with the idea that I should follow them. I am okay with Duncan following his own norms.)

.

EDIT: I am extremely impressed by Duncan's comment [LW(p) · GW(p)], which I didn't read before writing this. On reflection, this feels weird, because it makes me feel that I should take Duncan's arguments more seriously... potentially including his speech norms... oh my god... I probably need to sleep on this.

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-11-10T23:51:15.809Z · LW(p) · GW(p)

This comment is excellent. I really appreciate it. 

I probably share some of your views on the "no no no no (yes),  no no no no (yes), no no no no (yes)" thing, and we don't want to go too far with it, but I've come to like it more over time. 

(Semi-relatedly: I think I unfairly rejected the Sequences when I first encountered them, for something like this kind of stylistic objection. Coming from a philosophical background, I was like "Where are the premises? What is the argument? Why isn't this stated more precisely?" Over time I've come to appreciate the psychological effect of these kinds of writing styles and value that more than raw precision.)

comment by Rob Bensinger (RobbBB) · 2021-11-10T02:57:38.797Z · LW(p) · GW(p)

FWIW I downvoted Viliam's comment soon after he posted it, and have strong-downvoted it now that it has more karma.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-10T05:40:04.875Z · LW(p) · GW(p)

I, on the other hand, strong-upvoted it (and while I didn’t downvote Kerry’s reply, I must say that I find such “why aren’t you downvoting this comment, guys? doesn’t it break the rules??” comments to be obnoxious in general).

Replies from: sil-ver, kerry-vaughan
comment by Rafael Harth (sil-ver) · 2021-11-10T22:46:01.597Z · LW(p) · GW(p)

I find this kind of question really valuable. The karma system has massive benefits, but it can also be emotionally tough, especially for people with status-regulating emotions. In my experience, discussing reasons for voting explicitly usually makes me feel better about it, even though I don't have a gears model of why that is; I'm just reporting on observed data points. Maybe because it provides affirmation that we're basically all trying to do the right thing rather than fight some kind of zero-sum game.

comment by Kerry Vaughan (kerry-vaughan) · 2021-11-10T14:33:41.381Z · LW(p) · GW(p)

That seems basically fair. 

An unendorsed part of my intention is to complain about the comment since I found it annoying. Depending on how loudly that reads as being my goal, my comment might deserve to be downvoted to discourage focusing the conversation on complaints of this type.

The endorsed part of my intention is that the LW conversations about Leverage 1.0 would likely benefit from commentary by people who know what actually went on in Leverage 1.0. Unfortunately, the set of "people who have knowledge of Leverage 1.0 and are also comfortable on LW" is really small. I'm trying to see if I am in this set by trying to understand LW norms more explicitly. This is admittedly a rather personal goal, and perhaps it ought to be discouraged for that reason, but I think indulging me a little bit is consonant with the goals of the community as I understand them.

Also, to render an implicit thing I'm doing explicit, I think I keep identifying myself as an outsider to LW as a request for something like hospitality. It occurs to me that this might not be a social form that LW endorses! If so, then my comment probably deserves to be downvoted from the LW perspective.

Replies from: Viliam
comment by Viliam · 2021-11-11T00:49:04.620Z · LW(p) · GW(p)

I hope you will feel comfortable here. I think you are following the LW norms quite okay. You seem to take the karma too seriously, but that's what new users are sometimes prone to do; karma is an important signal, but it also inevitably contains noise; in the long term it usually seems to work okay. If that means something to you, your comments are upvoted a lot.

I apologize for the annoying style of my comment. I will try to avoid it in the future, though I cannot in good faith promise that I will; sorry about that.

I sincerely believe that Geoff is a dangerous person, and I view his actions with great suspicion. This is not meant as an attack on you. Feel free to correct me whenever I am factually wrong; I prefer being corrected to staying mistaken. (Also, thanks to both Rob and Said for doing what they believed was the right thing.)

Unfortunately, the set of "people who have knowledge of Leverage 1.0 and are also comfortable on LW" is really small.

[Biting my tongue hard to avoid a sarcastic response. Trying to channel my inner Duncan. Realizing that I am actually trying to write a sarcastic response using mock-Duncan's voice. Sheesh, this stuff is difficult... Am I being meta-sarcastic now? By the way, Wikipedia says that sarcasm is illegal in North Korea; I am not making this up...]

I am under the impression that (some) Leverage members signed non-disclosure agreements. Therefore, when I observe the lack of Leverage supporters on LW, there are at least two competing explanations matching the known data, and I am not sure how to decide which one is closer to reality:

  • the rationalist community and LW express a negative attitude towards people supporting Leverage, so they avoid an environment they perceive as unfriendly;
  • people involved with Leverage cannot speak openly about Leverage... maybe only about some aspects of it, but not discussing Leverage at all helps them stay on the safe side;

and perhaps, also some kind of "null hypothesis" is worth considering, such as:

  • LW only attracts a small fraction of the population; only a few people have insider knowledge of Leverage; it is not unlikely that the intersection of these two sets just happens to be empty.

Do I understand you correctly as suggesting that the negative attitude of LW towards Leverage is the actual reason why we do not have more conversations about Leverage here? I am aware of some criticism of Connection Theory on LW; is this what you have in mind, or something else? (Well, obviously there's Zoe's article, but that only happened recently, so it can't explain the absence of Leverage supporters before that.)

To me it seems that the combination of "Geoff prefers some level of secrecy about Leverage activities" + "Connection Theory was not well received on LW" + "there are only a few people in Leverage anyway" is a sufficient explanation of why the Leverage voices have been missing on LW. Do you have some evidence that contradicts this?

comment by ChristianKl · 2021-11-09T10:20:22.879Z · LW(p) · GW(p)

Why didn't you approach the CFAR junior staff openly, like: "hey guys, I think rationality is boring, wanna exorcise demons instead?" 

That assumes that Geoff was interested in exorcising demons, when nothing in the public information indicates that.

In the latest Twitch stream, Geoff talked about trying to figure out whether someone actually did a seance; after he asked people, nobody said they did, and rumors of them having an Aleister Crowley book club also didn't seem to have been confirmed to him.

The interactions with the energy healers seem to have been in 2018/19, which is long after the relationship soured.