Comments

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-11T20:32:32.707Z · LW · GW

'Phenomenal consciousness exists'.

Sorry if this comes off as pedantic, but I don't know what this means. The philosopher in me keeps saying "I think we're playing a language game," so I'd like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely? 

Because the logical structure is trivial -- Descartes might just as well have asked 'could a deceiver make 2 + 2 not equal 4?'

[...]

I'd guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!

I don't know Geoff's view, but Descartes thinks he can be deceived about mathematical truths (I can dig up the relevant sections from the Meditations if helpful). That's not the same as "treating them as probabilistic statements," but I think it's functionally the same from your perspective. 

The project of the Meditations is that Descartes starts by refusing to accept anything that can be doubted and then tries, nevertheless, to build a system of knowledge from there. I don't think Descartes would assign infinite certainty to any claim except, perhaps, the cogito.

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-11T03:12:55.695Z · LW · GW

On reflection, it seems right to me that there may not be a contradiction here. I'll post something later if I conclude otherwise.

(I think I got a bit too excited about a chance to use the old philosopher's move of "what about that claim itself.")

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-11T01:56:11.705Z · LW · GW

It's not clear what "I" means here . . .

Oh, sorry, this was a quote from Descartes; it's the closest thing that actually appears in the Meditations to "I think, therefore I am" (which doesn't expressly appear there).

Descartes's idea doesn't rely on any claims about persistent psychological entities (that would require the supposition of memory, which Descartes isn't ready to accept yet!). Instead, he postulates an all-powerful entity that is specifically designed to deceive him and tries to determine whether anything at all can be known given that circumstance. He concludes that he can know that he exists because something has to do the thinking. Here is the relevant quote from the Second Meditation:

I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind.

I find this pretty convincing personally. I'm interested in whether you think Descartes gets it wrong even here or whether you think his philosophical system gains its flaws later.


More generally, I'm still not quite sure what precise claims or what type of claim you predict you and Geoff would disagree about. My-model-of-Geoff suggests that he would agree that "it seems fine to say that there's some persistent psychological entity roughly corresponding to the phrase 'Rob Bensinger'" and that "thinking," "experience," etc. pick out "real" things (depending on what we mean by "real").

Can you identify a specific claim type where you predict Geoff would think that the claim can be known with certainty and you would think otherwise?

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-11T01:31:23.695Z · LW · GW

I don't think people should be certain of anything

What about this claim itself?

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T23:51:15.809Z · LW · GW

This comment is excellent. I really appreciate it. 

I probably share some of your views on the "no no no no (yes), no no no no (yes), no no no no (yes)" thing, and we don't want to go too far with it, but I've come to like it more over time.

(Semi-relatedly: I think I unfairly rejected the Sequences when I first encountered them, for something like this kind of stylistic objection. Coming from a philosophical background, I was like "Where are the premises? What is the argument? Why isn't this stated more precisely?" Over time I've come to appreciate the psychological effect of these kinds of writing styles and to value that more than raw precision.)

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T23:42:16.682Z · LW · GW

It seems to me that you're arguing against a view in the family of claims that includes "It seems like the one thing I can know for sure is that I'm having these experiences," but I'm having trouble determining the precise claim you are refuting. I think this is because I'm not sure which claims are meant precisely and which are meant rhetorically or directionally.

Since this is a complex topic with lots of potential distinctions to be made, it might be useful to determine your views on a few different claims in the family of "It seems like the one thing I can know for sure is that I'm having these experiences" to see where the disagreement lies.

Below are some claims in this family. Can you pinpoint which you think are fallible and which you think are infallible (if any)? Assuming that many or most of them are fallible, can you give me a sense of something like "how susceptible to fallibility" you think they are? (Also, if you don't mind, it might be useful to distinguish your views from what your-model-of-Geoff thinks, to help pinpoint disagreements.) Feel free to add additional claims if they seem like they would do a better job of pinpointing the disagreement.

  1. I am, I exist (i.e., the Cartesian cogito).
  2. I am thinking.
  3. I am having an experience.
  4. I am experiencing X.
  5. I experienced X.
  6. I am experiencing X because there is an X-producing thing in the world.
  7. I believe X.
  8. I am having the experience of believing X.

Edit: Wrote this before seeing this comment, so apologies if this doesn't interact with the content there.

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T20:59:29.173Z · LW · GW

Rob: Where does the reasoning chain from 1 to 3a/3b go wrong in your view? I get that you think it goes wrong in that the conclusions aren't true, but what is your view about which premise is wrong or why the conclusion doesn't follow from the premises?

In particular, I'd be really interested in an argument against the claim "It seems like the one thing I can know for sure is that I'm having these experiences."

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T20:37:25.479Z · LW · GW

OK, excellent. This is also quite helpful.

For both my own thought and in high-trust conversations I have a norm that's something like "idea generation before content filter" which is designed to allow one to think uncomfortable thoughts (and sometimes say them) before filtering things out. I don't have this norm for "things I say on the public internet" (or any equivalent norm). I'll have to think a bit about what norms actually seem good to me here.

I think I can be on board with a norm where one is willing to say rude or uncomfortable things provided they're (1) valuable to communicate and (2) accompanied by reasonable efforts to nevertheless protect the social fabric and render the statement receivable to the person to whom it is directed. My vague sense of comments of the form "I know this is uncharitable/rude, but [uncharitable/rude thing]" is that, more than half of the time, the caveat insulates the poster from criticism and does not meaningfully protect the social fabric or help the person to whom the comments are directed, but I haven't read such comments carefully.

In any case, I now think there is at least a good and valid version of this norm that should be distinguished from abuses of the norm.

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T14:33:41.381Z · LW · GW

That seems basically fair. 

An unendorsed part of my intention is to complain about the comment since I found it annoying. Depending on how loudly that reads as being my goal, my comment might deserve to be downvoted to discourage focusing the conversation on complaints of this type.

The endorsed part of my intention is that the LW conversations about Leverage 1.0 would likely benefit from commentary by people who know what actually went on in Leverage 1.0. Unfortunately, the set of "people who have knowledge of Leverage 1.0 and are also comfortable on LW" is really small. I'm trying to see if I am in this set by trying to understand LW norms more explicitly. This is admittedly a rather personal goal, and perhaps it ought to be discouraged for that reason, but I think indulging me a little bit is consonant with the goals of the community as I understand them.

Also, to render an implicit thing I'm doing explicit, I think I keep identifying myself as an outsider to LW as a request for something like hospitality. It occurs to me that this might not be a social form that LW endorses! If so, then my comment probably deserves to be downvoted from the LW perspective.

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T14:10:03.745Z · LW · GW

Thanks a lot for taking the time to write this. The revised version makes it clearer to me what I disagree with and how I might go about responding.

An area of overlap that I notice between Duncan-norms and LW norms is sentences like this:

(This is not me being super charitable, but: it seems to me that the whole demons-and-crystals thing, which so far has not been refuted, to my knowledge, is also a start.  /snark)

Where the pattern is something like: "I know this is uncharitable/rude, but [uncharitable/rude thing]." Where I come from, the caveat isn't understood to do any work. If I say "I know this is rude, but [rude thing]," I expect the recipient to take offense to roughly the same degree as if there were no caveat at all, and I expect the rudeness to derail the recipient's ability to think about the topic to roughly the same degree.

If you're interested, I'd appreciate the brief argument for thinking that it's better to have norms that allow for saying the rude/uncharitable thing with a caveat, rather than norms that encourage making a similar point with non-rude/charitable comments.

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T13:54:07.126Z · LW · GW

This is really helpful. Thanks!

Comment by Kerry Vaughan (kerry-vaughan) on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-10T00:12:42.603Z · LW · GW

As of this writing (November 9, 2021), this comment has 6 Karma across 11 votes. As a newbie to LessWrong with only a general understanding of LessWrong norms, I find it surprising that the comment is positive. I was wondering if those who voted on this comment (or who have an opinion on it) would be interested in explaining what Karma score this comment should have and why.

My view, based on my own models of good discussion norms, is that the comment is mildly toxic and should be hovering around zero karma or in slightly negative territory, for the following reasons:

  • I would describe the tone as “sarcastic” in a way that makes it hard for me to distinguish between what the OP actually thinks and what they are saying or implying for effect.
  • The post doesn’t seem to engage with Geoff’s perspective in any serious way. Instead, I would describe it as casting aspersions on a straw model of Geoff.
  • The post seems more focused on generating applause lights via condemnation of Geoff than on trying to explain why Geoff is part of the Rationality community despite his protestations to the contrary. (I could imagine the comment which tries to weigh the evidence about whether Geoff ought to be considered part of the Rationality community even today, but this comment isn’t it.)
  • The comment repeatedly implies that Leverage was devoted to activities like “fighting evil spirits,” “using touch healing,” “exorcising demons,” etc., even though the post where those activities are described (1) only covers 2017-2019; (2) doesn’t specify that this kind of activity was common or typical even of her sub-group or of her overall experience; and (3) specifically notes that most people at Leverage didn’t have this experience.

I don’t think the comment is more than mildly toxic because it does raise the valid consideration that Geoff does appear to have positioned himself as at least Rationalist-adjacent early on and because none of the offenses listed above are particularly heinous. I’m sure others disagree with my assessment and I’d be interested in understanding why.

[Context: I work at Leverage now, but didn’t during Leverage 1.0, although I knew many of the people involved. I haven’t been engaging with LessWrong recently because the discussion has seemed quite toxic to me, but Speaking of Stag Hunts, and in particular this comment, made me a little bit more optimistic, so I thought I’d try to get a clearer picture of LessWrong’s norms.]

Comment by Kerry Vaughan (kerry-vaughan) on Zoe Curzi's Experience with Leverage Research · 2021-10-21T02:43:36.134Z · LW · GW

The most directly 'damning' thing, as far as I can tell, is Geoff pressuring people to sign NDAs.

I received an email from a Paradigm board member on behalf of Paradigm and Leverage that aims to provide some additional clarity on the information-sharing situation here. Since the email specifies that it can be shared, I've uploaded it to my Google Drive (with some names and email addresses redacted). You can view it here.

The email also links to the text of the information-sharing agreement in question with some additional annotations.

[Disclosure: I work at Leverage, but did not work at Leverage during Leverage 1.0. I'm sharing this email in a personal rather than a professional capacity.]

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-10-01T23:00:59.804Z · LW · GW

Instead, what I'd be curious to know is whether they have the integrity to be proactively transparent about past mistakes, radically changed course when it comes to potentially harmful practices, and refrain from using any potentially harmful practices in cases where it might be advantageous on a Machiavellian-consequentialist assessment.

I think skepticism about nice words without difficult-to-fake evidence is warranted, but I also think some of this evidence is already available.

For example, I think it's relatively easy to verify that Leverage is a radically different organization today. The costly investments we've made in history-of-science research provide the clearest example, as does the fact that we're no longer pursuing any new psychological research.

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-30T00:15:58.643Z · LW · GW

This is a good point. I think I reacted too harshly. I've added an apology to orthonormal to the original comment.

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-29T20:10:43.834Z · LW · GW

Assuming something like this represents your views, Freyja, I think you’ve handled the situation quite well.

I hope you can see how that is quite different from the comment I was replying to, which was written by someone who appears to have met Geoff once. I'm sure you can similarly imagine how you would feel if people made comments like the one from orthonormal about friends of yours without knowing them.

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-29T02:52:18.640Z · LW · GW

What an incredibly rude thing to say about someone. I hope no one ever posts their initial negative impressions upon meeting you online for everyone to see.

Geoff Anders is a real person. Stop treating him like he's not.

Added: This comment was too harsh given the circumstance. My apologies to orthonormal for overreacting.

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-28T23:02:46.855Z · LW · GW

Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counteract in posts like the OP.

This seems pretty unfair to me, and I believe we’re trying quite hard not to hide the legacy of Leverage 1.0. For example, (1) we specifically chose to keep the Leverage name; (2) we are transparent about our intention to stand up for Leverage 1.0; and (3) Geoff’s association with Leverage 1.0 is quite clear from his personal website. Additionally, given the state of Leverage’s PR after Leverage 1.0 ended, the decision to keep the name was quite costly and stemmed from a desire to preserve the legacy of Leverage 1.0.

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-28T14:11:43.154Z · LW · GW

I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

I'm not hiding my connection to Leverage, which is why I used my real name, mentioned that I work at Leverage in other comments, and used "we" in connection with a link to Leverage's case studies. I used "they" to refer to Leverage 1.0 since I didn't work at Leverage during that time.

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-28T14:08:15.040Z · LW · GW

I don't think that's my account actually. It's entirely possible that I never created a LW account before now.

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-28T01:05:06.745Z · LW · GW

This demand for secrecy is an blatant excuse used to obstruct oversight and to prevent peer review. What you're doing is the opposite of science.

Interestingly, "peer review" occurs pretty late in the development of scientific culture. It's not something we see in our case studies on early electricity, for example, which currently cover the period between 1600 and 1820. 

What we do see throughout this history is a norm of researchers sharing their findings with others interested in the same topics. It's an open question whether Leverage 1.0 violated this norm. On the one hand, they had a quite vibrant and open culture around their findings internally and did seek out others who might have something to offer to their project. On the other hand, they certainly didn't make any of this easily accessible to outsiders. I'm inclined to think they violated some scientific norms in this regard, but I think the work they were doing is pretty clearly science, albeit early-stage science.

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-28T00:55:57.788Z · LW · GW

I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.

At its core, labeling a group as a cult is an out-grouping power move used to distance the audience from that group’s perspective. You don’t need to understand their thoughts, explain their behavior, or form a judgment on their merits. They’re a cult.

This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, the hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take-over-the-world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside world, these are cult-like behaviors. They do not seem cultish to Rationalists because the Rationality community is a well-liked ingroup and not a distrusted outgroup.

My understanding is that historically the Rationality community has had some difficulty in protecting itself from parasitic bad actors who have used their affiliation with this community to cause serious harm to others. Given that context, I understand why revisiting the topic of early Leverage might be compelling. I would suggest that the cult/no cult question will not be helpful here because the answer depends so largely on whether people liked or didn’t like Leverage. I think past events should demonstrate that this is not a reliable indicator of parasitic bad actors.

Some questions I would ask instead include: Did this group represent that they were affiliated with Rationality in order to achieve their ends? If so, did they engage in activities that are contrary to the norms of the Rationality community? Were people harmed by this group? If so, was that harm abnormal given the social context? Was that harm individual or institutional? Did those involved act responsibly given the circumstances? Etc. 

Given my knowledge of Leverage 1.0 and my knowledge of the Rationality community, I am quite confident that Leverage was not the parasitic bad actor that you are looking for, but I think this is something the Rationality community should determine for itself and this seems like a fine time to do so.

However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.

(I currently work at Leverage Research but did not work at Leverage during Leverage 1.0, although I interacted with Leverage 1.0 and know many of the people involved. Before working at Leverage, I did EA community building at CEA between Summer 2014 and early 2019.)

Comment by Kerry Vaughan (kerry-vaughan) on Common knowledge about Leverage Research 1.0 · 2021-09-28T00:42:16.578Z · LW · GW

I appreciate the edit, Viliam.

I know it was a meme about Leverage 1.0 that it was impossible to understand, but I think that is pretty unfair today. If anyone is curious, here are some relevant links:

We're no longer engaged with the Rationality community so this information might not have become common knowledge. Hopefully, this helps.