We must be very clear: fraud in the service of effective altruism is unacceptable

post by evhub · 2022-11-10T23:31:06.422Z · LW · GW · 56 comments

Comments sorted by top scores.

comment by johnswentworth · 2022-11-11T00:39:29.029Z · LW(p) · GW(p)

While I don't disagree with the object-level point of this post, I generally think things of the form "We should all condemn X!" belong on social media, not on LessWrong.

"Let's all condemn X" is a purely political topic for most values of X. This post in particular is worded in a way which gives a very strong vibe of encouraging groupthink, and of encouraging soldier-mindset (i.e. the counterpart to scout mindset), and of encouraging people to play simulacrum level 3+ games [? · GW] rather than focus on physical reality. In short, it is exactly the sort of thing which I do not want on LessWrong, even when I agree with the goals it's ultimately trying to achieve.

Strong downvoted.

Replies from: evhub, ege-erdil, Viliam
comment by evhub · 2022-11-11T01:30:45.899Z · LW(p) · GW(p)

I'm not sure what your view is on the utility of LessWrong as a medium, but I think it's primarily useful as a way to build common knowledge, share information, and coordinate as a community. In that respect, I think this is an extremely important thing for us to coordinate and build common knowledge on.

Replies from: johnswentworth, 1a3orn, habryka4, Benito, MondSemmel
comment by johnswentworth · 2022-11-11T02:38:57.853Z · LW(p) · GW(p)

LW's stated mission, IIRC, is roughly "accelerate intellectual progress on the important problems facing humanity", and I think that's a basically-accurate description of the value LessWrong provides. The primary utility of LessWrong is in cultural norms and a user base conducive to that mission.

For example, comment boxes on every frontpage post have these guidelines:

  • Aim to explain, not persuade
  • Try to offer concrete models and predictions
  • If you disagree, try getting curious about what your partner is thinking
  • Don't be afraid to say 'oops' and change your mind

LessWrong's primary utility is a culture which makes things like that part of a natural, typical communication style.

I would say that a core part of that culture is to generally try to stay on low simulacrum levels [? · GW] - talk literally and directly about our actual models of the world, and mostly not choose our words as moves in a social game. Insofar as simulacrum level 3 is a coordination strategy [LW · GW], that means certain kinds of coordination need to happen somewhere else besides LessWrong. And at current margins, that's a very worthwhile tradeoff! By default, humans tend to turn every available communication channel into a coordination battleground, so there are few spaces out there which stay at low simulacrum levels, and the marginal value of such spaces is therefore quite high. Thus the value of LessWrong: it's primarily a forum for intellectual progress, i.e. improving our own understanding, not a forum for political coordination.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2022-11-11T09:31:22.452Z · LW(p) · GW(p)

While it's true that simulacrum level 3 is a coordination strategy, I feel that we should be able to build a community that can coordinate while staying on simulacrum level 1. This means we're allowed to say things like, "I publicly commit to following a system of norms where [something, e.g. using fraud for EA funding] is prohibited". That is, replace "playing a game while pretending to talk about facts" with "playing a game while being very explicit about the rules and the moves". Maybe Evan's choice of language was suboptimal in some ways, but there needs to be some way to say it that doesn't have to be banished to social media. Among other reasons, I don't want to rely on social media for anything, and personally I don't use or follow social media at all (I don't even have an account on anything except LinkedIn).

comment by 1a3orn · 2022-11-11T02:07:42.566Z · LW(p) · GW(p)

Yes, and to expand only slightly: Coordinating against dishonest agents or practices is an extremely important part of coordination in general; if you cannot agree on removing dishonest agents or practices from your own group, the group will likely be worse at accomplishing goals; groups that cannot remove dishonest instances will be correctly distrusted by other groups and individuals.

All of these are important and worth coordinating on, which I think sometimes means "Let's condemn X" makes sense even though the outside view suggests that many instances of "Let's condemn X" are bad. Some inside view is allowed.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2022-11-11T03:11:12.043Z · LW(p) · GW(p)

if you cannot agree on removing dishonest agents or practices from your own group

What group, though? I'm not aware of Sam Bankman-Fried having posted on Less Wrong (a website for hosting blog posts on the subject matter of human rationality). If he did write misleading posts or comments on this website, we should definitely downvote them! If he didn't, why is this our problem?

(That is to say less rhetorically, why should this be our problem? Why can't we just be a website where anyone can post articles about probability theory or cognitive biases, rather than an enforcement arm of the branded "EA" movement, accountable for all its sins?)

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2022-11-12T23:09:31.721Z · LW(p) · GW(p)

Because it is branded with the EA movement by being lesswrong.com. It cannot be unbranded except by changing the associations people actually make; its true position in the latent network of relationships online makes it associated. You may not be aware of your position in a larger organism, but that doesn't mean you aren't in one just because you only want to focus on the contents of your own cell. If you insist on not thinking about the larger organisms you participate in, that's alright, but it makes you a skin cell, not a nerve cell.

Edit: I suppose a basic underlying viewpoint I have is that all signaling is done by taking actions, and the only actions worth taking are ones that send signals into the universe that shape the universe towards the forms you wish it to have. Lifting something off the ground is signaling, and signaling is measured in watts. False signals are lying; don't do those, they're worse than useless. Putting map signals into another brain that do not match the signals you're sending into the territory is dishonesty, and the false signals themselves are the thing under question, which needs to be repaired into honesty by example.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2022-11-13T09:22:02.909Z · LW(p) · GW(p)

Because it is branded with the EA movement by being lesswrong.com.

What does the name "lesswrong" have to do with EA? There's a certain overlap between the two communities, but LessWrong's mission has nothing to do with EA specifically. To the extent that it has any mission beyond the one on its face, raising the sanity waterline, that mission (historically, Eliezer's mission) was to get people to think properly about AI and avert the coming doom.

FWIW, I am not and never have been an EA and do not read or participate in EA forums, but I've been on LW since it began on OvercomingBias. If it became "an enforcement arm of the branded "EA" movement, accountable for all its sins" I would leave.

comment by habryka (habryka4) · 2022-11-11T02:35:43.195Z · LW(p) · GW(p)

I agree that there is value in common-knowledge building, but there is a difference between doing something that feels social-miasma- or simulacrum-level-3-shaped, where you assert that "WE ALL AGREE THAT X", and arguing that something is a good idea while noting that you currently believe lots of other people believe the same.

I think coordinating against dishonest practices is important, but I don't think that, in order to do that, we have to move away from making primarily factual statements and describing our own belief states, or invent some kind of group-level belief.

Replies from: evhub
comment by evhub · 2022-11-11T02:43:52.557Z · LW(p) · GW(p)

Where do you think I make any claims that “everyone agrees X” as opposed to “I think X”? In fact, rereading my own writing, I think I was quite clear that everything therein was my view and my view alone.

Replies from: habryka4
comment by habryka (habryka4) · 2022-11-11T03:49:09.148Z · LW(p) · GW(p)

I think the title is the biggest problem here:

We must be very clear: fraud in the service of effective altruism is unacceptable

There is no "I think" here, no "I believe". At least to me it feels very much like a warcry instead of a statement about the world.

to make clear that we don't support fraud in the service of effective altruism.

This is also a call to action to change some kind of collective belief. I agree that you might have meant "we individually don't support fraud", but the "in the service of effective altruism" gives me a sense of this being a reference to a collective belief of effective altruism.

I do agree you have overall been pretty clear, and I appreciate the degree to which you ground things in your personal beliefs, but I do think the title as well as the central call to action of the post goes against that.

Replies from: evhub
comment by evhub · 2022-11-11T03:57:34.284Z · LW(p) · GW(p)

I agree that the title does directly assert a claim without attribution, and that it could be misinterpreted as a claim about what all EAs think should be done rather than just what I think should be done. It's a bit tricky because I want the title to be very clear, but am quite limited in the words I have available there.

I think the latter quote is pretty disingenuous—if you quote the rest of that sentence, the beginning is “I think the best course of action is”, which makes it very clear that this is a claim about what I personally believe people should do:

Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don't support fraud in the service of effective altruism.

To be clear, “in the service of effective altruism” there is meant to refer to fraud done for the purpose of advancing effective altruism, not that we have an obligation to not support fraud and that obligation is in the service of effective altruism.

Edit: To make that last point more clear, I changed "to make clear that we don't support fraud in the service of effective altruism" to "to make clear that we don't support fraud done in the service of effective altruism".

Replies from: habryka4
comment by habryka (habryka4) · 2022-11-11T05:03:57.784Z · LW(p) · GW(p)

I still get a strong feeling of groupthink every time I see the title of the post, and feel a strong sense of something invading into my thought-space in a way that feels toxic to me. For some reason this feels even stronger in the Twitter post you made.

I don't know, I just feel like this is some kind of call-to-action that is trying to bypass my epistemic defenses. 

Replies from: evhub, lahwran
comment by evhub · 2022-11-11T05:08:44.952Z · LW(p) · GW(p)

The Twitter post is literally just title + link. I don't like Twitter, and don't want to engage on it, but I figured posting this more publicly would be helpful, so I did the minimum thing to try to direct people to this post.

From my perspective, I find it pretty difficult to be criticized for a “feeling” that you get from my post that seems to me to be totally disconnected from anything that I actually said.

Replies from: habryka4, WilliamKiely, TekhneMakre
comment by habryka (habryka4) · 2022-11-11T05:33:19.894Z · LW(p) · GW(p)

Yeah, I am sorry. Like, I don't think I currently have the energy to try to communicate all the subtle things that feel wrong to me about this, but it adds up to something I quite dislike.

I wish I had a more crystallized quick summary that I expect to cross the inferential distance quickly, but I don't currently.

comment by WilliamKiely · 2022-11-15T02:08:04.415Z · LW(p) · GW(p)

FWIW when I first saw the title (on the EA Forum) my reaction was to interpret it with an implicit "[I think that] We must be very clear: fraud in the service of effective altruism is unacceptable".

Things generally don't just become true because people assert them to be; surely people on LW know that. I think habryka's concern about the missing "I think" in the title is overblown. Dropping "I think" from the title is reasonable IMO to make the title more concise; I don't anticipate it degrading the culture of LW. I also don't see how it "bypasses epistemic defenses." If leaving an "I think" out of your title will worsen readers' epistemics, then those readers seem to be at great risk of getting terrible epistemics from seeing any news headlines.

I don't mean to say that there's not value in using more nuanced language, including "I think" and similar qualifications to be more precise with one's words, just that I think the karma/vote ratio your post received is an over-reaction to concern about posts of your style degrading the level one "Attempt to describe the world accurately" culture of LW.

comment by TekhneMakre · 2022-11-11T10:32:17.430Z · LW(p) · GW(p)

IDK where habryka is coming from, but to me the post is good, and the title is fine but gives a twinge from the words "We" and "must", and those words together. (The phrase "is unacceptable" is also implicitly speaking from a social-collective-objective perspective, if you know what I mean. Which is fine, but it contributes to the twinge.) Things that would, to me, decrease the twinge:

  • EAs should be....
  • EA must unambiguously not accept fraud...

That's a low-character-count way to be a bit more specific about who We is, to whom something Is Unacceptable. It's maybe not what you really mean, maybe you really mean something more complicated like "people who want to ambitiously do good in the world" or something, and you don't have a low-character way to say that, and "We" is aspirationally pointing at that. 

In the post you clarify

we--as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it

and say

Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don't support fraud done in the service of effective altruism.

Which is reasonable. The title though, by touching on the We, seems to me to "make it" a "decision that is the group's decision". 

comment by the gears to ascension (lahwran) · 2022-11-12T23:00:25.782Z · LW(p) · GW(p)

It sure is a call to action; your epistemic defenses had better be good enough to figure out that it is a good one, because it is, and it is correct to pressure you about it. The fact that you're uncertain about whether I am right does not mean that I am uncertain. It is perfectly alright to say you're not sure if I'm right. But being annoyed at people for saying you should probably come to this conclusion is not reasonable when that conclusion is simply, actually, objectively justified; instead, say you will have to think about it because you aren't sure you see the justification yet, or something like that, and remember that you don't get to exclude comments from affecting your reputation, ever. If there's a way you can phrase your request for courtesy about required updates that better clarifies that you are in fact willing to make updates that turn out to be important and critical moral-cooperation policy updates, then finding that phrasing may well be positive expected value for the outside world, because it would help people request moral updates of each other in ways that don't push too hard. But it is often correct to push. Do not expect people to make an exception because the phrasing was too much pressure.

comment by Ben Pace (Benito) · 2022-11-11T17:15:14.510Z · LW(p) · GW(p)

I think it's useful for that as well, but I think it's primarily a place to pursue truth and rationality ("the nameless virtue").

comment by MondSemmel · 2022-11-13T20:40:32.679Z · LW(p) · GW(p)

I think the main issue here is that Less Wrong is not Effective Altruism, and that many (at a guess, most) LW members are not affiliated with EA or don't consider themselves EAs. So from that perspective, while this post makes sense in the EA forum, it makes relatively little sense on LW, and to me looks roughly like being asked to endorse or disavow some politician X. (And if I extend the analogy, it's inevitably about a US politician even though I live in another country.)

So this specific EA forum post is just a poor fit for reposting on LW without a complete rewrite.

That said, the core sentiment (ethics; fraud is bad; the ends don't justify the means; etc.) obviously does have a place on LW, so there's probably a way to write a dispassionate current-events take on e.g. this post [LW · GW] from the Sequences.

comment by Ege Erdil (ege-erdil) · 2022-11-11T01:29:51.500Z · LW(p) · GW(p)

I wanted to thank you for writing this comment. While I have also been reasonably active on social media about this topic, and playing level 3+ games is sometimes necessary in the real world, I don't think this post actually offers any substantive content beyond "fraud is bad and FTX was involved in fraudulent activities".

I agree that it's not a good fit for LW, though I think the post does fit in the EA Forum given recent events.

comment by Viliam · 2022-11-12T12:24:44.696Z · LW(p) · GW(p)

Yep, I also agree on the object level, but if the proposal is "we should collectively communicate something to the public", then we should probably also get some feedback from people who have non-zero experience with communicating to the public. Not about the message, but about its form.

For example, when I see people saying things like "We must all collectively condemn X", I take it as evidence that many people support X... otherwise there would be no need to go hysterical, right? If it was just one person, you might simply say: "hey, John Doe is not one of us, do not listen to him if he speaks in our name".

So in situations like this, we need to avoid not just lying, but also telling the truth in a way that predictably leads people to an opposite conclusion. ("They said X. In this business, when someone says X, they actually mean Y. Therefore, Y.") Speaking for myself, I have no idea how to do it, because I have zero expertise in this area. When someone proposes a communication strategy, I would like to know what their expertise is.

Of course, speaking for themselves, anyone is free to say anything. But for speaking in the name of a community, it would be nice to know the rules for "speaking in the name of a community" before doing so. There are such things as protesting too much. There are such things as creating associations; you keep saying "X is not Y", and people remember "X is... uhm... somehow associated with Y".

comment by lc · 2022-11-11T01:14:29.018Z · LW(p) · GW(p)

Don't say "we must be very clear", please. Just say "Fraud in the service of effective altruism is unacceptable." You're not my boss.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2022-11-12T23:01:14.026Z · LW(p) · GW(p)

[humor] Don't say "don't say", please. You're not my boss.

I do think it is reasonable to request a rephrase, but I would also argue that it's actually objectively true that we must be very clear, and the frustration with the claim is understandable; saying it as a command might be slightly wrong decision theory, but it's only a little bit wrong. The commentary over at the Effective Altruism Forum gets into the meat of the pros and cons of responding to this situation in various ways.

I would even say this is a failed test case of attempting to take large amounts of control in order to make the world better, and a nice example of why pivotal acts are almost inherently unsafe and should not occur.

comment by Thane Ruthenis · 2022-11-10T23:51:43.282Z · LW(p) · GW(p)

I fully agree. This is exactly what I was talking about regarding thermonuclear ideas [LW · GW], or what Eliezer alluded to here [LW · GW]. And if that's indeed what happened, well, it's going exactly as badly as was predicted.

comment by Dagon · 2022-11-11T01:03:23.997Z · LW(p) · GW(p)

I should wait a bit before responding, as my devil's-advocate position is likely not popular, and I don't want to imply anything about SBF's motivation or morals when I have no knowledge of such. I will, at some point, argue that Utilitarianism (like many other Consequentialist belief systems) does support dishonesty in some situations in pursuit of better overall outcomes.

Replies from: MondSemmel
comment by MondSemmel · 2022-11-13T20:19:34.449Z · LW(p) · GW(p)

When you do, take note of this Yudkowsky post [LW · GW] from 2008, called "Ends Don't Justify Means (Among Humans)". And here's a related tweet.

comment by Robert Kennedy (istandleet) · 2022-11-11T03:22:51.069Z · LW(p) · GW(p)

I wish this post talked about object-level tradeoffs. It did that somewhat with the reference to the importance of "have a decision theory that makes it easier to be traded with". However, the opening was extremely strong and was not supported:

I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world. Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.

What level of funding would make fraud worth it?

Edit to expand: I do not believe the answer is infinite. I believe the answer is possibly less than the amount I understand FTX has contributed (assuming they honor their commitments, which they maybe can't). I think this post gestures at trading off sacred values, in a way that feels like it signals for applause, without actually examining the trade.

comment by Lukas_Gloor · 2022-11-12T21:06:51.538Z · LW(p) · GW(p)

Interesting to see the different reactions to this post here vs. the EA Forum.

comment by jacob_cannell · 2022-11-11T03:19:05.592Z · LW(p) · GW(p)

However, I think it is starting to look increasingly likely that, even if FTX's handling of its customer's money was not technically legally fraudulent, it seems likely to have been fraudulent in spirit.

Are you basing this accusation of fraud on that tweet (and related) from CZ?

If this was a mystery and I was the investigator, I'd focus on "follow the money" and ask "who has the most to gain?", and then notice that:

  1. FTX was pursuing a risky leveraged growth strategy and gaining on their main competitor: Binance/CZ
  2. FTX then became suddenly vulnerable during the crypto market crash due to said leverage
  3. CZ then sparked a rumor panic concerning FTX's liquidity/solvency
  4. CZ/Binance then came in as the white knight with an offer to save FTX (through a firesale)
  5. CZ/Binance then pulled out due to brief "corporate due diligence, as well as the latest news reports regarding mishandled customer funds and alleged US agency investigations", conveniently fulfilling the insolvency prophecy and destroying their main competitor

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2022-11-11T04:30:04.025Z · LW(p) · GW(p)

Check out this Twitter thread if you're not sure that FTX did something seriously illegal and/or unethical.

ETA: It was written by “former Head of Institutional Sales at @ftx_official” who still had access to FTX internal Slack until very recently. And you can check out his followers to verify that it’s not an impersonation.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-11-11T05:06:18.596Z · LW(p) · GW(p)

Yeah, I'm definitely not sure, but I also doubt that much of anything I read on Twitter will make me as sure about any of this as many others seem to be (although that thread is interesting). I have no stake in this saga; for all I know SBF is guilty of all those claims and more, but I just don't think the public info is very reliable in general, and especially right at this moment.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2022-11-11T05:31:06.317Z · LW(p) · GW(p)

What about this WSJ story?

Also, I suggest applying some Bayesianism: if FTX had not touched customer funds, wouldn't you expect to see FTX people defending against such accusations on Twitter and other media, e.g., replying to the tweet I linked and responding to journalists' requests for comments? I'm not seeing anything like that.
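
To spell out the shape of that update, here is a minimal sketch in Python with made-up illustrative probabilities; the specific numbers are assumptions chosen only to show the mechanics, not estimates given anywhere in this thread.

```python
# Odds-form Bayes update: why a conspicuous absence of public denials can count
# as evidence. All probabilities here are hypothetical, chosen only to show the
# mechanics of the argument.
prior_odds = 1.0               # 1:1 prior odds that customer funds were misused
p_silence_if_misused = 0.9     # if guilty, silence is likely (e.g. legal advice)
p_silence_if_clean = 0.2       # if innocent, some public pushback would be expected

likelihood_ratio = p_silence_if_misused / p_silence_if_clean
posterior_odds = prior_odds * likelihood_ratio

print(f"posterior odds of misuse: {posterior_odds:.1f} : 1")  # 4.5 : 1
# Silence shifts the odds toward misuse, though it is evidence, not proof.
```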

Replies from: ChristianKl
comment by ChristianKl · 2022-11-11T11:14:47.840Z · LW(p) · GW(p)

FTX might be at a point where the general strategy is "Don't say anything for legal reasons" even if there are arguments worth making in its defense.

comment by the gears to ascension (lahwran) · 2022-11-12T23:04:10.550Z · LW(p) · GW(p)

Y'all, reread your comments here and see if you can figure out why everyone is so worried that the AI safety people might be up to something nefarious. Your immune systems are hyperactive about the idea of being asked to do something because it is moral. Instead, simply delete the part of your interpretation that labels this post as unfairly demanding, and interpret it as a suggestion that we should be better at this sort of thing.

Replies from: Benito
comment by Ben Pace (Benito) · 2022-11-13T00:20:52.255Z · LW(p) · GW(p)

I am not being asked to do something because it is moral. I am being asked to do something because it is signaling. Evan is primarily telling me I'm obligated to do PR-control for EA, but that is something I do not actually care that much about and do not believe I am obligated to do, and that's why I strong-downvoted the post.

There is really no moral question here about fraud. From reading the post, it seems that Evan is not that uncertain about whether I or Zvi Mowshowitz or Eliezer Yudkowsky or really any LessWronger is in favor of fraud in pursuit of Effective Altruism. He seems to me fairly confident of that assumption and is following it up by explicitly asking us to do as much signaling as we can ("and I mean all of us, anyone who has any sort of a public platform"). It currently reads to me as though you have conflated 'being ethical' with 'signaling that you are ethical', and I think that's a pretty substantial mistake.

I expect I will always downvote posts that command everyone to do this sort of signaling, I think they have almost no place on LessWrong. This is not a healthy way of coordinating people around ethical norms.

Edit: First paragraph was a mistake.

Replies from: Raemon, evhub, lahwran
comment by Raemon · 2022-11-13T01:00:47.270Z · LW(p) · GW(p)

fwiw I think gears' comment is sort of directionally right.

I think there is something important that Oli and John were correct to defend on LessWrong: epistemic culture, preventing things from moving towards higher simulacra levels, etc. But the LessWrong-style way of doing things also feels kinda stunted at coordination.

There's sort of a package deal that (much of) society offers on how to do moral coordination (see: "Simulacrum 3 As Stag-Hunt Strategy [LW · GW]"), which has a lot of problems: epistemically, strategically, and morally. My sense is that LW-er types are often trying to roll their own coordination schemes, and that this will (hopefully) eventually result in something better and epistemically/lawfully grounded. But in the meantime it means there are a lot of obvious tools we don't have access to (including "interfacing morally/coordinationally with much of the rest of the world", which is one of the key points of, well, morality and coordination).

I endorse making the overall tradeoff, but it seems like it should come with more awareness that... like, we're making a tradeoff by having our memetic immune system trigger this hard. Not just uniformly choosing a better option. 

...

Followup note: there's a distinction between "what is right for LessWrong" and "what is right for the broader rationalsphere on EA Forum and Twitter and stuff." I think Oli had criticized both, separately. LessWrong is optimizing especially hard for epistemics/intellectual-progress as much as possible, and I think that's correct. It's less obvious to me whether it's bad that this post got 600 karma on EA Forum. In my dream world, the whole EAcosystem has better coordination and/or morality tech that doesn't route with Simulacrum 3 signaling games that are vulnerable to co-option. But I think we're still in an uncanny valley of coordination theory/practice and not sure what the right approach is for non-LW discourse, in the meantime.

I don't actually have a strong belief that the OP is good at the goal it's trying to accomplish. Just, the knee-jerk reaction to it feels like it has a missing mood to me.

Replies from: Benito
comment by Ben Pace (Benito) · 2022-11-13T05:23:01.927Z · LW(p) · GW(p)

(Upvote-disagree.)

comment by evhub · 2022-11-13T09:09:03.056Z · LW(p) · GW(p)

I am not being asked to do something because it is moral. I am being asked to do something because it is signaling. Evan is primarily telling me I'm obligated to do PR-control for EA, but that is something I do not actually care that much about and do not believe I am obligated to do, and that's why I strong-downvoted the post.

Seems like a pretty blatant misrepresentation of what I wrote. In justifying why I think you have an obligation to condemn fraud in the service of effective altruism, I say:

Assuming FTX's business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms.

That's pretty clearly a moral argument and not about PR at all.

Replies from: Benito, ChristianKl
comment by Ben Pace (Benito) · 2022-11-13T10:04:39.133Z · LW(p) · GW(p)

I think that's a mistake. Retracted. Will see if I can come back to this in the next day or two and clean up what I was saying a bit more.

comment by ChristianKl · 2022-11-13T09:27:10.006Z · LW(p) · GW(p)

You're making a virtue-ethics moral argument in a place that's dominated by utilitarian ethics.

comment by the gears to ascension (lahwran) · 2022-11-13T00:59:14.612Z · LW(p) · GW(p)

[Oh man, my underuse of punctuation makes this hard to read. Editing that now; sorry, it's hard with voice recognition to add enough punctuation.]

I think I actually have a different view of moral communication than you. When someone casts doubt on your personal commitment to honesty, by demonstrating that stances you have expressed yourself have led them to concerning conclusions, clarifying your stance warrants communicating clearly, by action and word, that you disagree with those concerning conclusions, in order to ensure that the word bindings of your own moral philosophy are actually connected by example to the behaviors you wish your moral philosophy to refer to in the territory. Map-words lose their meaning if not used in association with the territory they're intended for; this "signaling", as you refer to it, is the task of showing-not-telling that your moral philosophy means what it's intended to, after a major implementation of that moral philosophy turns out to have been corrupt.

To put it another way: someone has cast doubt on the algorithms "utilitarianism" and "EA", and so, to the degree you share those algorithms, you now need to check for bugs in your implementation of them. Someone published an incident of a CVE in utilitarianism getting exploited (by SBF); it's important, after an error, to actually run the exception handler. One of the steps of that exception handler should in fact be to figure out what you wish to demand of others as part of your moral coprotection-establishment communication process. Your insistence that it is not reasonable for others to have an edge of demand when they ask you to participate in clarifying the ground rules of morality is understandable because of the pressure that can imply, but although I understand why you hesitate, I expect you to figure out how to concisely say that you are not a pure ends-justify-the-means utilitarian.

For further discussion, I'd suggest reading the EA Forum post; it goes into this stuff in detail in the comments, and there are great discussions being had.

And to be clear, I do expect that there is some form of this moral self-description that you would already describe yourself as being bound to follow by honor and promise. I don't think that the allergic reaction folks have to someone saying "we must clarify our moral stance" is completely unwarranted; overconfident attempts to clarify a moral stance can in fact cause harm and themselves become risks to community security. But I think the level of allergic response here is too strong, and y'all should consider that the level of allergy to clarifying your viewpoint is in fact an example of a bad pattern in intelligent-being group social behavior. Verifying co-protection is hard, and it's understandable for your immune system to have reactions to others' immune systems, but it is important to figure out how to participate in the multi-agent immune system in ways that are grounded, predictive, honest, and accurate. In a significant sense, this sort of error checking is the multi-agent safety problem we need to solve for AGI.

Replies from: Benito
comment by Ben Pace (Benito) · 2022-11-13T09:09:48.578Z · LW(p) · GW(p)

You make some reasonable points. I think it would be quite good for the EA ecosystem to now take steps to (a) make sure it isn't possible in the future (nor has already happened elsewhere) that someone who will take unethical action to get money and respect can gain this much power and leadership in the community, and (b) make costly, public signals that it thinks fraud is immoral and unacceptable, so that future trade partners are able to trust the morality of the EA ecosystem. I think there are healthy ways of doing that; for the latter, I suspect some form of survey or signed letter would be a good step (e.g. "We believe the principles of Effective Altruism are inconsistent with defrauding people — signed by 10,000 people who subscribe to the principles of Effective Altruism"), and I think there are other more substantive ideas here too. This has also left me thinking about ideas for how to do the former thing in my own spaces, including various whistleblower setups.

I think the thing that you're most missing is that I have not been pushing on EA marketing and EA growth for many years, nor explicitly speaking on its behalf or as a representative of it, and a lot of what has been said by the marketing people has not reflected me or had my buy-in. For 4-5 years now this has increasingly been not my movement, especially in terms of growth and publicity. I still use a lot of the principles and respect some of the people involved, but the love is substantially gone. I specifically feel like the post is asking me to post on my social media (it talks about people with any 'public platform') and participate in propagating the 'collective beliefs' of the movement, as though I had previously been involved in stating the 'collective beliefs' in the direction of growth of EA on social media and in old media, or thought that it was a good idea, as opposed to thinking most of the public-facing marketing has been horrendous, costly, and net-negative. Over many years some other people went and tried to publicly say what 'we believe', which I found alienating and epistemically suspect, and now that a bunch of the moral respect has been burned, I'm being told I must come in and take responsibility for propagating more of what 'we' stand for in those communication channels, with communication tools I was against in the first place. Insofar as I had spoken in this way and lent my word to it, the post title would be far more reasonable, but I've almost entirely been against it and seen it done against my desires, and I have felt alienated. At this point I'm open to it, and might do so, if asked respectfully and not in a way that implies I was already bought in and not being a team player for not being bought in, but I am strongly resistant to the implication that I am obligated to show up to propagate a collective belief, because I do not endorse propagating these collective beliefs in general. I understand that it may look like EA is inconsistent and shameful from the outside, but I am not responsible for the inconsistency in its public messaging; I was against advertising collective beliefs and have not been doing so.

I think it's correct for me to personally shoulder some of the blame for the bad consequences of EA, which I have been in a good trade relationship with (organizing retreats, building software for it, etc.), and I'm still thinking about what to do about that.

I also agree it's a time for moral reflection.

I agree it's important for the EA ecosystem to send a strong signal that fraud is not permitted by EA principles. I think that if anyone wants me to participate in that signal, they ought to put in some hard work and find a route that does not sacrifice my epistemological principles in the doing of it, even if it is urgent to do so for the reputation of the ecosystem. The epistemology is just really not for the giving up. Yes, what Sam+Caroline did was probably horrendous (I am not 100% certain; more information may come to light). Yes, if so, the EA ecosystem must clearly send costly signals that it does not endorse this behavior in order to continue to be respected as a moral entity. But I'm not okay with that method being a post whose title isn't trying to inform, but is instead saying words because of the coordination effects it hopes to have on people, and is (at a pretty key point in time) trying to move speech acts from the truth toward signaling.

P.S. It's one o'clock in the morning. I will later regret not reflecting more on this comment before posting, but I will also regret not posting anything at all because I am busy all of tomorrow and won't be able to reply then either, and I'd like to respond to this promptly. Which is to say, I may later on realize I don't quite endorse something or other I said here.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2022-11-13T10:27:28.571Z · LW(p) · GW(p)

You make interesting points in return! And I have no strong disagreement with any of them. Certainly I do think that none of us are SBF, but we are at a lower graph distance than many, and had some path proximity to the choice of algorithm he may have used. Seems like we're near the same page here.

comment by Slider · 2022-11-12T19:32:21.596Z · LW(p) · GW(p)

Somebody actually thought at some point, "I am going to do crypto for effective altruism". I can understand wanting to have new and different kinds of tools. But having expectations of extracting money kind of requires treating the public adversarially.

In this day and age, are rationalist-adjacent, "especially good epistemics" parties really surprised that crypto is sus?

Please make all regular donors tell what kinds of things they are doing to provide the money. There don't seem to be strong norms or memes about it, so establishing some, or discovering similar "surprises", would make sense.

comment by ChristianKl · 2022-11-11T11:47:46.894Z · LW(p) · GW(p)

What exactly did FTX promise its customers? 

Normal banks do loan out their deposits, so just because they loaned out money that was deposited doesn't automatically mean that it's fraudulent. 

Furthermore, I will point out, if FTX did engage in fraud here, it was clearly in fact not a good idea in this case: I think the lasting consequences to EA—and the damage caused by FTX to all of their customers and employees—will likely outweigh the altruistic funding already provided by FTX to effective causes.

I don't think that argument would convince anyone who thinks about it, because he took a risk of such an outcome and did not expect this outcome to be the most likely one.

Bankman-Fried said in one of his interviews that he thinks that paying 50 utility to get 110 utility with 50% probability (and 0 utility with 50% probability) is something that one should always do. If every dollar had the same value, betting 50 billion with a 50% chance of getting 110 billion and a 50% chance of getting zero is something one should do, according to his view of the world. This was partly in the context of describing how he believes it's worthwhile to take risks as an entrepreneur.
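
For concreteness, here is a minimal sketch of the arithmetic behind that view, using only the numbers from the paragraph above; the risk-neutral framing and the log-utility comparison are illustrative assumptions, not a reconstruction of his actual reasoning.

```python
import math

def expected_value(lottery):
    """Expected payoff of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

keep = [(1.0, 50)]               # keep the 50 you already have
bet = [(0.5, 110), (0.5, 0)]     # the risky bet from the comment above

print(expected_value(keep), expected_value(bet))  # 50.0 vs 55.0
# A risk-neutral agent (every unit of value weighted equally) takes the bet.

# With diminishing returns, e.g. log utility of total wealth, the same bet
# looks much worse: the 50% chance of ending near zero dominates.
def expected_log_utility(lottery, floor=1e-9):
    return sum(p * math.log(max(x, floor)) for p, x in lottery)

print(expected_log_utility(keep) > expected_log_utility(bet))  # True
```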

He likely lost a risky bet he made. If what he was saying about likely outcomes was true, I think he said he thought there was a 5% likelihood of losing his fortune. In his (likely overconfident) view, what has happened now was something like a 5%-probability event. His model probably also had more than a 5% probability of his plan producing >100 billion for EA causes (and no customers losing money).

You can still argue that he shouldn't have made that bet, but arguing that he shouldn't have because this outcome is bad doesn't really address why he did what he did. 

Replies from: lc
comment by lc · 2022-11-11T11:49:07.803Z · LW(p) · GW(p)

They weren't a bank. They were an exchange.

Replies from: ChristianKl
comment by ChristianKl · 2022-11-11T11:53:44.407Z · LW(p) · GW(p)

Did they have a document where they described the promise they made about what they did?

I think how bad what they did happened to be depends a lot on the specific promises they made.

Replies from: kerkko-pelttari, lc, Evan R. Murphy
comment by Xylix (kerkko-pelttari) · 2022-11-11T13:04:59.051Z · LW(p) · GW(p)

Their terms of service had an explicit clause against treating customer funds "as belonging to FTX Trading". Unsure if it also applied to using customer funds as loan collateral, but that would make sense.

(B) None of the Digital Assets in your Account are the property of, or shall or may be loaned to, FTX Trading; FTX Trading does not represent or treat Digital Assets in User’s Accounts as belonging to FTX Trading. 

(Terms of service PDF available here https://help.ftx.com/hc/en-us/articles/360024788391-FTX-Terms-of-Service )

Replies from: ChristianKl
comment by ChristianKl · 2022-11-11T13:27:40.943Z · LW(p) · GW(p)

Okay, in that case it seems they clearly broke them.

comment by lc · 2022-11-11T12:06:56.016Z · LW(p) · GW(p)

That an exchange doesn't go play poker on the stock market with client money is so far beyond a thing needing to be promised. It's like asking whether or not a garage had "made a promise" to you that they wouldn't total your car in a drag race over the weekend. SBF secretly and without customer knowledge used their funds to finance trades, which is theft.

comment by Evan R. Murphy · 2022-11-12T02:59:16.059Z · LW(p) · GW(p)

In addition to the terms of service issue Kerkko mentioned in a neighbouring comment, Will MacAskill claims in this thread on Twitter there was a "(later deleted) tweet from Sam claiming customer deposits are never invested."

comment by Viliam · 2022-11-12T18:36:08.360Z · LW(p) · GW(p)

By the way, I am blissfully ignorant of this entire topic, so I can just offer a kind of outside view: The proper time to be skeptical is when people are making extraordinary claims. Freaking out after their plans fail is a bit too late.

Saying "fraud is bad", while technically true, is not very helpful if it turns out that the thing was a scam only after you have already invested "life savings and careers" in it. Think about what you need to do differently the next time, before newspapers start reporting that something has collapsed.

comment by Wei Dai (Wei_Dai) · 2022-11-11T04:43:30.548Z · LW(p) · GW(p)

It was written by "former Head of Institutional Sales at @ftx_official" who still had access to FTX internal Slack until very recently. And you can check out his followers to verify that it's not an impersonation.

Replies from: lc
comment by lc · 2022-11-11T04:44:28.064Z · LW(p) · GW(p)

I completely and totally missed that, and had deleted the comment before you finished replying to it. LW devs should probably fix that stealth-delete-without-trace bug.

comment by Dzoldzaya · 2022-11-11T00:05:38.807Z · LW(p) · GW(p)

I'm not sure if it's better or worse that the longtermist funder is the target of oncoming hate. I think it means that the 'all those nice EAs stopping malaria and giving what they can' narrative should remain positive, while the 'crazy tech billionaire giving his money to stop an AI apocalypse' narrative will be more negative, but quite distinct from the former.

If it were the other way around, and SBF had been the global poverty guy, that probably would have spawned an interesting societal reflection about consequentialism vs. deontology. As it is, though, longtermism/AI is too niche for most people to see it as altruistic rather than a techie pet project.