Noting an unsubstantiated communal belief about the FTX disaster

post by Yitz (yitz) · 2022-11-13T05:37:03.087Z · LW · GW · 52 comments

52 comments

Comments sorted by top scores.

comment by DragonGod · 2022-11-13T11:07:37.377Z · LW(p) · GW(p)

I don't buy this argument for a few reasons:

  • SBF met Will MacAskill in 2013, and it was following that discussion that SBF decided to earn to give.
    • EA wasn't a powerful or influential movement back in 2013, but rather a fringe cause.
  • SBF had been in EA since his college days, long before his career in quantitative finance and later in crypto.

 

SBF didn't latch onto EA after he acquired some measure of power or when EA was a force to be reckoned with, but pretty early on. He was in a sense "homegrown" within EA.

 

The "SBF was a sociopath using EA to launder his reputation" is just motivated credulity IMO. There is little evidence in favour of it. It's just something that sounds good to be true and absolves us of responsibility.

 

Astrid's hypothesis is not very credible when you consider that she doesn't seem to be aware of SBF's history within EA. Like, what's the angle here? There's nothing suggesting SBF planned to enter finance as a college student before MacAskill sold him on earning to give.

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2022-11-13T13:17:13.213Z · LW(p) · GW(p)

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him. This isn't very "EA" by the usual lights.

SBF seems to have successfully come across as a much friendlier and more trustworthy player than he actually is, in large part thanks to EA, and to a propensity within EA to be thankful for another large funder showing up.

Replies from: Ansel, Linch, ChristianKl, M. Y. Zuo
comment by Ansel · 2022-11-13T16:41:28.011Z · LW(p) · GW(p)

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him. This isn't very "EA" by the usual lights.

 

It's not immediately clear to me that this isn't a No True Scotsman fallacy.

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2022-11-14T04:53:25.246Z · LW(p) · GW(p)

You may draw what conclusions you like! It's not my intention to defend EA here.

Here's an attempt to clarify my outlook, though my words might not succeed:

To the extent EA builds up idealized molds to shove people into to extract value from them, this is fucked up. To the extent that EA then pretends people like Sam or others in power fit the same mold, this is extra fucked up. Both these things look to me to be rampant in EA. I don't like it.

Replies from: Ansel
comment by Ansel · 2022-11-14T15:37:12.427Z · LW(p) · GW(p)

That does clarify where you're coming from. I made my comment because it seems to me that it would be a shame for people to fall into one of the more obvious attractors for reasoning within EA about the SBF situation.
E.g., an attractor labelled something like "SBF's actions were not part of EA because EA doesn't do those Bad Things".

Which is basically on the greatest hits list of how (not necessarily centrally unified) groups of humans have defended themselves from losing cohesion over the actions of a subset, anytime in recorded history. Some portion of the reasoning on SBF in the past week looks motivated in service of the above.

The following isn't really pointed at you, just my thoughts on the situation.

I think that there's nearly unavoidable tension in trying to float arguments that deal with the optics of SBF's connection to EA, from within EA. Which is a thing that is explicitly happening in this thread. Standards of epistemic honesty are in conflict with the group's need to hold together. While the truth of the matter is and may remain uncertain, if SBF's fraud was motivated wholly or in part by EA principles, that connection should be taken seriously.

 

My personal opinion is that, the more I think about it, the more obvious it seems that several cultural features of LW-adjacent EA are really ideal for generating extremist behavior. People are forming consensus thought groups around moral calculations that explicitly marginalize the value of all living people, to say nothing of the extreme side of negative consequentialism. This is all in an overall environment of iconoclasm and of disregarding established norms in favor of taking new ideas to their logical conclusion.
 
These are being held in an equilibrium by stabilizing norms. At the risk of stating the obvious, insofar as the group in question is a group at all, it is heterogeneous; the cultural features I'm talking about are also some of the unique positive values of EA. But these memes have sharp edges.

comment by Linch · 2022-11-14T18:14:19.017Z · LW(p) · GW(p)

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him.

Woah, I did not hear about this despite trying nontrivially hard to figure out what happened when I was considering whether to take a job there in mid-to-late 2019 (and I also did not hear about it afterwards). I think I would've made pretty different decisions, both then and afterwards, if I'd had the correct impression.

Specifically, I knew about the management team leaving in early 2018 (and I guess the "fucked over" framing was within my distribution, but I didn't know the details). I did not in any way know about the investors being fucked over.

comment by ChristianKl · 2022-11-13T14:59:46.371Z · LW(p) · GW(p)

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him.

Link?

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2022-11-14T03:41:44.344Z · LW(p) · GW(p)

I'm drawing on multiple (edit: 3-5) accounts from people I know who were involved at the time, and chose to leave. I don't think much is written up yet, and I hope that changes soon.

comment by M. Y. Zuo · 2022-11-14T22:49:45.610Z · LW(p) · GW(p)

If true, it definitely makes him seem like an unpleasant character on the inside.

In any case, the folks in EA leadership really should have done some more due diligence before getting enmeshed. The management team leaving in 2018 should already have been a really strong signal, and ignoring it is the sign of amateurs.

comment by lc · 2022-11-13T13:41:18.165Z · LW(p) · GW(p)

The default hypothesis should be that, while his EA ambitions may have been real, SBF's impetus to steal from his users had little to nothing to do with EA and everything to do with him and his close associates retaining their status as rich successful startup founders. Sam & crew were clearly enjoying immense prestige derived from their fame and fortune, even if none of them owned a yacht. When people in that position prop Alameda up with billions of dollars of user funds rather than give up those privileges, I think the reasonable assumption is that they're doing it to protect that status, not save the lightcone. I find it highly odd that no one has mentioned this as a plausible explanation.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2022-11-13T15:38:36.172Z · LW(p) · GW(p)

I'm not sure why that should be the default hypothesis. Do you have specific information about them in particular or is that based on general psychology? Power corrupts is a common saying but how strong is the effect really? I'd like to see more evidence of that.

Replies from: yair-halberstadt
comment by Yair Halberstadt (yair-halberstadt) · 2022-11-13T16:17:17.252Z · LW(p) · GW(p)

When someone in a position where they stand to lose a lot commits fraud to stop that happening, the default assumption is they did it to save their own skin, not for any higher motives. Or never ascribe to ideals what can be ascribed to selfishness.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2022-11-13T16:59:01.345Z · LW(p) · GW(p)

It depends on when the "stealing" began. I haven't followed the thing closely enough to know. Banks reinvest funds too - it's just more regulated.

comment by Lukas_Gloor · 2022-11-13T10:31:07.148Z · LW(p) · GW(p)

Sam has engaged with EA ideas early on and shown a deep understanding and even obsession with them long before it would have given him massive benefits to associate with EA. So, I think your point is almost certainly false, but it could've been true in a similar situation, and that's really important to be aware of. 

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2022-11-13T11:05:20.427Z · LW(p) · GW(p)

I don't think this changes anything. It's still possible for someone with EA motivations to have dark triad traits, so I wouldn't say "he was motivated by EA principles" implies that the same thing could've happened to almost anyone with EA principles. (What probably could've happened to more EAs is being complicit in the inner circle as lieutenants.)

"Feeling good about being a hero" is a motivation that people with dark triad traits can have just like anyone else. (The same goes for being deeply interested and obsessed with certain intellectual pursuits, like moral philosophy or applying utilitarianism to your life.) Let's assume someone has a dark triad personality. I model people like that as the same as a more neurotypical person except that they: 

  • Feel about 99.9-100% of people the way I feel about people I find annoying and unsympathetic.
  • Don't have any system-1 fear of bad consequences. Don't have any worries related to things like guilt or shame (or maybe they do have issues around shame, but it expresses itself more in externalizing negative emotions like jealousy and spite).
  • Find it uncannily easy to move on from close relationships or to switch empathy on and off at will as circumstances change regarding what's advantageous for them (if they ever form closer connections in the first place).

There are more factors that are different, but with some of the factors you wonder if they're just consequences of the above. For instance, being power-hungry: if you can't find meaning in close relationships, what else is there to do? Or habitual lying: if you find nearly everyone unsympathetic and annoying and you don't experience the emotion of guilt, you probably find it easier (and more pleasant) to lie.

In short, I think people with dark triad traits lack a bunch of prosocial system-1 stuff, but they can totally aim to pursue system-2 goals like "wanting to be a hero" like anyone else. 

(Maybe this is obvious, but sometimes I hear people say "I can't imagine that he isn't serious about EA" as though it makes other things about someone's character impossible, which is not true.) 

Replies from: interstice, lc
comment by interstice · 2022-11-13T16:23:35.279Z · LW(p) · GW(p)

SBF had sociopathic personality traits and was clearly motivated by EA principles. If you look at people who commit heinous acts in the name of just about any ideology, they will likely have sociopathic personality traits, but some ideologies can make it easier to justify taking sociopathic actions (and acquire resources/followers to do so).

comment by lc · 2022-11-13T13:22:48.008Z · LW(p) · GW(p)

Who are you replying to?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2022-11-13T13:25:08.205Z · LW(p) · GW(p)

Double-posted as an afterthought and kept the comments separate because they say separate things (so people can vote on them separately).

The type of view "I don't think this changes anything" in the second comment is proactively replying to is this one: 

(Maybe this is obvious, but sometimes I hear people say "I can't imagine that he isn't serious about EA" as though it makes other things about someone's character impossible, which is not true.) 

comment by Rafael Harth (sil-ver) · 2022-11-13T07:38:01.801Z · LW(p) · GW(p)

I'd like to submit SBF being vegan as strong Bayesian Evidence that this narrative is, in fact, entirely correct. (Source: Wikipedia.)

For me, having listened to the guy talk is even stronger evidence since I think I'd notice it if he was lying, but that's obviously not verifiable.

Replies from: yitz, lc, tailcalled
comment by Yitz (yitz) · 2022-11-13T07:59:40.326Z · LW(p) · GW(p)

For me, having listened to the guy talk is even stronger evidence since I think I'd notice it if he was lying, but that's obviously not verifiable.

Going to quote from Astrid Wilde here (original source linked in post):

i felt this way about someone once too. in 2015 that person kidnapped me, trafficked me, and blackmailed me out of my life savings at the time of ~$45,000. i spent the next 3 years homeless.

sociopathic charisma is something i never would have believed in if i hadn't experienced it first hand. but there really are people out there who spend their entire lives honing their social intelligence to gain wealth, power, and status.

most of them just don't have enough smart but naive people around them to fake competency and reputation launder at scale. EA was the perfect political philosophy and community for this to scale....

I would really very strongly recommend not updating on an intuitive feeling of "I can trust this guy," considering that in the counterfactual case (where you could not in fact, trust the guy), you would be equally likely to have that exact feeling!

As for SBF being vegan as evidence, see my reply to you on the EA forum. [EA(p) · GW(p)]

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2022-11-13T09:25:32.531Z · LW(p) · GW(p)

I would really very strongly recommend not updating on an intuitive feeling of "I can trust this guy," considering that in the counterfactual case (where you could not in fact, trust the guy), you would be equally likely to have that exact feeling!

Fictional support:

Romana: You mean you didn't believe his story?

The Doctor: No.

Romana: But he had such an honest face.

The Doctor: Romana, you can't be a successful crook with a dishonest face, can you?

Doctor Who [LW(p) · GW(p)]

comment by lc · 2022-11-13T08:16:57.171Z · LW(p) · GW(p)

How do you know he is vegan? A sociopath would have no problem eating vegan in public and privately eating meat in order to keep a narrative.

Replies from: Gunnar_Zarncke, sil-ver
comment by Gunnar_Zarncke · 2022-11-13T15:46:17.317Z · LW(p) · GW(p)

Early EA was not a productive environment for sociopaths or conmen. I don't buy that story. Faking veganism, for example, would be hard to sustain over such a long time for low expected reward. I think a more plausible story is that he changed. Many people change over time, especially if their peer group changes or if they acquire power.

comment by Rafael Harth (sil-ver) · 2022-11-13T08:27:33.082Z · LW(p) · GW(p)

Possible, but adds additional complexity to the competing explanation.

Replies from: tailcalled
comment by tailcalled · 2022-11-13T11:42:05.292Z · LW(p) · GW(p)

I don't think "He was pretending to be vegan" adds any more complexity to the "He was a conman" explanation than "He was genuinely a vegan" adds to the "He was a naive/cartoon-villain utilitarian" explanation?

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2022-11-14T11:37:22.638Z · LW(p) · GW(p)

Huh, didn't expect the different intuitions here (yay disagreement voting!). I do think pretending to be vegan adds substantial complexity; making such a big lifestyle adjustment for questionable benefit is implausible in my model. But I may just not have a good theory of mind for "sociopaths", as lc puts it.

Replies from: tailcalled
comment by tailcalled · 2022-11-14T11:47:51.833Z · LW(p) · GW(p)

I do agree that it adds complexity. But so does "He was actually a vegan". Of course the "He was actually a vegan" complexity is paid for in evidence of him endorsing veganism and never being seen eating meat. But this evidence also pays for the complexity of adding "He was pretending to be a vegan" to the "He was thoroughly a conman" hypothesis.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2022-11-14T15:17:30.784Z · LW(p) · GW(p)

But so does "He was actually a vegan"

But not by a lot, I think, since highly idealistic people tend to be vegan.

Replies from: tailcalled
comment by tailcalled · 2022-11-14T15:23:38.417Z · LW(p) · GW(p)

But didn't he project a highly idealistic image in general? Committing to donating to charity, giving off a luxury-avoiding vibe, etc. This gives evidence to narrow the conman hypothesis down from common conmen to conmen who pretend to be highly idealistic. And I'm not sure P(vegan|highly idealistic) exceeds P(claims to be vegan|conman who pretends to be highly idealistic).
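
For what it's worth, here is a minimal sketch of that likelihood-ratio point in Python, with made-up placeholder probabilities (not numbers anyone in this thread has endorsed); it only shows that the vegan evidence is as strong or as weak as the gap between the two conditional probabilities:

```python
# Toy Bayes-factor calculation for the "he was vegan" evidence.
# All numbers below are made-up placeholders.

prior_odds = 1.0  # genuine idealist : conman faking idealism, before the vegan evidence

p_vegan_given_idealist = 0.30       # placeholder: P(vegan | genuinely highly idealistic)
p_vegan_given_faking_conman = 0.25  # placeholder: P(claims vegan | conman faking idealism)

likelihood_ratio = p_vegan_given_idealist / p_vegan_given_faking_conman
posterior_odds = prior_odds * likelihood_ratio

print(likelihood_ratio, posterior_odds)  # ~1.2 either way: weak evidence when the two
                                         # conditional probabilities are this close
```

If genuine idealists are much more likely to be vegan than image-conscious conmen are to fake it, the ratio gets large and the evidence is strong; if the two probabilities are close, it barely moves the needle.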

comment by tailcalled · 2022-11-13T11:20:47.855Z · LW(p) · GW(p)

I once saw a picture on Twitter claiming to disprove his being vegan, by showing him standing in front of his fridge with eggs visible in the background. The veganism might be a lie.

Edit: here it is: https://twitter.com/SilverBulletBTC/status/1591403692246589444/photo/1

Replies from: mr-hire, ChristianKl, yitz
comment by Matt Goldenberg (mr-hire) · 2022-11-13T13:48:17.928Z · LW(p) · GW(p)

He lives in an apartment with multiple roommates. Pretty obvious explanation when there are multiple different egg cartons and JUST egg in there.

Replies from: yitz
comment by Yitz (yitz) · 2022-11-13T20:17:32.808Z · LW(p) · GW(p)

oh that makes sense lol

comment by ChristianKl · 2022-11-13T23:28:09.917Z · LW(p) · GW(p)

In the 80,000 Hours interview, Wiblin asks him about the vegan leafleting he did at university. That's more commitment to veganism than the average vegan shows.

comment by Yitz (yitz) · 2022-11-13T11:32:35.248Z · LW(p) · GW(p)

I'm pretty sure the tweet I saw was something similar to this. Would be happy to have this disproven as a hoax or something of course...

comment by prudence · 2022-11-13T10:21:53.564Z · LW(p) · GW(p)

Thank you for writing this much-needed piece. EA can be quick to self-flagellation under the best of circumstances. And this is not the best of circumstances.

comment by trevor (TrevorWiesinger) · 2022-11-13T13:40:25.894Z · LW(p) · GW(p)

It seems like we're all getting distracted from the main point here. It doesn't even matter whether SBF did it, let alone why. What matters is what this says about the kind of world we have been living in for the last 20 years, and, now, for the last 7 days:

I strongly suspect[4] [LW(p) · GW(p)] that in ten years from now, conventional wisdom will hold the above belief as being basically canon, regardless of further evidence in either direction. This is because it presents an intrinsically interesting, almost Hollywood villain-esque narrative, one that will surely evoke endless "hot takes" which journalists, bloggers, etc. will have a hard time passing over. Expect this to become the default understanding of what happened (from outsiders at least), and prepare accordingly.

The fact that Lesswrong is vulnerable to this, let alone EA, is deeply disturbing. Smart people are supposed to automatically coordinate around this sort of thing, because that's what agents do, and that's not what's happening right now. This is basically a Quirrell moment in real life; a massive proportion of people on LW are deferring their entire worldview to obvious supervillains.

Replies from: AspiringRationalist, lc
comment by NoSignalNoNoise (AspiringRationalist) · 2022-11-13T17:15:33.919Z · LW(p) · GW(p)

This is basically a Quirrell moment in real life; a massive proportion of people on LW are deferring their entire worldview to obvious supervillains.

Who are the obvious supervillains that they're deferring their entire worldview to? And who's deferring to them?

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2022-11-14T05:37:14.791Z · LW(p) · GW(p)

This comment had negative karma when I looked at it. I don't think we as a community should be punishing asking honest questions, so I strong-upvoted this comment.

comment by lc · 2022-11-13T13:59:54.957Z · LW(p) · GW(p)

He's not saying LessWrong is vulnerable to it; he's saying it's just what people outside of LessWrong are going to believe. He's mentioning it explicitly so that it doesn't necessarily get taken at face value.

Replies from: yitz
comment by Yitz (yitz) · 2022-11-13T16:19:34.138Z · LW(p) · GW(p)

You are correct in that I was not explicitly saying that LessWrong is vulnerable to this (except for the fact that this assumption hasn't really been pushed back on until nowish), but to be honest I do expect some percentage of LessWrong folks to end up believing this regardless of evidence. That's not really a critique against the community as a whole though, because in any group, no matter how forward-thinking, you'll find people who don't adjust much based on evidence contrary to their beliefs.

comment by ChristianKl · 2022-11-13T15:07:00.520Z · LW(p) · GW(p)

Sam Bankman-Fried did what he did primarily for the sake of "Effective Altruism," as he understood it. Even though from a purely utilitarian perspective his actions were negative in expectation, he justified the fraud to himself because it was "for the greater good." As such, poor messaging on our part[2] [LW(p) · GW(p)] may be partially at fault for his downfall.

Without knowing his calculation, it's hard to know whether his actions were negative or positive in expectation given his values.

If you believe that each future person is as valuable as each present person and there will be 10^100 people in the future lightcone, the amount of people that were hurt by FTX blowing up is a rounding error.

In his 80,000 Hours interview, Sam Bankman-Fried talks about how he thinks taking a high-risk, high-upside approach is very valuable. Alameda investing billions of dollars of FTX customers' money is a high-upside bet.

Being certain at this point that his actions were negative in expectation looks to me like highly motivated reasoning by people who don't like to look at the ethics underlying effective altruism. They are neither willing to say that maybe Sam Bankman-Fried did things right nor willing to criticize the underlying ethical assumptions.

His 80,000 Hours interview suggests that he thought the chance of FTX blowing up was something between 1% and 10%. There he gives 50% odds of making more than 50 billion dollars that can be donated to EA causes.

If someone is saying that his action was negative in expectation, do they mean that Sam Bankman-Fried lied about his expectations? Do they mean that a 10% chance of this happening should have been enough to tilt the expectation negative under the ethical assumptions of longtermism, which locate most of the utility that will ever be produced in the far future? Are you saying something else?
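
To make the disagreement concrete, here is a minimal sketch of that expected-value comparison in Python, using purely made-up placeholder probabilities and payoffs (these are not SBF's stated numbers and not anyone's endorsed model); the point is only that the sign of the answer hinges on how heavily the blowup branch is penalized:

```python
# Toy expected-value comparison. All probabilities and payoffs below are
# made-up placeholders, not anyone's actual estimates.

def expected_value(outcomes):
    """Sum of probability * value over mutually exclusive outcomes."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

# "Only count the donations" framing: 50% chance of a huge win (+50 units),
# 10% chance of a blowup counted as merely zero, 40% chance of a modest outcome (+5).
naive = expected_value([(0.5, 50.0), (0.1, 0.0), (0.4, 5.0)])

# Framing that charges the blowup branch for indirect harms
# (reputational damage, lost future donors, etc.), again with a made-up weight.
with_harms = expected_value([(0.5, 50.0), (0.1, -200.0), (0.4, 5.0)])

print(naive, with_harms)  # 27.0 vs 7.0; push the harm value below -270 and it turns negative
```

Whether a 10% blowup chance "should have been enough" is, on this framing, entirely a question of what value you assign to that branch.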

Replies from: AspiringRationalist, Henry Prowbell
comment by NoSignalNoNoise (AspiringRationalist) · 2022-11-13T17:27:38.531Z · LW(p) · GW(p)

His 80,000 Hours interview suggests that he thought the chance of FTX blowing up was something between 1% and 10%. There he gives 50% odds of making more than 50 billion dollars that can be donated to EA causes.

If someone is saying that his action was negative in expectation, do they mean that Sam Bankman-Fried lied about his expectations? Do they mean that a 10% chance of this happening should have been enough to tilt the expectation negative under the ethical assumptions of longtermism, which locate most of the utility that will ever be produced in the far future? Are you saying something else?

I wish I had any sort of trustworthy stats about the success rate of things in the reference class of "steal from one pool of money in order to cover up losses in another pool of money, in the hope of making (and winning) big bets with the second pool of money to eventually make the first pool of money whole." I would expect the success rate to be very low (I would be extremely surprised if it were as high as 10%, somewhat surprised if it were as high as 1%), but it's also the sort of thing where, if you do it successfully, probably nobody finds out.

Do Ponzi schemes ever become solvent again? What about insolvent businesses that are hiding their insolvency?
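
For intuition (not the trustworthy stats being asked for), here is a toy gambler's-ruin simulation in Python with made-up parameters; it only illustrates why one might expect the success rate of "gamble the hole closed before anyone notices" schemes to be low once the bets have a negative edge and the hole cannot be allowed to grow much further:

```python
import random

# Toy gambler's-ruin simulation, with made-up parameters: an actor tries to claw
# back a hole of 20 units via repeated unit bets that win with probability 0.45,
# and is "caught" if the hole ever grows to 50 units before it is filled.
def attempt(hole=20, p_win=0.45, detection=50):
    while 0 < hole < detection:
        hole += -1 if random.random() < p_win else 1
    return hole == 0  # True = made whole before detection

random.seed(0)
trials = 20_000
success_rate = sum(attempt() for _ in range(trials)) / trials
print(success_rate)  # roughly 0.02 with these placeholder numbers (~1.8% analytically)
```

And, as noted above, the successful attempts are mostly the ones nobody ever finds out about, so any real-world stats would tend to understate the true success rate.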

Replies from: ChristianKl
comment by ChristianKl · 2022-11-13T18:59:08.411Z · LW(p) · GW(p)

Zombie banks would be one type of organization in that reference class. 

comment by Henry Prowbell · 2022-11-14T20:51:49.079Z · LW(p) · GW(p)

If you believe that each future person is as valuable as each present person and there will be 10^100 people in the future lightcone, the amount of people that were hurt by FTX blowing up is a rounding error.

 

But you have to count the effect of the indirect harms on the future lightcone too. There's a longtermist argument that SBF's (alleged and currently very likely) crimes plausibly did more harm than all the wars and pandemics in history if...

  • Governments are now 10% less likely to cooperate with EAs on AI safety
  • The next 2 EA mega-donors decide to pass on EA
  • (Had he not been caught:) The EA movement drifted towards fraud and corruption
  • etc.
Replies from: ChristianKl
comment by ChristianKl · 2022-11-14T22:54:48.421Z · LW(p) · GW(p)

You are however only counting one side here. SBF appearing successful was a motivating example for others to start projects that would have made them mega-donors.

Governments are now 10% less likely to cooperate with EAs on AI safety

I don't think that's likely to be the case. 

The next 2 EA mega-donors decide to pass on EA

It's unclear here what "pass on EA" means. Zvi wrote about the Survival and Flourishing Fund not being an EA fund.

How to model all the related factors is complicated. Saying that you can easily know the right answer to whether the effects are negative or positive in expectation, without running any numbers, seems unjustified to me.

Replies from: Henry Prowbell
comment by Henry Prowbell · 2022-11-15T11:22:16.579Z · LW(p) · GW(p)

You are however only counting one side here

 

In that comment I was only offering plausible counter-arguments to "the amount of people that were hurt by FTX blowing up is a rounding error."

How to model all the related factors is complicated. Saying that you can easily know the right answer to whether the effects are negative or positive in expectation, without running any numbers, seems unjustified to me.

I think we basically agree here.

I'm in favour of more complicated models that include more indirect effects, not less.

Maybe the difference is: I think in the long run (over decades, including the actions of many EAs as influential as SBF) an EA movement that has strong norms against lying, corruption and fraud actually ends up more likely to save the world, even if it gets less funding in the short term. 

The fact that I can't predict and quantify ahead of time all the possible harms that result from fraud doesn't convince me that those concerns are unjustified.

We might be living in a world where SBF stealing money and giving $50B to longtermist causes very quickly really is our best shot at preventing AI disaster, but I doubt it. 

Apart from anything else I don't think money is necessarily the most important bottleneck.

Replies from: ChristianKl
comment by ChristianKl · 2022-11-15T16:36:11.046Z · LW(p) · GW(p)

We already have an EA movement where the leading organization has no problem editing out elements of a picture it publishes on its website because of possible PR risks. While you can argue that it's not literally lying, it comes very close and suggests the kind of environment that does not have the strong norms that would be desirable.

I don't think FTX/Alameda doing this in secret strongly damaged general norms against lying, corruption, and fraud.

Them blowing up like this is actually a chance to move toward those norms. It's a chance to actually look into ethics in a different way, to make it clearer that being honest and transparent is good.

Saying "poor messaging on our part" which resulted in "actions were negative in expectation in a purely utilitarian perspective" is a way to avoid having the actual conversation about the ethical norms that might produce change toward stronger norms for truth. 

comment by Shmi (shminux) · 2022-11-13T07:09:19.358Z · LW(p) · GW(p)

I am still super confused why there was apparently no due diligence by the EA leadership, assuming there is such a thing. At least Enron and MtGoX had no one to oversee them. Are they just that gullible? (Also see my question [LW · GW] about most places not having a single person responsible for risk assessment and mitigation.) I would assume that rationality spinoffs would pay attention to Bayes and probabilities.

Replies from: AspiringRationalist, ChristianKl, yitz
comment by NoSignalNoNoise (AspiringRationalist) · 2022-11-13T17:19:59.356Z · LW(p) · GW(p)

I think approximately no one audits people's books before accepting money from them. It's one thing to refuse to accept money from a known criminal (or other type of undesirable), but if you insist that the people giving you money prove that they obtained it honestly, then they'll simply give that money to someone else instead.

comment by ChristianKl · 2022-11-13T14:22:23.182Z · LW(p) · GW(p)

I am still super confused why there was apparently no due diligence by the EA leadership, assuming there is such a thing. At least Enron and MtGoX had no one to oversee them.

Enron was overseen by Arthur Andersen and audited by them.

EA leadership did not have a good way to audit FTX to find out that they had loaned user funds to Alameda.

Arthur Andersen, a team of professional analysts who actually had access to the books, seems to me a lot more guilty of failed oversight than people at CEA or other EA orgs.

comment by Yitz (yitz) · 2022-11-13T07:31:14.449Z · LW(p) · GW(p)

One would think—unfortunately, we humans are really bad at judging our own ability to judge the trustworthiness of other people, even when we know about said bias. Many people who hire a friend or trusted community leader to do a high-stakes job won't even bother with an NDA, let alone do any formal investigation into their honesty! Hopefully this will serve as a lesson that won't have to be repeated...

comment by the gears to ascension (lahwran) · 2022-11-13T05:51:39.380Z · LW(p) · GW(p)

it is a problem with the algorithms that implemented the attention. it's not the messaging, but rather the interaction patterns, that embed the mistake that both encouraged trusting him and encouraged him to see it as a good place to be trusted. he did actually donate a bunch of money to altruistic causes while fucking up the ev calculation; he may have been fooling himself, but it is usually the case (correlation) that the behaviors one sees in an environment are the behaviors the environment causes, even if you're wrong about which part of the environment is doing the causing. because correlation isn't inherently causation, this heuristic does sometimes fail; it's more reliable than most correlations-being-causations because environments do have a lot of influence over possibility.

if the true path was that he manipulated EAs, then that's an error EA needs to repair and publicly communicate, by nature of being introspectable by other human beings; if instead it was that EA actually encouraged this de novo rather than being infectable by it, then that is slightly worse, but it ultimately still has a solution that looks like figuring out how to build immunity so such misbehavior can be reliably trusted not to happen again. building error-behavior immunity is a difficult task, especially because it can cause erroneous immune matches if people blame the wrong part of the misbehavior.

the alignment problem was always about inter-agent behavior.