Book Review: Going Infinite

post by Zvi · 2023-10-24T15:00:02.251Z

Contents

  Who Was This Guy?
  Where Was This Guy?
  Who Was This Guy as a Kid?
  Why Was That Guy So Misaligned?
  Will All of This Happen Again?
  Behold the Power of Yup
  Jane Street Capital
  Soiling the Good Name of Alameda County
  How Any of This Worked
  Building FTX
  The Sam Show
  A Few Good Trades
  The Plan
  The Tragedy of FTT
  The Vanishing
  The Reckoning
  John Ray, the Hero We Need
  Caroline Ellison
  New and Old EA Cause Areas
  Won’t Get Fooled Again
  Conclusion

Previously: Sadly, FTX

I doubted whether it would be a good use of time to read Michael Lewis’s new book Going Infinite about Sam Bankman-Fried (hereafter SBF or Sam). What would I learn that I did not already know? Was Michael Lewis so far in the tank for SBF that the book was filled with nonsense and not to be trusted?

I set up a prediction market, which somehow attracted over a hundred traders. Opinions were mixed. That, combined with Matt Levine clearly reporting having fun, felt good enough to give the book a try.

I need not have worried.

Going Infinite is awesome. I would have been happy with my decision on the basis of any one of the following:

The details I learned or clarified about the psychology of SBF in particular.

The details I learned or clarified about the psychology of Effective Altruism.

The details about all the crimes and other things that happened.

The sheer joy of reading, because man can Michael Lewis write.

I also get to write this post, an attempt to quickly share what I’ve extracted, including some of the sheer joy. We need more joy, now more than ever.

There are three problems with Going Infinite.

Michael Lewis fails to put two and two together regarding: Who is this guy?

Michael Lewis fails to figure out that obviously this man was constantly lying and also did all of the crimes.

Michael Lewis omits or fails to notice key facts and considerations.

I do think all of these are genuine mistakes. He (still) is in the tank because character is fate and we are who we choose to be. Michael Lewis roots for the wicked smart, impossibly hard-working, deeply obsessed protagonist who takes on the system, insists everyone else is an idiot, and has unique insight into the world that will let him change it. It all makes too much sense, far too much for him to check.

What Michael Lewis is not is for sale. Or at least, not for cheap. I do not think anyone paid him. Like all worthy protagonists, including those he looks to cover, Michael Lewis has a code. In this case, the code did him wrong. It happens.

Then at the trial it turns out, among many other things, your hero selected from seven balance sheet variations, and he gave his hedge fund the faster trade execution he kept swearing he didn’t give them, and the insurance fund he kept talking about was, in its entirety, a literal call to a random number generator.

Let’s have some fun, rant a bunch, and also explain it all. I’d like to solve the puzzle.

While also pointing out the puzzles that remain unsolved.

[Note: Unattributed quotes here are from the book. The number refers to the Kindle location of the quote. The ability to easily do this is why I read such books on Kindle.]

Who Was This Guy?

By the end of this walk I was totally sold. I called my friend and said something like: Go for it! Swap shares with Sam Bankman-Fried! Do whatever he wants to do! What could possibly go wrong? It was only later that I realized I hadn’t even begun to answer his original question: Who was this guy? (70)

That’s the central mystery of the book. It’s not the money. It’s SBF. Who was this guy?

The book solves this mystery, despite Lewis not noticing he has done so.

This is a very raw-G ‘smart’ person, who manufactured an entirely artificial superficial charm, has grandiose self-worth, pathologically lies, is endlessly manipulative, lacks remorse or guilt, has extreme emotional shallowness, fails to accept responsibility for anything ever, needs stimulation constantly to the point of constantly fidgeting, never sleeping and playing video games during television appearances, is constantly impulsive and irritable and irresponsible, has goals like going infinite and mostly does things without any plan or vision at all, did all the crimes and had his bail revoked at the first opportunity, although Lewis seems to be in denial about the crimes and the bail got revoked after the book’s events.

There are two other things on the list I’m drawing from there, but I think we get the point? This should not be a hard type to recognize once we have everything in front of us. It also is presumably not an especially new personality type to the author of The Big Short, Flash Boys and Liar’s Poker. I mean, come on.

Nor is this the type of person you could put in charge of a crypto exchange and then seriously entertain the possibility that he was not committing fraud. There would not even be a distinction in their head between ‘fraud’ and ‘not fraud,’ between ‘I tell truth’ and ‘I tell lie’ or between ‘customer money’ and ‘money.’

To them, there are only actions and (some of their) consequences. If the customer asks for their money and you don’t have it, or people find out you don’t have the money, or that you said you had the money and you didn’t (or that you took the money), people might get mad. They might demand their money back. Don’t let that happen. That would be bad. But also don’t worry about it.

Is that ‘fraud sort of happened?’ Is it ‘super ultra fraud from day one?’ Yes.

What about the Effective Altruism and the Benthamite Utilitarianism? Was that for real? Yes, in an abstract intellectual way. Number go up. There needs to be a McGuffin. A utility function. A justification for everything. This provided one.

If SBF was not so impulsive and impatient, we would not be able to tell. This is the Orthogonality Thesis and Instrumental Convergence. A proper SBF, with an actual linear utility function and expected impact curve, with any sane discount rate, would not be in the business of shoveling money out the window well before he’d maxed out his ability to provide himself with operating capital and guard against downside risks.

Instead, he would have done the amount necessary to convince most people he was sincere, as this would serve his purposes. Optimal fully fake SBF would be vegan and drive a Toyota Corolla. This was different. Next level. Could he fool me this way?

Sure, if he wanted to. But I believe him exactly because there would be no value in investing this much in order to fool me. We see him shoveling tons of money out the door, well in advance of any reasonable pace for doing so and in deeply irresponsible fashion, often in ways that plausibly make things worse while also putting him at risk.

That does not mean his positions on any of this were coherent, or optimized, or made any sense, or were a good thing, or anything like that. It does not mean that, if he had made his utilitarian Number Go Up, you or I would have liked that world, or that his play was +EV even by his own metrics. It also does not mean that his motivations would have survived as he gained further wealth and power. It does mean that I buy that he wanted to make the utilitarian Number Go Up as he saw it, up until the end.

In particular, this caught my eye:

In 2018, trading $40 million in capital, Alameda Research had generated $30 million in profits. Their effective altruist investors took half, leaving behind $15 million. Five million of that was lost to payroll and severance for the departing crowd; another $5 million was lost to expenses. On the remaining $5 million they’d paid taxes, and so, after all was said and done, they’d donated to effective altruist causes just $1.5 million. (1,841)

At this time, they were borrowing at 50% (!) interest in order to trade, were very much in danger of going broke, very much liquidity constrained, and the claim is they donated much or all of their yearly profits. This is a completely crazy thing to do, in a way that could not possibly have had sufficient signaling value to compensate for it, especially since it also sends other highly negative signals.
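The rough arithmetic, using the numbers above, makes the point. This is my own back-of-envelope illustration, not anything from the book:

```python
# Back-of-envelope cost of donating while borrowing at 50%, using the figures above.
# Illustrative only; the exact timing and amounts are not spelled out in the book.
import math

donated = 1_500_000   # dollars donated out of 2018 profits, per the quoted passage
borrow_rate = 0.50    # the interest rate the firm was reportedly paying to trade

# Every donated dollar is a dollar they then have to borrow back (or do without):
extra_interest_per_year = donated * borrow_rate
print(extra_interest_per_year)          # 750,000 dollars per year, just in interest

# Equivalently, retained capital at a 50% cost of capital doubles in under two years,
# so even a short delay would have let them give away far more later:
years_to_double = math.log(2) / math.log(1 + borrow_rate)
print(round(years_to_double, 2))        # ~1.71 years
```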

So I am inclined to believe him.

Also, that is not how any of this works. That is not what profits mean. Your expenses count. Your payroll counts. This is absurd.

Oh, and by the way: This is their 2018 deck, the same year, which claims >100% consistent annualized returns (and ‘no risk’).

So, yes. A fraud from the start.

What about the Vox interview with Kelsey Piper? Didn’t SBF admit to having no ethics, to it all being a lie? Well, kind of. He admitted that he plays lots of stupid signaling games and pretends to care about various issues including woke ones, and that he treats all that with contempt. He showed he cares not one bit for ethics, that reputation is only instrumental to him.

But all of that is totally compatible with being a true believer in EA and Benthamite Utilitarianism. In a post-book interview, Lewis calls the interview an aberration. But it wasn’t an aberration. It was peak Sam all the way.

How did he get to be that way? We’ll return to that at the end of the story.

Where Was This Guy?

This proved a bigger question than one might expect, well before SBF had any reason to be on the run. According to Lewis, SBF tasked a woman named Natalie, with no relevant prior experience in such matters, to manage all of his logistics and scheduling and also PR. And then SBF systematically ignored her advice, caused constant shitshows, and failed to tell her even where he was going to be or whether he intended to keep any of his prior commitments.

Luckily, she was a quick study.

She could never be sure where he was, for a start. “Don’t expect that he’ll tell you where he’s going to be at when,” said Natalie. “He’ll never tell you. You need to be smart and fast to find out by yourself.” And Sam might be anywhere, at any hour. She’d book him a room for two nights at the Four Seasons in Washington, DC, and Sam might even check in, but never enter the room. (174)

There had been nights Natalie had gone to bed at 3:00 a.m., set an alarm for 7:00, woken up to see what public relations shitstorm Sam might have caused in the interim, set a second alarm for 8:00, checked again, then set another alarm and fallen back to sleep until 9:30. (180)

She learned to humor the professor at Harvard, for example, by saying, “Yes, Sam told me he agreed to come and speak to a room full of important Harvard people at two next Friday. It’s on his schedule.” Yet even as she uttered those words, she’d already invented the excuse she’d make to that same Harvard person, likely next Thursday night, to explain why Sam would be nowhere near Massachusetts. Sam has Covid. The prime minister needed to see Sam. Sam is stuck in Kazakhstan. (195)

Why? Because Sam did not care about keeping his word or commitments. At all. Not unless he could point to concrete negative consequences of not doing so, which mostly did not much bother him. He cared about what he felt like doing, what seemed worth doing.

They didn’t know that inside Sam’s mind was a dial, with zero on one end and one hundred on the other. All he had done, when he said yes, was to assign some non-zero probability to the proposed use of his time. The dial would swing wildly as he calculated and recalculated the expected value of each commitment, right up until the moment he honored it or didn’t. (187)

The funny thing about these situations was that Sam never really meant to cause them, which in a way made them feel even more insulting. He didn’t mean to be rude. He didn’t mean to create chaos in other people’s lives. He was just moving through the world in the only way he knew how. The cost this implied for others simply never entered his calculations. With him it was never personal. If he stood you up, it was never on a whim, or the result of thoughtlessness. It was because he’d done some math in his head that proved that you weren’t worth the time. (199)

It required him to estimate probabilities, but also to guess. This was important; Sam didn’t care for games, like chess, where the players controlled everything and the best move was in theory perfectly calculable. (250)

[Gamer’s note: Sam did love bughouse, which is 4-player chess played on two boards. In theory it is indeed solvable, in practice there are enough variables you have to wing it. But this emphasizes that what matters to Sam is almost certainly that something is probabilistic in practice, not in theory.]

This was less ‘calculated’ than ‘some math,’ by which we mean something between a Fermi estimate, a motivated five-second approximation, and an ass pull. You can throw together numbers that justify whatever, if that is what you are inclined to do. Somehow Lewis thinks that ‘done some math in his head’ does not represent ‘a whim’ or ‘thoughtlessness.’ Yes, Sam thought he had something better to do with his time; he didn’t feel like doing what he said he would do. Call that exactly what it always is.

It gets easier if you give exactly zero consideration to the costs you impose on others, or how they might react, or the mess others will need to clean up, or any ethical considerations, or any second-order or other considerations he didn’t notice when giving this a few seconds of thought. Sam would likely object that this is not quite right, that he does consider the cost to the person inconvenienced, but he doesn’t care more about the person he’s inconveniencing than he does about people halfway across the world, so the size is trivial and who cares really? Think of all the good Sam can do by leaving you alone at lunch.

It also gets easier if you decide to treat your utility of money as linear, despite this being completely Obvious Nonsense on so many levels, such as whether not having enough money might suddenly be a real problem that meant the music would stop, and many clear acknowledgements that he had zero idea how to efficiently deploy even the money he already had.

Who Was This Guy as a Kid?

Books like this ask such questions. People think it matters. So here are some quotes?

The trip to the amusement park was a good example. When Sam was a small child, his mother had located a Six Flags or Great America park. She’d hauled him dutifully from amusement to amusement until she realized Sam wasn’t amused. Instead of throwing himself into the rides, he was watching her. “Are you having fun, Mom?” he asked finally, by which he meant, Is this really your or anyone else’s idea of fun? “I realized I had been busted,” said Barbara [Sam’s mom]. (397)

Yes. It actually is many people’s idea of fun. What interests me here is Barbara’s reaction. Why is she busted? The correct answer is ‘No, I’m here for you, and I’d be happy if you were having fun. A lot of kids find this kind of thing fun and I thought you might as well, but it is clear you aren’t.’

The idea that you as a parent have to not only take them on but also enjoy their kid activities is so toxic. Kid activities, like Trix, are for kids.

Then his mother realized that SBF was instead interested in talking about real things.

“I told him I was giving some paper, and he asked, ‘What’s it on?’ I gave him a bullshit answer, and he pressed me on it, and by the end of the walk we were in the middle of a deep conversation about the argument. The points he was making were better than any of the reviewers’. At that moment my parenting style changed.” (404)

This must have been so amazing. I can’t wait for this to happen to me with my kids.

And yet, she says everything changed, but it does not sound like she updated enough:

One interpretation of Sam’s childhood is that he was simply waiting for it to end. That’s how he thought about it, more or less: that he was holding his breath until other people grew up so he could talk to them. (459)

One day, in the seventh grade, it slipped. His mother returned from work to find Sam alone, in despair. “I came home, and he was crying,” recalled Barbara. “He said, ‘I’m so bored I’m going to die.’” (479)

By high school Sam had decided that he just didn’t like school, which was odd for a person who would finish at the top of his class. He’d also decided that at least some of the fault lay not with him but with school. (498)

School drove SBF away from books by forcing him to engage in stupid ways with stupid books, which SBF would later justify with arguments about information density.

In elementary school he’d read the Harry Potter books over and over. By the eighth grade he had stopped reading books altogether. “You start to associate it with a negative feeling, and you stop liking it,” he said. (505)

Seriously, what the actual f*** was SBF doing in a high school? (Also, why would he want to read the Harry Potter books a second time, one of the deeper unsolved mysteries remaining?) He was doing philosophy better than philosophers, off the cuff. He was bored out of his mind.

At their semi-famous deeply philosophical family dinners, SBF would hold his own against various guests. He wanted to talk and think about real things.

His parents decided to… send him to a more competitive high school?

They quite obviously should have instead sent him to Stanford.

Not that college ultimately went so well for Sam.

Two years of college classes and the previous summer’s internship, during which he’d helped MIT researchers with their projects, had killed that assumption. During college lectures he’d experienced a boredom that had the intensity of physical pain.

If he had been four years younger, perhaps it would have gone better.

Maybe they should have made him a gamer instead?

In sixth grade Sam heard about a game called Magic: The Gathering. For the next four years it was the only activity that consumed him faster than he could consume it. (539)

Or let him found a business or write or otherwise do something real.

Instead they did none of that. You give a child endless bullshit that can’t keep him engaged? He’s going to call you on it, whether it is an amusement park ride or pretentiousness.

The very first question on the final exam set him off. What’s the difference between art and entertainment? “It’s a bullshit distinction dreamed up by academics trying to justify the existence of their jobs,” wrote Sam, and handed the exam back. (532)

Or… Shakespeare? Here’s the now-famous quote.

I could go on and on about the failings of Shakespeare . . . but really I shouldn’t need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespeare wrote almost all Europeans were busy farming, and very few people attended university; few people were even literate—probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren’t very favorable. (519)

This is classic SBF thinking. Choose some considerations, ignore others’ opinions entirely and treat them as dumb, answer a different question than everyone else is asking, completely disregard aesthetics and history and really all context of any kind.

Here is early SBF doing philosophy to explain why the math says that actually murder is usually bad.

There are lots of good reasons why murder is usually a really bad thing: you cause distress to the friends and family of the murdered, you cause society to lose a potentially valuable member in which it has already invested a lot of food and education and resources, and you take away the life of a person who had already invested a lot into it. (602)

In the end murder is just a word and what’s important isn’t whether you try to apply the word to a situation but the facts of the situation that caused you to describe it as murder in the first place. (607)

Murder is just a word.

So is fault?

“I’m a utilitarian,” [SBF] wrote. “Fault is just a construct of human society. It serves different purposes for different people. It can be a tool to discourage bad actions; an attempt to recover pride in the face of hardship, an outlet for rage, and many more things. I guess maybe the most important definition—to me, at least—is how did everyone’s actions reflect on the probability distribution of their future behavior?” (1,797)

Indeed, why would any of those other aspects matter? Why stay stuck in the past?

SBF bites all the bullets, all the time, as we see throughout. Murder is bad because look at all the investments and productivity that would be lost, and the distress particular people might feel. I hope there isn’t too much on the other side of that ledger today. Luckily this one stayed theoretical as far as we know, but it sure sounds like SBF has no ‘on principle’ objection to killing an innocent person if they were to be standing in his way and it was especially important to not miss his next meeting. Or they kept saying inconvenient things.

To be fair, although I think the quote is more insightful without it, I did delete an important sentence in between those two quotes, which was:

But none of those apply to abortion. (602)

Why Was That Guy So Misaligned?

Since everything these days is about AI, consider SBF as a misaligned AGI (or NGI?).

Was Sam born a criminal? Of all Sam’s characteristics in the list I open with, one of the few that is conspicuously missing involves doing juvenile crimes.

Why would he? There was no point. Crime looked boring. Until it didn’t.

Having spent his entire childhood bored out of his mind, with no goal or utility function beyond not being bored, and having been sufficiently disabused of the virtues of academia, Sam did not know what to do. What would be a worthy goal or activity? Here we had this super smart person, lacking the motivations and interests that drive ordinary people, at a loss. What to do?

In stepped Will MacAskill, who suggested the goal of Number Go Up.

The argument that MacAskill put to Sam and a small group of Harvard students in the fall of 2012 went roughly as follows: you, student at an elite university, will spend roughly eighty thousand hours of your life working. If you are the sort of person who wants to “do good” in the world, what is the most effective way to spend those hours? It sounded like a question to which there were only qualitative answers, but MacAskill framed it in quantitative terms. He suggested that the students judge the effectiveness of their lives by counting the number of lives they saved during those eighty thousand hours. The goal was to maximize the number. (819)

“The demographics of who this appeals to are the demographics of a physics PhD program,” he said. “The levels of autism ten times the average. Lots of people on the spectrum.” (849)

The utilitarian part of this equation, the part where people don’t matter as individuals including himself, was already there.

“The notion that other people don’t matter as much as I do felt like a stretch,” he said. “I thought it would be bizarre even to think about.” (584)

Of course from your perspective you must in important senses care about yourself more than other people. You must care about those around you, close to you, in a different way than others. Without this both your life and also society fall apart, the engine of creation stops, the defectors extract everything, and so on. The consequences, the utilitarian calculus, is self-refuting.

Even more than that, if you take such abstractions too seriously, if you follow the math wherever it goes without pausing to check whether wrong conclusions are wrong? If you turn yourself into a system that optimizes for a maximalist goal like ‘save the most lives’ or ‘do the most good’ along a simple metric? What do you get?

You get misaligned, divorced from human values, aiming for a proxy metric that will often break even on the margin due to missing considerations, and break rather severely at scale if you gain too many affordances and push on it too hard, which is (in part, from one perspective) the SBF story.

Yet SBF did not take such concerns seriously. Many (very far from all!) others I know in EA also do not take such concerns seriously. The math is treated as real, the metric and the map as the territory, and so on.

MacAskill set SBF on a maximalist goal using an abstracted ungrounded simplified metric, hoping to extract a maximal amount of SBF’s resources for MacAskill’s (on their face altruistic) goals.

Did MacAskill understand the inevitable result? Would he have approved of the actions taken, or the consequences of them? No.

Nor do I expect the people who set other people and systems on such paths, most of the time, to appreciate what they are doing or what the consequences will be, either.

That does not change what MacAskill did: He took young SBF, a powerful agent, a proto-utilitarian in want of a functioning definition of utility and a willingness to bite all of the bullets and ignore all of the unprincipled reasons not to be a terrible person doing terrible things, and gave him a maximalist utility function to save the most lives possible (or do the most good possible, as defined by things that can be quantified and measured, and then added up linearly, with no risk aversion).

Then he pointed out and argued explicitly that the way to do that was via instrumental convergence. Rather than doing good or saving lives directly, you could maximize money, and then spend the money to do the good or save the lives. Which meant that SBF’s behaviors should look no different from anyone looking to make money, except when you give the money away afterwards. That was the intended path.

And then that led SBF directly into contact with finance and trading, and their zero-sum-style competitions, and to move from chess and Magic to trading as his puzzle of choice.

What happened with SBF will happen with an AI given a similar target, in terms of having misalignments that start out tolerable but steadily grow worse as capabilities increase and you face situations outside of the distribution, and things start to spiral to places very far from anything you ever would have intended.

Imagine a world in which SBF’s motivations had even fewer anchors to human intuition, and in which he had a much larger capabilities advantage over others (say he was orders of magnitude faster, and could make instantiations of himself?), and in which he had acted such that the house of cards had not come crashing down, and instead of taking the risks and trying to score object-level wins prematurely he had mostly instead steadily accumulated more money and power, until no one could stop him and he could act on his inclination to risk all of humanity every time some math calculation told him he had a tiny edge.

Which for a while was kind of fine, because he’d landed at Jane Street (which we’ll cover soon), where they had strong alignment and good supervision of their agents, and where the way to succeed and make money and climb the incentive gradient was to be socially responsible and honest and manage risk and make money straight up and pretend to be a normal person with normal tones of voice and facial expressions. So he did his best on all such fronts, and for a while it was fine, or fine-ish.

Then he left Jane Street for crypto, where fraud was par for the course, because that was also where he could make the most money, which was what MacAskill told him to do. A world full of fraud, and a world vulnerable to fraud, where everyone was constantly breaking the rules and laws.

Then, as they always do, cheating, lying and fraud fed upon themselves. Get away with a little, feel the rush, get reinforced, update that you can get away with a little more. Grow contempt for the rules. Rinse. Repeat. Once the doom loop starts, it rarely stops until the inevitable blow-up.

The rest is the last few chapters of the book.

Also notice how little anyone did to try and stop him, despite all the giant fire alarms, other than those he directly attacked before he was ready to do so.

Will All of This Happen Again?

We are still doing this.

We are taking many of the brightest young people. We are telling them to orient themselves as utility maximizers with scope sensitivity, willing to deploy instrumental convergence. Taught by modern overprotective society to look for rules they can follow so that they can be blameless good people, they are offered a set of rules that tells them to plan their whole lives around sacrifices on an altar, with no limit to the demand for such sacrifices. And then, in addition to telling them to in turn recruit more people to and raise more money for the cause, we point them into the places they can earn the best ‘career capital’ or money or ‘do the most good,’ which more often than not have structures that systematically destroy these people’s souls.

SBF was a special case. He among other things, and in his own words, did not have a soul to begin with. But various versions of this sort of thing are going to keep happening, if we do not learn to ground ourselves in real (virtue?!) ethics, in love of the world and its people.

All of this has happened before. If we are not careful, all of this will happen again.

Was there a reckoning, a post-mortem, an update, for those who need one? Somewhat. Not anything like enough. There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism. There were general recriminations. There were lots of explicit statements that no, of course we did not mean that and of course we do not endorse any of that, no one should be doing any of that. And yes, I think everyone means it. But it’s based on, essentially, unprincipled hacks on top of the system, rather than fixing the root problem, and the smartest kids in the world are going to keep noticing this. We need to instead dig into the root causes, to design systems and find ways of being that do not need such hacks, while still preserving what makes such real efforts to seek truth and change the world for the better special in the first place.

Then we are going to do the same thing with Artificial General Intelligence. Make it an agent, give it a maximalist goal that goes misaligned outside the intended distribution, that ignores key second-order and ethical considerations, and that is inherently incompatible with the necessary safeguards, and unleash it upon the world. It will not end well for us. And that could even be thought of as the good scenario, where we are able to point the thing towards anything at all.

So yes. Remember and beware the tale of Sam Bankman-Fried. Not to blame or to label, but to learn from it. Do not let history repeat itself.

Behold the Power of Yup

Sam’s great persona transformation, the book says, was when Sam realized that he should stop trying to make his words have meaning or map in any way to reality, and instead focus purely on agreeing with everything anyone said and telling people what they want to hear.

But he wasn’t going to change human nature, and so he decided that, going forward, he would bury any negative reactions he had to anything anyone said or did. He would give human beings with whom he interacted the impression that he was far more interested in whatever they were saying or doing than he actually was. He’d agree with them, even if he didn’t. Whatever idiocy came from them, he’d reply with a Yuuuuuuppp! “It comes with a cost, but it’s on balance worth it,” he said. “In most ways, people like you more if you agree with them.” He went from being a person you’d be surprised to learn approves of you to a person you’d be surprised to learn that, actually no, he doesn’t. (1,834)

The claim is that this completely brazen strategy flat out works, including when dealing with the rich, famous and powerful. Could it be this easy?

“Yup” was Sam’s go-to word, and the less he’d actually listened to whatever you’d just said, the longer he drew it out. Yuuuuuuuuup. (209)

This is a great reverse tell, because normally extending your ‘yup’ means that you are aware of the gravity of the situation.

The best part of a strategy where your entire plan is to agree with everything is you do not need to listen to what people say or have the slightest interest in it.

Sam was game to talk to anyone—so long as he could play a video game while doing it. Sam went from being totally private to being a media whore. (161)

I’ve come around to the video game playing being genius. By trying to also play games that will not wait for you, like Storybook Brawl and League of Legends, SBF had a constant look that he was engaged and paying close attention. That is a hard thing to fake. Better to make it real. Also you get to play the video games.

What Sam also did frequently was to talk completely unguarded. Most famously on Odd Lots there was The Box (link goes to episode, if you don’t know I won’t ruin it for you) but such statements were common.

This also went for philosophy, as when he told Tyler Cowen he would take a 51% coinflip to double or destroy the Earth, and then keep taking the flips until everyone was dead. Reminds me of Trump, who will lie right to your face but will sometimes be honest about lying right to your face, which some people find endearing.

Thus, Sam in some ways had a reputation for honesty.

“Sam was unlike anyone else, in that when he stated his opinion he did it with exactly his level of confidence, which is often very high,” recalled Adam Yedidia. (1,250)

Often he’d leave her feeling uneasy about how much he’d disclosed, and to perfect strangers. “There are some times I told him in the early days, You don’t have to be so honest. In crypto everyone bluffs. Sam is always, Let me show you my last card.” (3,457)

Yes, Sam was often very (over)confident, and said so. He also constantly lied to everyone’s face and told them what they wanted to hear.

I wonder if this was the trick to getting an actual Sam opinion. If he does not give you a probability, look out, he wasn’t even listening to you, sorry man. If he does give you a probability, then sure you have to recalibrate it but probabilities might be a sort of sacred trust, and also it is much harder to know what you want to hear.

It also helps to be graded on the crypto curve. Do you have any idea how little you could trust the word of anyone in crypto in 2018? A simple ‘you follow professional norms and honor the word done in a trading chat’ backed with halfway decent execution went a long way.

The book claims that the PR campaign really was a pure ‘let Sam be Sam,’ where being Sam meant this superficially agreeable persona who took meetings while playing video games, combined with a willingness to talk remarkably frankly about technical details. They say that Natalie, the woman charged with PR and Sam’s calendar despite having zero relevant experience, tried to call in professional help, but that the professionals say they did nothing.

To help her in her new and unfamiliar role as head of FTX public relations, Natalie had called a New York public relations firm called M Group Strategic Communications. Its head, Jay Morakis, was at first wary.

“I thought maybe it was some shady Chinese thing,” he said. But then he heard Sam’s pitch, and watched Sam’s first big public appearance, on Bloomberg TV. “Whatever the closest thing in my PR experience has been to this, nothing is close,” he said. “I’m fifty years old. I’ve had my firm for twenty years and I’ve never seen anything like it. All my guys want to meet Sam. I have CEOs calling me and asking: Can you do for us what you did for Sam?” He’d had to explain, back in 2021, that he actually hadn’t done anything. Sam had just sort of . . . happened. (2,025)

Patrick McKenzie is in awe of this paragraph.

Patrick McKenzie (recently): We get more than a chapter with a twenty-something Taiwanese aide de camp who has no PR experience. Lewis would have you believe she single-handedly managed calendar, juggled magazine cover shoots, and put on conferences featuring e.g. Clinton.

And we get one paragraph where the CEO of a strategic comms consultancy shows up to modestly refuse any credit, then exits from the narrative at the speed of light.

*shakes head in confused mix of exasperation and professional regard*

God they’re good.

The subtext, and is it even subtext, in the above is that: it is resoundingly unlikely one person, no matter talent level nor devotion, achieved the results that operation achieved. At some point you are limited by keystrokes possible per day.

And then the sort of stunning level of chutzpah it takes for the strategic consultancy to get a deniable PR hit *in a Michael Lewis book cataloging their client’s implosion.* Which is carefully calibrated to say everything it needs to say to people who have budget authority.

Patrick McKenzie (11/17/22): An offhand comment on the topic of the day, from a comms professional: if you look at the number of interviews in prestige publications, the timing of them, the magazine covers, and the glowing coverage *post implosion*, I think you start to perceive the dark matter of a PR firm.

I wish I knew who it was, and I’m not sure the conclusion would be “Never work with them” or “Definitely work with them; they apparently have the ability to root any media org they want at any time given any facts.”

They also seem to be intensely loyal to their client even though he is very, very clearly not listening to their advice.

An aside: some of the pieces which read to the general public as puff pieces read to journalists as hit pieces, for complicated cultural reasons. There is a language to these things, like there is a language to LessWrong.

I am going to go ahead and agree with Patrick McKenzie that we saw, especially in the aftermath of FTX’s collapse, what he described as ‘the dark matter of a PR firm.’ I do not buy the story that there were no public relations professionals involved, that Sam went out and said ‘yep’ a lot while playing video games, driving a Corolla and having a net worth of $20 billion, while one person with no experience did all the arrangements and scrambling, and everyone loved him and every press source treated him with kid gloves and all that.

The system is hackable. It is not that hackable. The parts of Sam’s public relations operations I did have interaction with were very much conscious of exactly how public relations works via their well-compensated expert consultants. It would have been completely insane to do things any other way. Even by SBF standards.

This is one of many places where I am confident the events the book describes happened, and I am also confident that there is quite a lot of ‘dark matter’ that is being left out, at least a lot of which Michael Lewis never found. Some of it I get a chance to mention here. Definitely not all, not even of the parts I know about.

Now for the chronological story of how this all played out.

First stop, Jane Street Capital.

Jane Street Capital

This is the part of the story I am best able to fact check. I too worked at Jane Street Capital, and directly witnessed a lot of this part of the story.

I also am deeply thankful to Jane Street Capital. It was not a fit for me in the end, but it was a pretty great place to work and they treated me right. I am not about to spill their secrets in ways they would not want such secrets spilled. I can safely say that this chapter is not entirely accurate, but the inaccuracies do not bear strongly on the SBF story, so I will decline to elaborate further.

SBF does not have that kind of ethical code. He was happy, on top of all SBF’s actions at the time, to share a bunch of details with Michael Lewis.

I will say that the description of the interview process was spot on, I very much enjoyed my shot at it, and that I can totally believe SBF got the high score.

Jane Street offered him a summer internship. So for that matter did the other high-frequency trading firms that had invited him to apply. One firm had halted their interview process midway through and announced that Sam had done so much better at their weird games and puzzles than every other applicant that there was no longer any point in watching him play. (769)

As I have said before, I recommend going through the Jane Street interview process, even if you do not think you have much chance of being hired. It is great.

Matt Levine has extensively discussed the Asher Incident.

What matters to the broader story is that this Asher guy was another intern who offered SBF a bet without thinking through the implications, offering SBF a chance to both make about $33 in expected value and also humiliate Asher, and oh boy did SBF take full advantage, continuing to rub it in well past the point where he had secured his profit, to where he was doing nothing other than rubbing it in Asher’s face.

“It was not like I was unaware I was being a piece of shit to Asher,” he said. “The relevant thing was: Should I decide to prioritize making the people around me feel better, or proving my point?” (951)

This is such a strange response. There was quickly nothing left to prove once Sam pointed out Asher’s mistake. Sam did not prioritize proving his point over having others feel better. He prioritized making Asher feel worse. That is not cute nerd indifference to social cues. Michael Lewis pretends not to notice the difference.

The actual bet was on the maximum loss by any Jane Street intern that day from gambling. Interns were encouraged to bet and make markets against each other, with a maximum loss of $100 per day so no one got seriously hurt. SBF bought a contract for $65 that paid out equal to the maximum loss, then (of course) paid $1 to get another intern to flip a coin for, oh, about $99. Then, for no good reason, he did it two more times.

Matt Levine and his readers point out that there was a better version of the trade, which was to pay $1 to get two interns to flip against each other. What he forgets is that SBF is not one to shy from variance. What was not pointed out was the fun problem that if SBF had lost the initial coin flip, then Asher could have claimed that since SBF won his bet with Asher, he hadn’t lost the full $100. The bet is now self-referential. And SBF couldn’t have clarified this before accepting the bet without tipping Asher off to why SBF would always win.

What is the result? Presumably you need the result where the contract resolves correctly: Sam has to lose exactly what the contract pays out, so it lands at the halfway point, resolving at $82.50 and netting Sam $17.50 on his $65? And by flipping the coin himself, Sam gave up a quarter of his expected profit?
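Here is that fixed-point arithmetic spelled out, using the round numbers above; treat it as an illustrative sketch, not as anything from the book or from Levine:

```python
# Self-referential resolution of the intern bet, using approximate figures:
# a $65 contract that pays the day's maximum intern loss, and roughly a $100
# loss for whoever loses the ~$99 coin flip.

contract_price = 65.0
gross_loss = 100.0    # approximate loss for the losing side of the flip

# If Sam loses the flip, Asher can argue Sam's *net* loss is reduced by the
# contract's own payout x, and the contract should resolve to the day's max loss:
#   x = gross_loss - (x - contract_price)
x = (gross_loss + contract_price) / 2
print(x, x - contract_price)    # 82.5, 17.5 -> resolves at $82.50, netting $17.50

# Expected contract profit if Sam flips the coin himself (the fair flip is EV-neutral):
ev_self = 0.5 * (gross_loss - contract_price) + 0.5 * (x - contract_price)   # 26.25
# Contract profit if two *other* interns flip against each other instead:
profit_others = gross_loss - contract_price                                  # 35.0
print(ev_self / profit_others)  # 0.75 -> roughly a quarter of the profit given up
```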

The bosses were not happy, and thought Sam needed to learn to read the room.

Sam thought his bosses had misread his social problems. They thought he needed to learn how to read other people. Sam believed the opposite was true. “I read people pretty well,” he said. “They just didn’t read me.” (951)

It had not occurred to the bosses, presumably, that Sam could read others fine. The problem was that Sam did not care. Either way, Sam was right that his lack of ordinary facial expressions was a problem.

So Operation Ordinary Facial Expressions was born.

Just because he didn’t feel the emotion didn’t mean he couldn’t convey it. He’d started with his facial expressions. He practiced forcing his mouth and eyes to move in ways they didn’t naturally. (1,014)

Here’s one anecdote I can confirm, and it was as crazy as it sounds:

Every time Brazil won a World Cup match, the Brazilian stock market tanked, for instance, because the win was thought to increase the shot at reelection of Brazilian president Dilma Rousseff, perceived to be corrupt. (1,111)

The book then describes a refinement or extension of this trade, which I would not choose to confirm or deny regardless of whether it happened. What if SBF could simply predict the outcome of the 2016 election an hour ahead of everyone else?

That is, it would be surprising if Jane Street couldn’t learn the results of the presidential election before anyone else in the financial markets or for that matter the entire world. (1,123)

There then follows a section on Sam’s version of what happened with the 2016 presidential election, where he and Lewis both draw all the wrong conclusions (even if you were to believe the exact story presented), and where Sam became convinced that he could be the best like no one ever was and make all the money, and that Jane Street was not maximizing enough and thus was holding him back.

Then a bit later, SBF decided to leave Jane Street, because he discovered the Japan and South Korea Bitcoin arbitrage trade, and he wanted the trade all to himself.

This is one place I will introduce myself into the story a tiny bit. When Sam decided to quit, the two of us went for a walk in the park. He said he was leaving to run or at least help run CEA, the Center for Effective Altruism.

Which was not a crazy fit. Sam was clearly deeply into EA, and the thesis that he could be a major upgrade there seemed plausible, as did the possibility that from his perspective this could be high leverage. I was confused by his decision, since Jane Street seemed like a better fit for him, but we strategized a bit about how much good could be done and I wished him the best of luck.

As we all know now, he was, as with everyone else, lying right to my face.

During his final weeks at Jane Street, Sam traveled to Boston just to tell Gary about his plan to make a billion dollars trading crypto for effective altruistic causes. (1,423)

He admitted it. He was not leaving to join CEA. He was leaving to pursue the Japan trade. And he had decided that I was not someone he wanted to bring in on that.

I’d also note Sam wrote this:

“but [my coworkers] show no interest in seeing who I really am, in hearing the thoughts I hold back. The more honest I try to make our friendships, the more they fade away. No one is curious. No one cares, not really, about the self I see. They care about the Sam they see, and what he means to them. And they don’t seem to understand who that Sam is—a product of thoughts that I decide people should hear. My real-life twitter account.” (1,205)

I am not saying I made the most robust effort to build a close friendship with Sam, but I was right there, happy to talk to him before he was him, culturally rather adjacent sharing many of his interests, and (I like to think) very clearly able to keep a secret. We had him over for dinner once, and by my wife’s recollection he didn’t say two words to her, nor did he eat any of the food (yes we’re not vegans but we do make efforts to accommodate), while trying to get me to disassociate with Ben Hoffman because he was making concrete criticisms of EA and such criticisms hurt the cause. So, yeah.

On reflection, it was vastly overdetermined that Sam had no reason to tell me what was going on. Why take that risk? I clearly was not about to work 18 hour days in Berkeley, California. It was unlikely I would have let him own 100% of the firm. I was too old and busted, a ‘grown-up’ he had no use for, and ultimately a rationalist is not an EA who will trust Sam completely, so in many ways I was the opposite of EA. Why take the risk that I might not keep his confidence?

Looking back now, of course, I am for my own sake deeply happy that he did not attempt to take me with him. How might things have been different if I had somehow ended up going with him? Would I have been able to steer things to turn out differently? Would I have been only another person who left with the management team, or another witness on the stand, or could I have helped steer the ship? Would I have perhaps managed to block the Anthropic investment? We will never know.

Without loss of generality, and without confirming any of the other bits (there are too many different things to get into it all), I’d also like to dispute this (very gentle?) slander:

By Wall Street standards, Jane Street was not a greedy place. Its principals did not flaunt their wealth in the way that the guys who had founded other high-frequency trading firms loved to do. They didn’t buy pro sports teams or hurl money at Ivy League schools to get buildings named for themselves. They were not opposed to saving a few lives. But Jane Street was still on Wall Street. To survive, it needed its employees to grow attached to their annual bonuses, and accustomed to their five-bedroom Manhattan apartments and quiet, understated summer houses in the Hamptons. The flood of effective altruists into the firm was worrisome. (1,312)

That is quite a rich thing to say. The employees I knew in no way felt stuck trading in order to support their lavish lifestyles. They traded because they very much enjoyed it, were good at it, liked the team they were on and so on. Many indeed were and are largely altruists, and wanted to do good as effectively as possible. Not being ostentatious was not an act.

It was the flood of effective altruists out of the firm that was worrisome. It was the effective altruists who were the greedy ones, who were convinced they could make more money outside the firm, and that they had a moral obligation to do so. You know, for the common good. They proved themselves neither honest nor loyal. Neither was ‘part of their utility function.’

All right. On to Alameda.

Soiling the Good Name of Alameda County

All time great chapter opening. Again, the man can write.

It took only a couple of weeks of working for Sam before Caroline Ellison called her mother and sobbed into the phone that she’d just made the biggest mistake of her life. (1,271)

Caroline had many very good instincts throughout. If only she had followed them.

Over coffee in Berkeley, Sam was cagey about what he was up to. “It was, ‘I’m working on something secret and I can’t talk about it,’ ” recalled Caroline. “He was worried about recruiting from Jane Street. But after we talked a while, he said, ‘I guess maybe I could tell you.’ (1,295)

Oh yes, Sam, famously worried about recruiting from Jane Street.

In late March she started the job. The situation inside Alameda Research wasn’t anything like Sam had led her to expect. He’d recruited twenty or so EAs, most of them in their twenties, all but one without experience trading in financial markets. (1,337)

Why did he recruit EAs? Partly because he thought EAs would work infinite hours for almost no pay and still be worthy of and provide limitless trust. Exploit the recruits for cheap labor, without even the pretense of a non-profit.

Anyone who started a crypto trading firm would need to trust his employees deeply, as any employee could hit a button and wire the crypto to a personal account without anyone else ever having the first idea what had happened. Wall Street firms were not capable of generating that level of trust, but EA was. (1,402)

That would explain how Alameda could lose money trading crypto in large parts of 2018, despite it being extremely difficult to lose money trading crypto in 2018 if you know how trading works. It could also explain why Sam wanted to rely on his bot program. No one knew how to trade!

Alameda started out with the arbitrage trade with South Korea and Japan. It is not clear to what extent they managed to take advantage of it. The book describes them as getting only secondary, much less profitable versions of it, with Sam debating various wild schemes to do better but not pulling the trigger on them because they were too absurd even for him, and ultimately the opportunity vanishing.

It wasn’t Sam’s first thought, but he considered buying a jumbo jet and flying it back and forth from Seoul, filled with South Koreans carrying suitcases each holding $10,000 worth of won, to a small island off the coast of Japan. “The problem is that it wasn’t scalable,” said Sam. “To make it worthwhile, we needed like ten thousand South Koreans a day. And we probably would have attracted so much attention doing it that we would have been shut down. Once the South Korean central bank saw you with ten thousand South Koreans carrying suitcases full of won they’d be like, There’s going to be a new direction here.” Still, he was tempted. (1,483)

Which brings us to the bot.

The bot in question was called Modelbot. I would have simply called it Arbbot.

The idea was simple, and also very much something I would have tried. There were a lot of different exchanges trading a lot of cryptos at lots of different prices. Sometimes the prices were different. When that happened, you could do various forms of arbitrage (and statistical arbitrage). Sam, being a trader and also being Sam, was a big fan of taking the free money.
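To make the shape of the trade concrete, here is a minimal sketch of the kind of cross-exchange check such a bot runs. The exchange prices and fee rate are invented for illustration; the book says nothing about Modelbot’s actual logic:

```python
# Hypothetical cross-exchange arbitrage check. Prices and fees are made up.

def arb_profit_per_coin(bid_on_a: float, ask_on_b: float, fee_rate: float = 0.002) -> float:
    """Profit per coin from buying on exchange B at its ask and selling on exchange A
    at its bid, after paying an assumed proportional fee on each leg.
    Positive means free money (before slippage and transfer risk)."""
    cost = ask_on_b * (1 + fee_rate)
    proceeds = bid_on_a * (1 - fee_rate)
    return proceeds - cost

# Example: the same coin quoted at a $7,950 ask on one venue and an $8,020 bid on another.
print(round(arb_profit_per_coin(bid_on_a=8020.0, ask_on_b=7950.0), 2))   # ~38.06 per coin
```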

What made it extra attractive was that, while Sam had declined (or failed) to hire actual traders, he did manage to hire a world class programmer, and he did manage to raise capital while exploiting the country arbitrage trade.

They didn’t blow up, not at first. Those first few weeks, they made no real money, but then they had only a few people and Sam’s bonus money. By the end of December, they’d hired a bunch of people and raised $25 million in capital. Gary, basically all by himself, had written the code for an entire quantitative system. That month they generated several million dollars in profits. In January 2018 their profits rose to half a million dollars each day, on a capital base of $40 million—whereupon an effective altruist named Jaan Tallinn, who’d made his fortune in Skype, handed them $130 million more to play with.

So Sam built Modelbot to do exactly that, and he would have gotten away with it too except for those meddling (Effective Altruist) kids.

He had not been able to let Modelbot rip the way he’d liked—because just about every other human being inside Alameda Research was doing whatever they could to stop him. “It was entirely within the realm of possibility that we could lose all our money in an hour,” said one. One hundred seventy million dollars that might otherwise go to effective altruism could simply go poof. (1,368)

Not ‘170 million dollars of our investors money.’ Not ‘our only opportunity to trade, obviously we wouldn’t get another.’ Not even ‘and if we lost it like that I can’t help but wonder if we’d have some legal or other problems to deal with.’

No, this was 170 million dollars ‘that might otherwise go to effective altruism.’

Something is deeply, deeply wrong with that picture. Although not as wrong as Sam’s part of the picture.

[One] evening, Tara argued heatedly with Sam until he caved and agreed to what she thought was a reasonable compromise: he could turn on Modelbot so long as he and at least one other person were present to watch it, but should turn it off if it started losing money. “I said, ‘Okay, I’m going home to go to sleep,’ and as soon as I left, Sam turned it on and fell asleep,” recalled Tara. From that moment the entire management team gave up on ever trusting Sam. (1,372)

I mean, good on the management team for fully updating on trusting Sam, although not fully updating on ‘this person needs to be removed immediately if not sooner,’ assuming Tara’s account is accurate, and the book does not say that Sam disputes it, nor does it seem remotely inconsistent with other Sam things. Turning on the bot after promising not to is bad enough, but turning on a new bot and then falling asleep with no one else watching it? Yeah, that is another planet of not okay.

That is not the weird part of the story. The weird part of the story is, why was it non-trivial to test whether or not the bot worked?

Any bot you would ever dare turn on has various risk limits. You can turn the bot on, with very low limits on how much it is allowed to trade. Do the trades small. See if you end up with more money than you started with, in the same places it started. If you can do that, you can start slowly ramping the numbers up. Standard procedure. If you can’t do that, you haven’t finished programming your bot, so get on that. Have multiple people watching at all times, analyzing the trades, seeing if things make sense, refining your algorithms as you go.
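As a concrete illustration of that standard procedure, here is a minimal sketch: tiny limits first, verify the bot ends its session up and looking sane, then grow the limits. The bot interface and every number here are hypothetical, not anything the book describes:

```python
# Hypothetical ramp-up loop for a trading bot under risk limits. Illustrative only.
from dataclasses import dataclass

@dataclass
class RiskLimits:
    max_position_usd: float    # largest position the bot may hold at once
    max_daily_loss_usd: float  # kill switch: halt trading past this loss

def ramp_up(bot, limits: RiskLimits, growth: float = 2.0, position_cap: float = 1_000_000.0):
    """Run the bot at small size; only raise limits after a profitable, sane-looking session."""
    while limits.max_position_usd <= position_cap:
        result = bot.run_session(limits)        # hypothetical: trade one session under limits
        if result.pnl <= 0 or not result.trades_look_sane:
            bot.halt()                          # stop and debug before risking more
            return limits
        limits.max_position_usd *= growth       # profitable and sane: allow bigger trades
        limits.max_daily_loss_usd *= growth
    return limits
```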

Instead, the claim is that Sam turned the program on with no one watching, without any reason not to wait, and that they then went on indefinitely with no way to test whether the program would actually work if one turned it on. I notice I am confused.

Alameda, without a profitable bot and without the arbitrage trade, started bleeding money as per the book’s own report, and they were paying very high interest rates to borrow money. Things did not look so good and were escalating quickly.

Then comes the story of the missing Ripple. They were supposed to have $4 million worth of Ripple. Then they lost it. No one knew where it was. What to do?

Sam’s attitude was that the Ripple would probably turn up, so no duty to the investors to say anything, no need to worry, carry on your day. Others were, understandably, rather more concerned?

After the fact, if we never get any of the Ripple back, no one is going to say it is reasonable for us to have said we have eighty percent of the Ripple. Everyone is just going to say we lied to them. We’ll be accused by our investors of fraud. That sort of argument just bugged the hell out of Sam. He hated the way inherently probabilistic situations would be interpreted, after the fact, as having been black-and-white, or good and bad, or right and wrong.

Remind you of anything that’s going to happen in the second half of the book? Yes, it turns out that if you tell people everything’s fine but you have reason to know it very well might not be fine, often that would constitute fraud. You cannot, in general, simply not mention or account for things that you’d rather not mention or account for.

So between Sam asking everyone to work 18 hour days all the time, and being a generally irritable and horrible person to work for, and being completely untrustworthy and risking all the money for no reason, and misplacing $4 million in Ripple and then proposing to act like that hadn’t happened, and also the company bleeding money and a bunch of other stuff, for some strange reason Sam’s entire management team decided they had had enough and wanted Sam out.

The management team ran into a problem. Thanks to them taking ‘I promise I’ll get to that later, we need to move fast’ as an explanation, Sam owned the entire company. Somehow everyone had allowed this.

The book’s account also claims the offer had an absurd clause that was completely unsignable, aiming to bankrupt Sam outright. Not typically how one gets to yes.

For a start, Sam owned the entire company. He’d structured it so that no one else had equity, only promises of equity down the road. In a tense meeting, the others offered to buy him out, but at a fraction of what Sam thought the firm to be worth, and the offer came with diabolical fine print: Sam would remain liable for all taxes on any future Alameda profits. At least some of his fellow effective altruists aimed to bankrupt Sam, almost as a service to humanity, so that he might never be allowed to trade again. (1,555)

What? Liable for all taxes on any future Alameda profits? As fine print they hoped Sam wouldn’t notice, perhaps? That is the most absurd ask I have ever seen. Sam obviously would rather light the entire enterprise on fire than agree to that. I have no knowledge here one way or the other, but I have to assume this is not a complete and accurate description?

The book confirms that the whole thing seemed pretty nuts the way it is described.

“The conversations we had were absolutely fucking nuts,” he recalled. “Like to what extent Sam should be excommunicated for deceiving EAs and wasting EA talent. And like ‘the only way Sam will learn is if he actually goes bankrupt.’ They told our investors he was faking being an EA, because it was the meanest thing they could think to say.” Ruining Sam wasn’t enough, however: they expected to be paid on their way out the door. “They wanted severance, even though they were quitting and it was a money-losing operation in which they didn’t have a stake,” said Nishad. “They were saying that Sam needed to buy them out and they were worth more than one hundred percent of the value of the entire company because Sam was a net negative.” (1,575)

And now all these unprofitable effective altruists were demanding to be paid millions to quit—and doing whatever they could to trash Sam’s reputation with the outside world until they got their money. (1,584)

That is not typically how any of this works. Quitters do not typically get severance. Quitters definitely do not typically get more than 100% of the value of the entire company. If someone demands you buy them out for more than the company is worth, and they accurately describe how much the company is worth, presumably you say ‘wait, that is more than the company is worth, why would I ever pay that?’

To his credit, Nishad noticed that he was deeply confused.

It occurred to Nishad that the effective altruist’s relationship to money was more than a little bizarre. Basically all of Alameda Research’s employees and investors were committed to giving all their money away to roughly the same charitable causes. You might surmise that they wouldn’t much care who wound up with the money, as it all would go to saving the lives of the same people none of them would ever meet. You would be wrong: in their financial dealings with each other, the effective altruists were more ruthless than Russian oligarchs. Their investors were charging them a rate of interest of 50 percent. “It wasn’t a normal loan,” said Nishad. “It was a shark loan.” In what was meant to be a collaborative enterprise, Sam had refused to share any equity (1,578)

Ah, yes, the funders demanding 50% interest. I can take this one. Loaning money to even a relatively responsible crypto firm is highly risky and, typically, deeply stupid. This is not 2022 hindsight: back in 2018 I was trading for a crypto firm that had borrowed money, and I remarked at the time that I had no idea why anyone had voluntarily loaned us any.

Invest in a crypto trading firm? Sure, maybe. Could work. Big upside. But why would you instead loan money to a crypto firm, where if Number Go Up you get a modest interest payment and if Number Go Down your number goes down to zero?

The ideal answer is that you don’t. If you must, earn enough interest that it is worth it. A rate of 50% seems if anything a bit low.

The whole idea of the EAs who left ‘trashing Sam’s reputation’ is treated as a big deal here, and as the reason the funders cut back a lot in size. But I never heard the complaints until FTX was blowing up? Most I know didn’t hear them? Given how big FTX was in EA spaces, does it seem a bit weird that this massive reputation-trashing operation went so unnoticed? They certainly had plenty of good material to work with. If they’d presented the facts as laid out in the book, that seems like enough?

Why do we get to flash forward to this, after it all fell apart:

From the start, Zane had been enthralled by Sam, and by the empire he might create. But he hadn’t signed up to the cause blindly. Before joining FTX, he’d consulted his old friends in crypto. CZ was one of them. “It was CZ who told me about him,” he now recalled. “He said, ‘I think that’d be a really good option for you.’ People have asked me, ‘How did you come to trust Sam so much?’ CZ was the start of it. But nobody had a bad thing to say about him.” Zane was the gunslinger who’d been talked into making a respectable home in the town alongside what appeared to be law-abiding folk. Lots of big crypto speculators had entrusted their money to FTX because they trusted Zane. (3,338)

Why was it that EA leadership didn’t get the message to Eliezer [EA(p) · GW(p)]?

And as for the funders that cut back in size, maybe the reason for that was that they gave Sam big size so Sam could do the one big trade, and now that it was over it made sense to scale back down?

Instead, the book says that seven figures in severance was indeed paid, Sam went on with Alameda, and then everyone kind of forgets, and by the later parts of the book Sam is seen as having an unblemished reputation.

So, you’ve had your entire management team walk out. What will you do next?

What happened next, in retrospect, seems faintly incredible. With no one left to argue with him, Sam threw the switch and let Modelbot rip. “We turned it on and it instantly started making us lots of money,” said Nishad. And then they finally found the $4 million worth of missing Ripple. (1,606)

I notice I am still confused. Modelbot instantly made a lot of money? Why didn’t they turn it on for small before? Was there no experiment to run? What happened the previous time that Sam turned it on and fell asleep? None of this makes sense.

(What happened to the Ripple was that it was improperly labeled when sent to an exchange, so it piled up there while the exchange had no idea whose it was until they finally traced the thing and figured it out, at which point the exchange yelled at them for being complete idiots but did hand over the Ripple. Very nice of them, and also pretty insane that they let things drag out that long before figuring it out.)

Those who stayed behind did not make the correct updates.

They were no longer a random assortment of effective altruists. They were a small team who had endured an alarming drama and now trusted Sam. He’d been right all along! (1,618)

Sam proved he could design a profitable arbitrage bot. And that he got lucky with his carelessness. That this time the risks paid off. Also that he was a terrible manager and team builder whose chosen management team all hated him so much after a short period that they walked out on him while actively trying to take him down.

Those who remained concluded… other things.

Sam also concluded that since EAs would not play ball, he shouldn’t hire so many EAs.

His fellow EAs’ behavior caused him to update his understanding of their probability distributions in ways that left him less willing to hire EAs. (1,805)

As much as I criticize EAs here and elsewhere, they do tend to notice when you are completely untrustworthy and your statements are not at all truth tracking. And they tend to then care about it. They often don’t actually want to work constant 18 hour days. Also, newly hired EAs were not going to be personally loyal in the way that SBF wanted.

How fraudulent was the operation? Once again: This is their 2018 deck, which claims >100% consistent annualized returns (and ‘no risk’). There is no ambiguity here.

How Any of This Worked

The crypto world was a hive of scum and villainy long before SBF got involved. There were also plenty of idealists and well-meaning, honest people, as there usually are. Those were mostly not the people getting rich, or the ones running the exchanges.

The exchanges were licenses to print money proportional to their user bases, with users who were asking all the wrong questions and no regulators or consumer watchdogs keeping them in check, so it got ugly out there.

Across Asia, new cryptocurrency exchanges were popping up every month to service the growing gambling public. They all had deep pockets and an insatiable demand for young women.

They were hiring lots of people because they could afford to, and big headcounts signaled their importance. (93)

The main thing customers demanded of crypto exchanges was, and I can confirm this, the ability to take wildly irresponsible gambles with their crypto. As in customers highly valued 100:1 leverage, using $100 of Bitcoin to buy $10,000 of Bitcoin, willing to lose it all if the price dipped by 1% for a microsecond.

People think using this leverage is a good idea. They are always wrong. Michael Lewis seems confused about this as well.

Here was one example of the games that were played: Several of the Asian exchanges offered a Bitcoin contract with one hundred times leverage. Every now and then, some trader figured out that he could buy $100 million worth of bitcoin at the same time he sold short another $100 million worth of bitcoin—and put up only a million dollars for each trade. Whatever happened to the price of bitcoin, one of his trades would win and the other would lose. If bitcoin popped by 10 percent, the rogue trader collected $10 million on his long position and vanished—leaving the exchange to cover the $10 million he’d lost on his short. (1,736)

This is mostly a no good, very bad trade, because the exchange liquidates your account while it still has positive value, and by ‘liquidate’ the exchange meant ‘confiscate all of it and write you down to zero.’ Maybe they would then actually liquidate what was there. Maybe they wouldn’t. That was up to them.
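
To put rough, made-up numbers on why this is usually a bad trade: with $1 million of margin on each $100 million leg, the pair only beats the exchange if the price gaps past the liquidation point before the exchange can act.

```python
# Rough, made-up numbers for the paired 100x trade described above.
notional = 100_000_000   # $100M long plus $100M short
margin   = 1_000_000     # $1M posted per leg (100:1 leverage)
liq_move = 0.01          # losing leg liquidated around a 1% adverse move

def payoff_at_liquidation(move, gap):
    """Trader's net PnL at the moment the losing leg gets liquidated.

    move: absolute price change, e.g. 0.10 for a 10% move either way.
    gap:  True if the price jumps past the liquidation point before the
          exchange can liquidate, sticking it with the excess loss.
    """
    loser = margin  # exchange confiscates the full margin either way
    if gap:
        winner = notional * move      # winning leg captures the full jump
    else:
        winner = notional * liq_move  # liquidation happens near the 1% mark
    return winner - loser

print(payoff_at_liquidation(0.10, gap=True))   # 9,000,000: exchange eats the excess
print(payoff_at_liquidation(0.10, gap=False))  #         0: roughly flat, minus fees
```

In the smooth case you end up roughly flat minus fees, left holding a one-sided position you never wanted. All the value in the trade comes from jump risk past the liquidation threshold, which is what the three scenarios below are about.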

Are there versions of this trade that are good for you and bad for the exchange? Assuming, of course, that we completely ignore that it would be safe to presume all of this is market manipulation and multi-accounting and very much not legal, because lol legal, lol compliance department, this is crypto what are you even talking about.

Yes. If there were effectively no law but code, I can think of three.

You have reason to expect (or cause) Bitcoin to be even more volatile than usual, and for there to be a jump in price that gets your wrong-way trade liquidated at a negative account value. For example, if you are allowed to do this trade right before the decision on whether to allow a Bitcoin ETF, then the trade seems good.

You can size big enough to force the exchange to make big trades that impact the entire Bitcoin market. As in, Bitcoin goes up 75bps (0.75%) and they liquidate your short position (which was still worth $250k) to zero. But by doing that, they drive up the price of Bitcoin a lot more; after their buying has had its impact, you sell your Bitcoin, and you end up ahead. Not impossible back in the day if they let you scale up big enough.

You can provide liquidity directly into the liquidation, if the liquidation mechanism is dumb. So when your short account is liquidated, the exchange issues market buy orders bigger than their market can bear rather than doing something less stupid, your other account has various sell orders resting at higher prices, you fill a lot of the liquidation order at stupid prices, you quickly sell off the remainder before prices restabilize, and you laugh.

When I was trading crypto, I insisted on caring about things like not doing market manipulation, not spoofing orders and not trading when I had material non-public information (aka insider trading). This mostly made everyone else rather annoyed at me for being such a stickler for the laws of some alien world. They put up with it because, as the smoking man put it, they needed my expertise.

Wash trading was common.

Wash trading, as it was called, would have been illegal on a regulated US exchange, though the sight of it did not bother Sam all that much. He thought it was sort of funny just how brazenly many of the Asian exchanges did it. In the summer of 2019, FTX created and published a daily analysis of the activity on other exchanges. It estimated that 80 percent or more of the volume on the second- and third-tier exchanges, and 30 percent of the volume on the top few exchanges, was fake. Soon after FTX published its first analysis of crypto trading activity, one exchange called and said, We’re firing our wash trading team. Give us a week and the volumes will be real. The top exchanges expressed relief, and gratitude for the analysis, as, until then, lots of people assumed that far more than 30 percent of their volume was fake. (2,429)

I discovered this because I was trading on Binance, attempting to purchase Stellar for someone who wanted to purchase a bunch of Stellar when it was the #8 coin in the world (it is #23 now), and continuously failing to purchase any Stellar. There would be trading, I would issue a buy order where it was trading, and the entire market would mysteriously shift up. I would withdraw the order, things came back down. The market moved if you breathed on it, there was no way to get any size. It didn’t make sense until I realized that most of the trading was not real. The whole thing was a house of cards. I reported this back and the person said, yes, everyone knows there’s a lot of wash trading, I still want to buy Stellar. There’s only so much you can do.

Lewis offers a strange claim here:

Toward the end of 2018 the markets suddenly changed again. Spreads tightened dramatically, going from 1 percent to seven one-hundredths of a percent. (1,754)

I mean, no, they didn’t? I was at least kind of there, unrelatedly trading crypto. Throughout 2018 there were plenty of ways to trade rather large amounts for far less than a percent. Yes, spreads did tighten, and I am confident Alameda contributed to that tightening, but this was not an order of magnitude change.

Nor was it the final change. As time went on, spreads would tighten further. Alameda would be in a more and more competitive business. This was likely a prime motivation behind creating a crypto exchange, FTX, before Alameda lost its edge.

During this period, SBF relocated to Hong Kong, because he found that being in the room with other crypto people was very good for business. For example:

Weeks before he flew to Asia, one of the big Chinese crypto exchanges had frozen Alameda’s account with a bunch of money in it, for no obvious reason. Customer service hadn’t returned their calls. After meeting Sam in person, the exchange’s bosses handed him back his money. (1,782)

It also gave Sam access to a new labor pool, one eager to get into the game and do whatever it took without asking questions, and got everyone out of the United States.

Sam’s approach to hiring was to ensure no one ever knew what they were doing, so this new pool of talent worked out great.

Faced with a necessity, Sam turned it into a virtue. “It’s a moderately bad sign if you are having someone do the same thing they’ve done before,” he said. “It’s adverse selection of a weird sort. Because: why are they coming to you?” (1,974)

Spoken like an employer who does not know how to attract the best talent, and also someone who even Michael Lewis knows is bullshitting this time.

Another potential factor in this story that is not mentioned by Lewis, and also that did not come up in my previous post, is Tether. Patrick McKenzie has the theory that Alameda’s true main business was knowing what to say to American banks to allow Tether to move capital. That they were centrally engaged in fraud on this entire additional level.

This related thread of Patrick McKenzie’s is also fun. As is this one. Or at least, you can learn more about what I (and I assume Patrick) find fun.

The next step was building a crypto exchange.

Building FTX

Sam had a secret weapon in building FTX, which is that he had a programmer who could single-handedly (so the book says) program the whole thing better than most programming teams. Knowing what I know about exchanges and engineers, this claim is a lot less wild than it sounds. At core an exchange is about getting a small number of important things right, which FTX mostly did get right except where Sam chose to intentionally get them wrong. I totally believe that a two-person team, one to know what to do and the other to do it, could have pulled that off.

Before going down other funding routes, Sam tried to get CZ of Binance to pay.

The first decision CZ had to make was whether to pay Sam the $40 million he was asking for his cleverly designed futures exchange. After thinking it over for a few weeks in March 2019, CZ decided no—and then told his people to create a futures exchange on their own. Which struck Sam as such an ordinary and vaguely disappointing thing to do. “He’s kind of a douche but not worse than a douche,” said Sam. “He should be a great character but he’s not.” (1,893)

I do not really know what Sam was expecting. CZ presumably took a few weeks to think it over in order to keep optionality and get a head start, if he wasn’t already building such an exchange anyway, as seems rather plausible. Luckily for Sam, he still managed to execute better than CZ did.

The book describes CZ as a strangely conventional and unimaginative person, who created Binance and made it the dominant exchange on the planet, becoming one of the richest people on Earth, without any exceptional qualities or skills of note. Lewis makes it sound like CZ was one of many who started exchanges, and he was at the right place at the right time and things broke his way. I don’t know anything about CZ that isn’t common knowledge, but I do not buy this at all. Random people do not luck into that kind of situation. But that would be the story of a different book.

So now Sam needed money to build FTX. He had a killer programmer, but there is a lot more to an exchange than that. So it was time to fundraise.

The book talks about two ways they raised money: Selling FTT tokens, which are a cryptocurrency Sam created representing claims on a portion of FTX’s future revenue and thus effectively a form of preferred stock in FTX, and traditional VC fundraising.

The FTT story is told as a story of quick success. He starts out charging early people $0.10, then quickly that goes up quite a lot, some people get rich out of the gate, Sam is sad at what he gave away. VCs in this spot, and crypto people too, tell you not to be upset about that. You need big gains and a story to drive excitement, and you still have most of the company and a ton of the tokens. You have what you need. Why fret it? Instead, Sam says later in the book that he regretted creating the tokens and selling them so cheap, rather than regretting the tokens because he later used them in such crazy fashion that he blew up his whole empire.

The stock story is where SBF learns the basics of how VC works. In traditional Sam fashion, he noticed things were kind of arbitrary and dumb, then did not stop to think that they might not be as arbitrary and dumb as all that and there might be method to the madness even if it wasn’t fully optimal.

In early 2021, Jump Trading—not a conventional venture capitalist—offered to buy a stake in FTX at a company valuation of $4 billion. “Sam said no, the fundraise is at twenty billion,” recalled Ramnik. Jump responded by saying that they’d be interested at that price if Sam could find others who were too—which told you that the value people assigned to new businesses was arbitrary. (2,060)

No, this does not mean the valuation is arbitrary. That is especially true when, as was the case with FTX and most crypto companies, you politely decline to let anyone do proper due diligence, and the would-be investor is not even a traditional VC. What is going on is that Jump is quite reasonably deciding that at a fair price they would be willing to invest, but that they are not in a position to evaluate what is fair. So they outsource that to others, including to the lead, whoever that might be. If VCs are willing to costly signal, via their own investment, that a $20 billion valuation is reasonable, then Jump can be in as well.

Selling a new business to a VC was apparently less like selling a sofa than it was like pitching a movie idea. (2,063)

Well, yeah, that one is largely right. They care a ton about a good story.

Then we have Sam being peak Sam.

A guy from Blackstone, the world’s biggest private investment firm, called Sam to say that he thought a valuation of $20 billion was too high—and that Blackstone would invest at a valuation of $15 billion. “Sam said, ‘If you think it is too high, I’ll let you short a billion of our stock at a valuation of twenty billion,’ ” recalled Ramnik. “The guy said, ‘We don’t short stock.’ And Sam said that if you worked at Jane Street you’d be fired the first week.” (2,084)

Sam is very much the one who gets fired in the first week here. No, you are not obligated to flip coins every time you think you have a tiny edge, especially billion dollar ones with uncapped potential losses subject to potential rampant manipulation and huge adverse selection. Nor has Sam paused to consider the cost of capital. VCs demand edges well in excess of 33% before they are willing to invest.

It is crazy, completely insane, to think that a VC willing to invest in a start-up at $15 billion would want to be short for size at $20 billion, with no market or way to cover.

Another part of the puzzle is that Sam used Alameda’s resources to create FTX, and the first VC that Sam talked to figured this and a number of other things out.

Yet presumably Sam said this because he not only thought he was right, he thought he was so obviously right it made sense to say so over the phone. That tells you a lot about Sam’s attitude towards capital, sizing, risk and other related matters, and also in believing that he knows all and everyone else is an idiot, which is more, more, I’m still not satisfied.

The Sam Show

What about physically building FTX, as in their new headquarters in the Bahamas that was never finished?

There’s a bunch of great stories in the book about the architects who got brought in to make this very expensive building, who were given no guidance and desperately tried to figure out what their client wanted. All they got were three quick notes - the shape of an F, a side that looked like Sam’s hair, and a display area for a Tungsten Cube - which Sam didn’t even bother writing; instead someone else tried to imagine what Sam might ask for. It was all the Sam show, all about Sam all the time, very cult of personality or at least hair.

Even from their jungle huts people jockeyed for a view of him. The architects schemed the main building with glass walls and mezzanines that offered unlikely interior views of Sam. “It gives you an opportunity to catch glimpses of Sam no matter where you are sitting,” said Ian [the architect]. (2,569)

The list had been created by someone else inside FTX who’d tried to imagine what perhaps he himself might want in his new office buildings, were he Sam. Sam didn’t want his Jewfro on the side of the building. This other person had just imagined that “Jewfro on the side of the building” was the kind of thing Sam might find amusing. (2,612)

Another way he kept it all about him was not to give anyone else a title that represented what they were actually doing.

Sam then listed some reasons why this might be so: Having a title makes people feel less willing to take advice from those without titles. Having a title makes people less likely to put in the effort to learn how to do well at the base-level jobs of people they’re managing. They end up trying to manage people whose jobs they couldn’t do, and that always goes poorly. (2,644)

Having titles can create significant conflicts between your ego and the company. Having titles can piss off colleagues. (2,648)

If while reading this book you are not playing the game of noticing world-class levels of lacking self-awareness, you are missing out.

Nishad Singh failed to imagine the way things actually went south, but he did imagine a different highly plausible one that likely happened in a bunch of other Everett branches.

I’d soon be asking Nishad Singh for the same premortem I’d ask of others at the top of their psychiatrist’s org chart: “Imagine we’re in the future and your company has collapsed: tell me how it happened.” “Someone kidnaps Sam,” Nishad would reply immediately, before unspooling his recurring nightmare of Sam’s lax attitude toward his personal safety leading to the undoing of their empire.

…made for excellent ransom. “People with access to crypto are prime kidnap targets,” said Nishad. “I cannot understand why it doesn’t happen more.” (2,736)

People don’t do things. None of the people in the world thought to kidnap Sam, despite zero attempts to prevent this, so despite being perhaps the most juicy kidnap target the world has ever known, he remained un-kidnapped. The man had actual zero security, posed zero physical threat, had billions in crypto that was accounted for by literal no one including himself, and was a pure act utilitarian and effectively a causal decision theorist. That person pays all the ransom, and then shrugs it off and gets back to work.

Which is good, given they had no other decision making process whatsoever.

“It is unclear if we even have to have an actual board of directors,” said Sam, “but we get suspicious glances if we don’t have one, so we have something with three people on it.” When he said this to me, right after his Twitter meeting, he admitted he couldn’t recall the names of the other two people. “I knew who they were three months ago,” he said. “It might have changed. The main job requirement is they don’t mind DocuSigning at three a.m. DocuSigning is the main job.” (2,833)

There was no CFO. Why have a CFO? What would they do, keep track of how much money we have?

“There’s a functional religion around the CFO,” said Sam. “I’ll ask them, ‘Why do I need one?’ Some people cannot articulate a single thing the CFO is supposed to do. They’ll say ‘keep track of the money,’ or ‘make projections.’ I’m like, What the fuck do you think I do all day? You think I don’t know how much money we have?” (2,838)

You know what? I do indeed think you did not know how much money you had.

A Few Good Trades

Sam did a lot of trades. Some of them were good trades. Some of them were not.

That means sometimes you look dumb, and sometimes you look like a genius.

When the good ones can pay off by orders of magnitude, every VC and everyone in crypto knows that is a nice place to be.

For example, that Solana trade? Sweet.

Even if it wasn’t [true], Solana’s story was good enough that other people might see it that way and drive up the price of its token. Eighteen months later, Alameda owned roughly 15 percent of all Solana tokens, most purchased at twenty-five cents apiece. The market price of Solana had gone as high as $249, a thousand-times increase on what Sam had paid for the tokens, and the face value of Sam’s entire stash was roughly $12 billion. (2,346)

Makes up for a lot of other trades gone bad, provided you then sell some rather than double down. Yeah, I know. This is Sam we are talking about.

With a lot of effective control over Solana, Sam was then properly motivated to drive more hype and adoption. He even got to create a spin-off, a ‘Sam Coin’ called Serum, which was meant to be a claim on a portion of the fees for financial transactions on the Solana blockchain.

This was, presumably, a way to expropriate other holders of Solana. Instead of returning the fees to Solana holders, they would go to Serum holders, so suddenly there was another coin to distribute and manipulate and hype. Fun.

The only problem was that it worked too well.

Soon after Serum’s creation, its price had skyrocketed. Sam clearly had not anticipated this. He now had all these employees who felt ridiculously rich. (At least in theory, the value of Dan Friedberg’s Serum stash peaked, in September 2021, at over $1 billion.)

In Sam’s view, everyone at once became a lot less motivated to work fourteen-hour days. And so he did a very Sam thing: he changed the terms of the employees’ Serum.

In the fine print of the employee Serum contract, he’d reserved for himself the right to extend Serum’s jail time, and he used it to lock up all employees’ Serum for seven years. Sam’s employees had always known that he preferred games in which the rules could change in the middle.

They now understood that if he had changed the rules once, he might do it again. They became less enthusiastic about their Serum. “It was very unclear if you had it or if you didn’t have it,” said Ramnik, who had watched in irritation as Sam locked up a bunch of tokens that he’d bought with his own money on the open market before he joined FTX. “I guess you would know in seven years.” (3,980)

Lewis is so close to getting it. He understands that Sam will betray everyone around him whenever he can. He is altering the deal, pray that he does not alter it any further. Only from Sam’s perspective, there is no deal, there is only reality, which is what you can get away with.

Sam also had the advantage of being Sam and controlling Alameda and FTX.

He also had the bonus of not being so inspired to turn his paper gains into actual dollars (or stable coins, or liquid cryptos like BTC or ETH). Why liquidate what you can borrow against? That way Number Go Up.

Another trade he did was to take advantage of all the wash trading. The wash trading was so ingrained into how business was done, and done so poorly, that when SBF intercepted some of it, Binance’s employees failed to explain to their boss CZ what was even happening. Or, CZ credibly pretended not to understand.

It was a weird conversation—the CEO of one crypto exchange calling the CFO of another to inform him that, if he didn’t want to lose money on his new futures contract, he’d need to improve his market manipulation. Wei Zhou spoke to CZ, who called Sam for a brief though not unfriendly chat, after which Sam concluded that CZ still had not been told by his traders what had actually happened.

What happened was that Binance was doing its market manipulation via predictable market orders, so SBF would step in front of those orders, which made a bunch of money that came out of Binance’s pocket. Which Binance did not like.

Sam would occasionally consult others on what to do? I guess? Even Lewis realizes Sam does not actually care what anyone else thinks.

After talking to them, Sam could tell himself that he’d checked his judgment without having done so. (2,771)

Sam had invested $500 million in an artificial intelligence start-up called Anthropic, apparently without bouncing the idea off anyone else. “I said to Sam after he did it, ‘We don’t know a fucking thing about this company,’ ” said Ramnik. (2,818)

Putting the $500 million into Anthropic was arguably the most important decision Sam ever made. I do not know if investing in Anthropic was a good or bad move for the chances of everyone not dying, but chances are this was either a massively good or massively bad investment. It dwarfs in impact the rest of his EA activities combined.

Another good trade Sam noticed was that rich people dramatically underinvest in politics, whatever you think of Sam’s execution during what one might generously label his learning phase.

What surprised Sam, once he himself had unlimited sums of money, was how slowly rich people and corporations had adapted to their new political environment. The US government exerted massive influence on virtually everything under the sun and maybe even a few things over it. In a single four-year term, a president, working with Congress, directed roughly $15 trillion in spending. And yet in 2016, the sum total of spending by all candidates on races for the presidency and Congress came to a mere $6.5 billion. “It just seems like there isn’t enough money in politics,” said Sam. “People are underdoing it. The weird thing is that Warren Buffett isn’t giving two billion dollars a year.” (2,874)

We should not forget the original arbitrage trade with South Korea and Japan.

The good arbitrage trade that still doesn’t fully make sense was Modelbot. I see no reason for it not to have worked, but I also see no reason Sam could not have safely proved that it worked by starting small and then scaling up. Why all the drama? Then it stopped working as competition improved.

Even excluding the arbitrage trades, that track record is really good. Sam took a lot of shots, but I think not thousands of such shots. If you can make trades like Solana at $0.25 and early Anthropic, the rest of your trades can lose and you could still have very good alpha - provided you are responsible with your sizing and other risk management, and cut your losses when trades fail and properly consider liquidity issues. There would be no need to lie, or to do all the fraud and crime.

The problem was that Sam was the opposite of responsible with the sizing and risk management. He did not cut his losses when trades failed. He did not consider liquidity issues.

There is also the highly related issue of all the lying and fraud and crime.

The Plan

Behind every great fortune, they say, is a great crime. Certainly that was true for this one. Then, as The Godfather tells us, one needs to appear to go legit.

Sam’s plan was to present FTX as the responsible adults in the room.

It did help that the room was crypto, and filled with crypto exchanges. Many of which were indeed doing all the crimes. From the perspective of the United States, even the ones not doing all the crimes were still doing crimes anyway; the SEC has yet to explain to anyone what it would take to do crypto without doing crimes.

The biggest fish in the pond was CZ and Binance. Oh boy were they doing crimes. Their headquarters is intentionally nowhere. Their internal messages explicitly affirm that they are running an unlicensed securities exchange in America. And so on.

Which is why, when Sam took in the situation, he decided that Binance’s strategy was unsustainable. That the smart thing to do was to be the world’s most law-abiding and regulator-loving exchange. FTX could use the law, and the regulators, to drive crypto trading from Binance and onto FTX. If countries did not yet have the laws, a small army of FTX lawyers would help them to create them. (2,406)

Step one was to get CZ and Binance off the cap table, so no one evaluating FTX for its legitimacy would see them on the cap table doing all the crime. So Sam bought him out.

For the stake he’d paid $80 million to acquire, CZ demanded $2.2 billion. Sam agreed to pay it. Just before they signed the deal, CZ insisted, for no particular reason, on an extra $75 million. Sam paid that, too. (2,461)

If SBF was going to pretend FTX was worth that much, why shouldn’t CZ get paid accordingly? However, SBF made a big mistake, and left CZ with $500 million in FTT tokens rather than fully paying out in cash. It really should not have been that hard to not let that happen, given all the money available for spewing elsewhere. Ideally you sell a little of the equity you bought back, and use the proceeds from that.

The next step in reputation washing was a bunch of advertising.

And so when someone from the Miami Heat reached out to them to suggest that FTX buy their naming rights, for $155 million for the next nineteen years, Sam leapt at the chance. That the deal required the approval not just of the NBA but also of the Miami-Dade Board of County Commissioners, a government body, was a bonus. After that, they could point to a government entity that had blessed FTX.

Once their name was on an American stadium, no one turned down their money. They showered money across US pro sports: Shohei Ohtani and Shaquille O’Neal and LeBron James became spokespeople.

They paid Major League Baseball $162.5 million to put the company name on every umpire’s uniform. Having the FTX logo on the umpires’ uniforms, Sam thought, was more useful than having it on the players’ uniforms. In basically every TV shot of every Major League Baseball game, the viewer saw the FTX patch. “The NBA put us through a vetting process,” said FTX lawyer Dan Friedberg. “Major League Baseball just said okay!” (2,482)

It really is that easy. Once the Miami Heat opened the door, no one else asked any questions. Everyone wanted the money, and that was that, FTX on the umpires. Sure, why not?

Given how restrictive FTX US was, this helps explain why SBF was so eager to sponsor all the things. He was after a different goal.

A common theme of FTX’s sponsorships, like much of what FTX did, is that SBF would spew money in spectacular fashion, most of which was wasted, but he’d also have big wins. In this case, the win was Tom Brady.

But everywhere Sam went, people mentioned that they had heard of FTX because of Brady. Hardly anyone mentioned any of the other endorsers. “It was very clear which things had an effect and which did not,” said Sam. “For the life of me, I can’t figure out why this is. I still don’t know how to verbalize it.” (2,505)

No one has ever Not Done the Research more than Sam, who is confused why Tom Brady impacted people more than Brett Favre. I am not confused at all. Tom Brady is the quarterback everyone was already always talking about, the one everyone hated or perhaps loved, the cheater, the one with the rings, the GOAT, the one who got a girl in trouble and left her, and all that. I say quarterback, you say Brady.

Next up was getting into politics. As I noted in the good trades section, Sam noticed this was remarkably cheap. So since he had no time to waste but he definitely had money to waste, he got cracking. Gabe Bankman-Fried, Sam’s brother, got put in charge of the political operation.

Attention is not always people’s strong suit.

I really appreciate this, but it wouldn’t be good for me to take money from FTX, so I can’t—besides, I have found another source of funding.’ That other source of funding was Gabe, my brother.” (2,897)

Sam’s most famous political bet was on Carrick Flynn. The decision to back Flynn comes off in the book, if anything, massively stupider than it looked in real time.

Carrick Flynn’s most important trait, in Sam’s view, was his total command of and commitment to pandemic prevention. His second-most important trait was that he was an effective altruist. (2,927)

Flynn asked some fellow EAs what they thought about him running for Congress. As a political candidate he had obvious weaknesses: in addition to being a Washington insider and a bit of a carpetbagger, he was terrified of public speaking and sensitive to criticism. He described himself as “very introverted.” And yet none of the EAs could see any good reason for him not to go for it—and so he’d thrown his hat into the ring. (2,930)

I don’t blame Flynn, who was trying to do what he thought was the right thing and wasn’t legally allowed to coordinate with SBF’s efforts at all. But it seems utterly obvious at every step of the section on him that this man was never going to be in Congress. Yet they threw tons of money at him anyway, even after that money became the central campaign issue, and all the other candidates ganged up on Flynn over being a crypto stooge and a carpetbagger, and everyone in the district was complaining that their mailboxes were overflowing with campaign ads and they couldn’t take one more of Flynn’s spots on the television.

To be fair, there was real uncertainty the night before, no one knew for sure that it hadn’t worked. And yes, a champion is super valuable?

What did Sam learn?

[Sam] actually didn’t mind all that much. He’d learned a lesson: there were political candidates no amount of money could get elected. (2,959)

Well, yes. Also, you learned that when you stick your neck out like that you and those associated with you (read: EA) pay a lasting reputational cost. Sam did not seem to notice this.

No time to lose. Sam was off to meet Mitch McConnell, with everyone scrambling to get SBF into a presentable suit (he had been convinced to technically bring a suit, but had given no thought to its presentability; he let others handle such things), and with Sam working to make sure he did not call Mitch, who insisted on being called Leader, ‘dear leader’ by mistake. Which I admit sounds hard.

Also, check out the claim at the end here.

At that moment, Sam was planning to give $15–$30 million to McConnell to defeat the Trumpier candidates in the US Senate races. On a separate front, he explained to me, as the plane descended into Washington, DC, he was exploring the legality of paying Donald Trump himself not to run for president. His team had somehow created a back channel into the Trump operation and returned with the not terribly earth-shattering news that Donald Trump might indeed have his price: $5 billion. Or so Sam was told by his team. (2,972)

A $10 million donation to a McConnell dark money group One Nation has been confirmed.

I once again remind everyone that, while the price has likely gone up, the offer is probably still on the table if someone is bold enough to take it. Sure, now that he’s got the nomination in his sights it probably costs you $10 billion, but given the way people talk about what a Trump victory would look like, surely that is a small price to pay?

Meanwhile, Sam claimed to have infiltrated Trump’s team, and I love what they did with the place.

Sam’s team had come up with an idea—which, Sam claimed, was just then making its way to Trump himself. The idea was to persuade Trump to come out and say “I’m for Eric!” without specifying which Eric he was for. After all, Trump didn’t actually care who won. (2,979)

Trump actually did it, and it is plausible this switched which Eric won. Good show.

The Tragedy of FTT

The whole FTT situation still blows my mind.

I mean, I know it happened, I accept it. Still. Blows my mind every time.

The tragedy is there was absolutely no need for any of it. There was no need to keep flipping coins double-or-nothing for all the money on the assumption that the odds were in Sam’s favor.

Which they weren’t. Yet he kept flipping.

So here’s the basics, for those who don’t know.

Alameda owned a lot of FTT, which is effectively stock in FTX.

This FTT was highly illiquid. Trying to sell even a fraction of it would have collapsed the price, as everyone involved knew. Collapsing the price of FTT would then, as again everyone involved knew, cause a collapse of confidence in FTX, causing a run on the bank that was FTX. Which those involved had information to know would be quite a serious problem, were it to happen.

Every trader knows that you do not borrow heavily against your own illiquid stock, with loans recallable at any time and likely to be recalled when times get tough for your industry, to buy other illiquid and highly speculative things highly correlated to your stock and your industry.

Especially if you know you could not survive the resulting bank run because you’ve appropriated billions in customer funds to cover your other losses or even to keep making more illiquid investments, or to spend on random stuff. All while your exchange was a highly valued money printing machine that could easily raise equity capital.

And if you were still for some reason going to do that, you would at least know not to also give your biggest rival a huge chunk of that same illiquid token, sufficient to crash the market, and then actively try to drive him out of his home and bring regulators down on his head, in ways he can see you doing right there.

I mean, come on, that’s completely insane.

Except that is, by the book’s admission, exactly what happened.

It seemed perfectly natural for Alameda to control all the remaining FTT, and use it as collateral in its trading activity. Sam didn’t even try to hide what he was doing. (2,105)

Did other crypto firms accept this collateral, knowing or even worse somehow not knowing exactly what this implied? Why yes. Yes they did.

This created a highly volatile situation. A downward spiral waiting to happen.

Then crypto crashed, everyone including Alameda lost a lot of money, and it happened.

To try and prevent it from happening, Alameda had to actually repay its loans, or else the FTT it used as collateral was going to get liquidated. Then it had to bail out firms like Voyager. This was all on top of all the money Alameda and FTX had already spent and lost.

Sam still did not seem to notice that funding might be an urgent issue.

At their peak, they’d together been valued at roughly $7 billion. Now Ramnik was acquiring them for no more than $200 million. A pittance. Or so it seemed. Ramnik recently had asked Sam how much capital he should assume was available for possible acquisitions, and Sam had said, Just let me know if you get to a billion. (3,100)

So Sam kept poking the bear. Hence, The Vanishing.

The Vanishing

That’s what Michael Lewis calls the collapse of FTX.

The proximate cause was that Sam pissed off CZ, while very much not being in a position to call BS on anyone. As in doing things like this:

Sam later wrote up the message he’d tried to convey to them. “I love Dubai,” he said. But we can’t be in the same place as Binance. . . . This is for two reasons: first they are constantly devoting significant company resources to trying to hurt us; and second that they soil the reputation of wherever they are. I can’t emphasize this enough: in general I hear great things from other jurisdictions/regulators etc. about Dubai and the UAE [United Arab Emirates], except that there’s a constant refrain of: that’s the jurisdiction that accepted Binance, and so we don’t trust their standards. It was unclear to Sam, if Dubai decided to rid itself of CZ and his exchange, whether any country in which CZ would be willing to live would accept them. In these woods, CZ was the biggest bear and Sam seemed to be going out of his way to poke him. (3,154)

CZ was understandably upset, leaked a supposed balance sheet from Alameda that looked bad but not as bad as the full reality, and announced his intention to dump his FTT.

Caroline Ellison decided to respond by offering to buy all the FTT at $22, thinking this was a show of strength, except that for once crypto investors understood what that meant and acted accordingly.

Within twenty seconds of Caroline’s tweet came a rush to sell FTT by speculators who had borrowed money to buy it. The panic was driven by an assumption: if Alameda Research, the single biggest owner of FTT, was making a big show of being willing to buy a huge pile of it for $22, they must need for some reason to maintain the market price at 22. The most plausible explanation was that Alameda Research was using FTT as collateral to borrow dollars or bitcoin from others. “You don’t tell someone a price level like $22 unless you have a lot of confidence that you need that price,” the CEO of Gauntlet, Tarun Chitra, told Bloomberg News. By Monday night, the price of FTT had fallen from $22 to $7. (3,203)

Then the run on FTX began in earnest. Which would not have been a problem…

Ramnik could see that money was leaving FTX, but he didn’t view it as a big deal. The customers might panic and pull out all their money. But once they realized that there was nothing to panic about, they’d return, and their money would too. (3,221)

…except that FTX did not have the money to pay their customers, because Alameda had taken it and did not have the ability to give it all back.

Or have much idea how much they even had.

Though Caroline was in charge of Alameda Research, she seemed totally clueless about where its money was. She’d come onto the screen and announce that she had found $200 million here, or $400 million there, as if she’d just made an original scientific discovery. Some guy at Deltec, their bank in the Bahamas, messaged Ramnik to say, Oh, by the way, you have $300 million with… And it came as a total surprise to all of them!

That he’d been taken by surprise. He wondered: If these people knew there was a risk that they might not have enough money, why hadn’t they even bothered to figure out how much they had? They’d done nothing. (3,244)

Didn’t see it coming, I suppose. Did decide to use $8 billion in customer funds as if it was Alameda operating capital. Did not anticipate that the customers might ask for that money back all at once when they found out what was going on. Whoops.

Lewis rightly points out, near the end, that while many people did realize FTX was obviously up to no good, no one actually managed to figure out the exact no good they were up to until rather late in the game.

Even those who had expressed suspicion about Sam or FTX had failed to say the one simple thing you would say if you knew the secret they were hiding: the customers’ deposits that are supposed to be inside FTX are actually inside Alameda Research. (4,007)

They also couldn’t imagine that things could have been as chaotic and unaccounted for, or as blatant, as they were. It wasn’t necessary for the no good to be that no good. The borrowing against FTT tokens was bad enough on its own.

A lot of people, as FTX started to collapse, did the same calculation I did. It was quickly clear, as Sam went on Twitter to put on his best dog-drinking-coffee face and say ‘assets are fine,’ that there were only two possible worlds.

Either things really were fine - FTX was obviously a money machine, things not being fine would have meant a completely crazy level of recklessness and incompetence, and by saying they were fine SBF had gone way over the ‘this is fraud if things are not fine’ line and was very all-in. No one would be so stupid as to.

Or this was pure fraud, through and through, and FTX and SBF did all the crime.

That’s why so many others and I turned around on a dime - once we could rule out scenario #1, we knew we were in scenario #2.

One by one, people who wanted it to be one way got the piece of evidence that convinced them it was the other way.

Zane pinged Sam and asked, ‘Should I do damage control?’ ‘Yup,’ he said.” Zane then sent Sam a message asking three questions: “One, are we insolvent, two, did we ever lend out customer funds to Alameda, and three anything I didn’t ask that I need to know?” Sam didn’t reply—and then went totally silent on him. (3,344)

Still, Zane figured there was no way that FTX was in real trouble. It made no sense. The price of FTT shouldn’t have any effect on the value of the exchange, any more than the price of Apple stock should have on Apple’s iPhone sales. Just the reverse: the exchange’s revenues drove the value of FTT. “If FTT goes to zero, so what?” said Zane. The other reason it made no sense was that FTX had been so wildly profitable. “I know how much real revenue we were making: two bips [0.02 percent] on two hundred fifty billion dollars a month,” said Zane. “I’m like, Dude, you were sitting on a fucking printing press: why did you need to do this?” (3,347)
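
Taking Zane’s quoted figures at face value, the arithmetic really does describe a printing press:

```python
# Sanity check on Zane's figures as quoted above (taken at face value).
monthly_volume = 250e9          # $250 billion of volume per month
fee_rate = 0.0002               # two basis points
monthly_revenue = monthly_volume * fee_rate
print(f"${monthly_revenue:,.0f} per month")      # $50,000,000 per month
print(f"${monthly_revenue * 12:,.0f} per year")  # $600,000,000 per year, gross
```

That is gross revenue rather than profit, but it is the same order of magnitude as the numbers Constance cites below.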

Here it happens to Constance, on seeing the ‘balance sheet.’

The next document in her stack was a rough balance sheet of Alameda Research that differed in important ways from the rough balance sheet that had inspired the CoinDesk article now being credited with bringing down the entire business. It appeared to Constance that it had been hastily concocted either by Sam or Caroline, or maybe by both. Constance had first come across it the previous Tuesday, after FTX had ceased sending money back to its customers. “When I saw it, I told my team not to respond to external parties because I did not want them to lose their good name and reputation,” she said.

The list of assets included the details of hundreds of private investments Sam had made over the previous two years, apparently totaling $4,717,030,200. The liabilities now had a line item more important than everything else combined: $10,152,068,800 of customer deposits. More than $10 billion that was meant to be custodied by FTX somehow had ended up inside Sam’s private trading fund. The document listed only $3 billion in liquid assets—that is, US dollars or crypto that could be sold immediately for dollars.

“I was like, Holy shit,” she said. “The question is: Why?” It was the same question Zane had asked. “We had so profitable a business,” said Constance. “Our profit margin was forty to fifty percent. We made four hundred million dollars last year.” (3,478)
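
Taking the quoted numbers at face value, and ignoring whatever else was on the sheet, the arithmetic of the hole looks like this:

```python
# The quoted figures, at face value (illiquid investments counted at cost).
customer_deposits   = 10_152_068_800   # owed to customers
liquid_assets       = 3_000_000_000    # dollars or crypto sellable for dollars
private_investments = 4_717_030_200    # illiquid, worth whatever they turn out to be worth

print(f"liquid shortfall: ${customer_deposits - liquid_assets:,.0f}")
# liquid shortfall: $7,152,068,800
print(f"shortfall even counting illiquid at face: "
      f"${customer_deposits - liquid_assets - private_investments:,.0f}")
# shortfall even counting illiquid at face: $2,435,038,600
```

Even counting every illiquid private investment at its listed value, the customer liability alone exceeded the listed assets by billions.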

They may have made five hundred million, but even if they hadn’t stolen everyone’s money, that was not about to pay the expenses.

Constance herself had lost around $25 million. She still had $80,000 in an ordinary bank account she’d kept from her previous life, but otherwise she’d lost everything. (3,492)

This is the kind of thing that still blows my mind. You have stock in FTX, you have $25 million in liquid assets, the world is in front of you. And you chase FTX’s interest payments, and trust FTX so much, that you keep all your money on the exchange. What? That is completely crazy behavior. And yet, most employees tell exactly that story. It seems likely SBF/FTX insisted upon it, and Lewis either missed this or declined to mention it.

Because at $25 million while working at a crypto company, I’d hope I’d be doing things like keeping millions in gold in a secret vault. At minimum I’d have $5 million in an offshore bank account.

But even after that, Constance didn’t turn on Sam yet. She only turned on Sam when she realized that, compared to those around her, she’d been given an order of magnitude or two less stock than she should have gotten.

That’s when Constance’s feeling about Sam changed: when she saw how she’d actually been treated. (3,524)

Only then did she decide to spend the last chapter of the book helping Sam with logistics so she could try to get Sam to confess.

The Reckoning

As future prisoners, having been caught doing all the crime, the principals of FTX faced the prisoner’s dilemma.

The game theory of SBF: You have to commit to the bit.

That night, Nishad requested a meeting with just Gary and Sam. Once the three were alone in a room, Nishad asked, What happens if law enforcement or regulators reach out? What do you mean? Sam asked. How do we make sure we cooperate in prisoner’s dilemma? How do we all make sure we say the other ones are innocent? I don’t have any reason to think any one of us had criminal intent, said Sam. (3,291)

No, said Nishad. That’s not good enough. You need to talk to them. You need to tell them I had no clue. How could I know that? asked Sam. You are saying that I should say that you know nothing about something I know nothing about. How is that even possible? It makes no sense. But I didn’t know, said Nishad. Then say that, said Sam. It’s not going to work for me, said Nishad. Because there is code-based evidence of what I did. (3,295)

The game theory of everyone else? Not so much.

Caroline held this meeting on November 9 to explain the situation to her employees. As Patrick McKenzie says, 'what a document.'

Being Causal Decision Theory agents, and being somewhat more grounded in reality, the rest of the EAs all turned state’s witness.
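To spell out the game theory being gestured at, here is a minimal sketch with made-up payoffs (years in prison, lower is better); the names and numbers are purely illustrative. The point is that once the evidence exists, cooperating with prosecutors dominates no matter what your co-conspirators do, which is what everyone but Sam concluded.

```python
# Hedged illustration of the standard prisoner's dilemma structure, with
# made-up "years in prison" payoffs (lower is better). Keys are
# (your move, the others' move).
payoffs = {
    ("stay silent", "stay silent"): 5,   # everyone commits to the bit
    ("stay silent", "cooperate"):  20,   # they testify, you hold the bag
    ("cooperate",  "stay silent"):  1,   # you flip first, best individual outcome
    ("cooperate",  "cooperate"):    8,   # everyone flips, sentences reduced for all
}

for others in ("stay silent", "cooperate"):
    best = min(("stay silent", "cooperate"), key=lambda me: payoffs[(me, others)])
    print(f"If the others {others}, your best move is to {best}.")
# Cooperating dominates either way -- which is the move everyone but Sam made.
```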

Sam also had many other bizarre ideas about how any of this worked.

“At the end of the day, the deciding factor in the jurisdictional dispute is Gary,” said Sam, the night Zane left, “because he’s the only one who knows how to use a computer.” (3,374)

Sam was convinced to declare bankruptcy in America lest he instead have it declared for him by less friendly other parties, then tried to undo it, which you cannot do, then went around insisting that if he hadn't declared bankruptcy it would all have worked out.

Sam kept trying to explain how the money was all there, really, or close to it, and how all of this was merely a series of unfortunate 'fuck ups' and misunderstandings. Sam thought he had plenty of money, didn't keep track of things properly, everything seemed safe at the time, he was as surprised as anyone.

So far, so public domain. The weird thing is that Michael Lewis seems to buy it.

In Sam’s telling, FTX had switched off Alameda’s risk limits to make itself more appealing. The losses caused by this unsettling policy were in any case trivial. Ordinary trading loans made by FTX to Alameda constituted a small fraction of the losses to customers; on their own, they wouldn’t have posed a problem. The bulk of the customers’ money inside of Alameda that should have been inside FTX—$8.8 billion of it, to be exact—resided in an account that Alameda had labeled fiat@. The fiat@ account had been set up in 2019 to receive the dollars and other fiat currencies sent by FTX’s new customers. (3,543)

In Sam’s telling, the dollars sent in by customers that had accumulated inside of Alameda Research had simply never been moved. Until July 2021, there was no other place to put them, as FTX had no US dollar bank accounts. They’d been listed on a dashboard of FTX’s customer deposits but remained inside Alameda’s bank accounts. Sam also claimed that, right up until at least June 2022, this fact, which others now found so shocking, hadn’t attracted his attention. (3,555)

But even if you valued the contents of Alameda more rigorously, as Sam sort of did in his head sometimes, you could still easily get to $30 billion. The $8.8 billion that should not have been inside Alameda Research was not exactly a rounding error. But it was, possibly, not enough to worry about. As Sam put it: “I didn’t ask, like, ‘How many dollars do we have?’ It felt to us that Alameda had infinity dollars.” (3,562)

At that point, in Sam’s telling, Sam thought that Alameda might be in trouble. He decided to dig into its accounts on his own and understand the problem. By October, he had a clearer picture. It was only then that he could see that Alameda had been operating as if the $8.8 billion in customer funds belonged to it. And by then it was too late to do anything about it. (3,576)

This story does not actually make any sense, and of course is directly and blatantly contradicted by testimony at the trial. And yet Michael Lewis is intrigued:

I had a different question. It preoccupied me from the moment of the collapse: Where had the money gone? It was not obvious what had happened to it. (3,610)

At any rate, when I was done, my extremely naive money-in, money-out statement looked like this:

MONEY IN:

Net customer deposits: $15 billion

Investments from venture capitalists: $2.3 billion

Alameda trading profits: $2.5 billion

FTX exchange revenues: $2 billion

Net outstanding loans from crypto lenders (mainly Genesis and BlockFi): $1.5 billion

Original sale of FTT: $35 million

Total: $23,335,000,000

MONEY OUT:

Returned to customers during the November run: $5 billion

Amount paid out to CZ: $1.4 billion (Just the hard cash part of the payment. I’m ignoring the $500 million worth of FTT Sam also paid him, as Sam minted those for free. I’m also ignoring the $80 million worth of BNB tokens that CZ had used to pay for his original stake, worth $400 million at the time Sam returned them as part of his buyout of CZ’s interest.)

Sam’s private investments: $4.4 billion (The whole portfolio was $4.7 billion, but at least one investment, valued at $300 million, Sam had paid for with shares in FTX. He likely did the same with others, and so this number is likely bigger than it actually was.)

Loans to Sam: $1 billion (Used for political and EA donations. After his lawyers explained to him that taking out loans was smarter than paying himself a stock dividend, as he’d need to pay tax on the dividends.)

Loans to Nishad for same: $543 million

Endorsement deals: $500 million (This is likely generous too, as in some cases—Tom Brady was one of them—FTX paid its endorsers with FTX stock and not dollars.)

Buying and burning their exchange token, FTT: $600 million

Corporate expenses (salaries, lunch, Bahamas real estate): $1 billion

Total: $14,443,000,000 (3,626)
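As a quick sanity check on Lewis's own table, here is a minimal sketch (figures in millions of dollars, copied from the list above); the gap between the two totals comes out just under $8.9 billion, which is in the same ballpark as the $8.8 billion of customer money that, in Sam's telling, accumulated inside Alameda.

```python
# Minimal arithmetic check of Lewis's money-in / money-out table.
# All figures in millions of dollars, copied from the list above.
money_in = {
    "Net customer deposits": 15_000,
    "Venture capital investments": 2_300,
    "Alameda trading profits": 2_500,
    "FTX exchange revenues": 2_000,
    "Net loans from crypto lenders": 1_500,
    "Original sale of FTT": 35,
}

money_out = {
    "Returned to customers during the run": 5_000,
    "Paid out to CZ": 1_400,
    "Sam's private investments": 4_400,
    "Loans to Sam": 1_000,
    "Loans to Nishad": 543,
    "Endorsement deals": 500,
    "Buying and burning FTT": 600,
    "Corporate expenses": 1_000,
}

total_in = sum(money_in.values())    # 23,335 -- matches the stated $23,335,000,000
total_out = sum(money_out.values())  # 14,443 -- matches the stated $14,443,000,000
gap = total_in - total_out           # 8,892 -- roughly the $8.8 billion fiat@ hole

print(f"Money in:  ${total_in:,} million")
print(f"Money out: ${total_out:,} million")
print(f"Gap:       ${gap:,} million")
```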

The case has been made to me that this accounting is not as naive and stupid as it looks. I continue to mostly disagree with that. Lewis continues to double down.

There were some likely explanations for the missing money. The more you thought about them, however, the less persuasive they became. For example, Alameda traders might have gambled away $6 billion. But if they had, why did they all believe themselves to be so profitable, right to the end? I’d spoken to a bunch of them. Several were former Jane Streeters. They weren’t stupid. (3,462)

This is perhaps the most ‘naive guy’ thing in the entire book. Smart people can’t think they are making good trades when they are making bad ones and losing tons of money, right? And they wouldn’t lie to Michael Lewis about profitability, right?

The most hand-wavy story just then being bandied about was that the collapse in crypto prices somehow sucked all the money out of Sam’s World. And it was true that Sam’s massive holdings of Solana and FTT—and other tokens of even more dubious value—had crashed. They’d gone from being theoretically worth $100 billion at the end of 2021 to being worth practically zero in November 2022.

But Sam had paid next to nothing for these tokens; they had always been more like found money than an investment he’d forked over actual dollars to acquire. He’d minted FTT himself, for free. (3,647)

At their peak, Alameda was on (some form of electronic) paper worth $100 billion or so. We know that Alameda's edge in algorithmic trades had likely been going away as they faced stiffer competition; that source of profit was likely gone, yet they continued to borrow. What was the profitable trading Alameda was doing with all that capital?

They were getting long. Alameda was borrowing a bunch of capital from various lenders, and using it to get long and then get longer. That is where the money was going.

Then Number Went Down. Money gone.

Does Michael Lewis think the people at Three Arrows Capital or Voyager were stupid? The people who created and ran Luna? Enron? Lehman Brothers? Does this man not remember his own books?

He said it himself. Sam's entire empire was a leveraged - Lewis's word - bet on the success of crypto and the empire itself more generally. When you have no ethics, only a quest for Number Go Up (you know, for the common good), and therefore don't care that, sure, technically that was customer deposits right there, that leverages your bet all the more. As Number Go Down, rather than hedge, they doubled down, including via providing bailouts.

Leverage plus Number Go Down equals Broke Fi Broke, overwhelming other sources of profits.

Yes, a lot of their hoard of stuff they bought for pennies. But they then used that as collateral to borrow money and put more things into the hoard. All of which was correlated, and all of which was down. A lot.

Also, SBF was shoving money out the door in any number of other ways that the above numbers are missing, money was constantly being misplaced or stolen, a fire sale is not a cheap thing to partake in, and so on. So I do not think there is any mystery here.

We then get a fascinating story. Sam says combined losses from things like this were only $1 billion, but honestly, how would he even know, given everything.

But on that evening, Sam filled in one piece of this particular puzzle: FTX had lost a lot of money to hackers. To avoid encouraging other hackers, they’d kept their losses quiet. The biggest hacks occurred in March and April 2021. A lone trader had opened an account on FTX and cornered the market in two thinly traded tokens, BitMax and MobileCoin.

His purchases drove up the prices of the two tokens wildly: the price of MobileCoin went from $2.50 to $54 in just a few weeks. This trader, who appeared to be operating from Turkey, had done what he had done not out of some special love for MobileCoin. He’d found a flaw in FTX’s risk management software. FTX allowed traders to borrow bitcoin and other easily sellable crypto against the value of their MobileCoin and BitMax holdings.

The trader had inflated the value of MobileCoin and BitMax so that he might borrow actually valuable crypto against them from FTX. Once he had it he vanished, leaving FTX with a collapsing pile of tokens and a loss of $600 million worth of crypto.

The size of those hacks was an exception, Sam said. All losses due to theft combined had come to just a bit more than $1 billion. In all cases, Gary had quietly fixed the problem and they’d all moved on and allowed the thieves to keep their loot. “People playing the game,” was Sam’s description of them. (He really was easy to steal from.) (3,701)

That is not a hack. He did not steal the money. You gave it to him.

I know that people call such things hacks, like the ‘hack’ about going both ways using leverage earlier. Instead, Sam is right here. This is people playing the game. If your risk engine is stupid enough to let me use my MOBL at $54 to borrow and withdraw a bunch of actual BTC, treating the value of MOBL as real, then that is on the risk engine, whether or not there was also market manipulation involved. I felt the same way about Avi and the Mango trade - yes sure it is illegal and no one is crying for him when he gets arrested nor should they, but also suck it up and write better code, everyone, as is the crypto way.

FTX’s risk engine was by all accounts excellent, when dealing with coins that were liquid relative to the position sizes involved, and when the risk engine was set to on. FTX’s risk engine was sometimes turned off, and the risk engine clearly did not make reasonable adjustments for illiquid or obviously bubble-shaped coins.
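To make that failure mode concrete, here is a hedged sketch with made-up position sizes and function names, not anything from FTX's actual code: the difference between a risk engine that lends against raw mark-to-market collateral value and one that haircuts illiquid or obviously manipulated tokens.

```python
# Hypothetical sketch of the flaw described above: a risk engine that values
# collateral at the raw mark price lets a trader pump a thin token and then
# borrow "real" assets against the inflated value. Names and numbers are
# illustrative only.

def naive_borrow_limit(holdings, marks):
    """Borrow limit = full mark-to-market value of all collateral."""
    return sum(qty * marks[token] for token, qty in holdings.items())

def haircut_borrow_limit(holdings, marks, haircuts):
    """A saner engine: heavily discount illiquid tokens before lending against them."""
    return sum(qty * marks[token] * haircuts.get(token, 0.0)
               for token, qty in holdings.items())

# The trader holds a pile of MobileCoin he pumped from $2.50 to $54.
holdings = {"MOB": 12_000_000}   # 12 million tokens (made-up size)
marks = {"MOB": 54.0}            # manipulated mark price
haircuts = {"MOB": 0.05}         # treat a thin, pumped token as nearly worthless

print(naive_borrow_limit(holdings, marks))              # 648,000,000 -- roughly the scale of the $600M loss
print(haircut_borrow_limit(holdings, marks, haircuts))  # 32,400,000 -- far less damage
```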

This also was rather a big deal - FTX lost, by Sam’s own account, a full year’s profits. And that’s the official Sam story. The real story is inevitably much worse. I do not for a second buy that they only lost $1 billion total in hacks.

John Ray, the Hero We Need

John Ray is pretty great. He’s the guy who cleaned up the Enron mess, the guy you call when you have a world-class mess, and he’s the one they called in for FTX.

Suddenly there is a no-nonsense adult in the room who is having none of it, even when there is some of it worth having.

Michael Lewis tries his best to throw shade at him, but Lewis is too honest - too much a naive guy - for any of it to stick even a little.

As a legal matter, at 4:30 in the morning on Friday, November 11, 2022, Sam Bankman-Fried DocuSigned FTX into bankruptcy and named John Ray as FTX’s new CEO. As a practical matter, Sullivan & Cromwell lined up John Ray to replace Sam as the CEO of FTX, and then John Ray hired Sullivan & Cromwell as the lawyers for the massive bankruptcy. (3,772)

While Sam stewed, John Ray read up on him and this company he’d created. “It’s like, What is this thing?” said Ray. “Now it’s just a failure, but it was once some kind of business. What did you guys do? What’s the situation? Why’s this falling into bankruptcy so quickly?” He briefly considered the possibility that the failure was innocent: maybe they got hacked. “Then you start looking at the kid,” said Ray, the kid being Sam. “I looked at his picture and thought, There’s something wrong going on with him.”

Ray prided himself on his snap judgments. He could look at a person and in ten minutes know who they were, and never need to reconsider his opinion. The men he evaluated he tended to place in one of three bins in his mind: “good guy,” “naive guy,” and “crook.” Sam very obviously was not a good guy. And he sure didn’t seem naive. (3,783)

That’s a great skill if you are consistently correct. Based on the evidence presented, John Ray is almost never wrong about what type of guy he is dealing with.

He’d spoken only long enough with the other members of Sam’s inner circle to see them for what they were. Nishad Singh struck him as a naive guy. “He’s narrow,” said Ray. “It’s tech, tech, tech. There’s never a problem he can’t solve. He’s not going to steal money. He’s not going to do anything wrong. But he has no idea what’s going on around him. You ask him for a steak and he puts his head up the bull’s ass.”

The bankruptcy team had located Caroline Ellison by phone on the Saturday after Ray became FTX’s new CEO. She at least had been able to explain where some of the wallets storing the crypto were stashed. Other than that, she wasn’t much use. “She’s cold as ice,” said Ray. “You had to buy words by the vowel. An obvious complete fucking weirdo.” (3,797)

Nishad being a naive guy seems right to me, based on the rest of the book. He had more than enough information to know what was happening, but the Arc Words of the whole book are that people don’t see what they don’t look for, so there you go.

“There’s people that are born criminals, and there’re people that become criminals,” said Ray. “I think [Sam] became a criminal. The how and why he became a criminal I don’t know. I think maybe it takes an understanding of this kid and his parents.” (3,808)

John Ray is here to let you know that you are suffering, and pitying, too many fools.

Six days into his new job, Ray filed a report with the US Bankruptcy Court for the District of Delaware. “Never in my career have I seen such a complete failure of corporate controls and such a complete absence of trustworthy financial information as occurred here,” he wrote. Instead of grilling the people who had created the mess, Ray hired teams of hard-nosed sleuths—many of whom he’d worked with before. “Serious adults,” as he called them. The Nardello firm was a lot of former FBI guys. (Corporate motto: We find out.) (3,822)

First he got things under some semblance of order. He then moved on to looking for all the money.

That was in early 2023. By late April, John Ray’s head was on a swivel. “This is live-action,” he said. “There’s always something every hour.” One day, some random crypto exchange got in touch and said, By the way, we have $170 million in an account of yours: do you want it back? Another day, some random FTX employee called them out of the blue to say that he’d borrowed two million bucks from the company and wanted to repay the loan—of which, so far as Ray could see, there was no record. Of course, once you heard about one loan, you had to wonder how many others like it you’d never hear about. (3,836)

Several months into the hunt, Ray’s sleuths had discovered that “someone had robbed the exchange of four hundred fifty million.” They’d stumbled upon not the simple hack of November 2022 but the complicated BitMax and MobileCoin hacks of $600 million in the spring of 2021. (The dollar value changed with fluctuations in the price of the stolen crypto.) They’d tracked the hacker not to Turkey but Mauritius. “We have a picture of him going in and out of his house,” said Ray. He was pretty sure he was going to get most of that money back. “We believe there are a lot more of these,” said Ray. (3,844)

Lewis portrays Ray in all this as an archeologist, sifting through the ruins for cash and clues. Michael Lewis makes a point of all the money Ray and his team were going to bill FTX for the work they did. I look at what they had to deal with and how much money they ultimately rounded up, and I say they earned every penny. Part of earning that is that when you are Ray, you cannot rely upon or trust anyone who made the mess in the first place. Those are hostile information sources. If you want it done right, and you do, you have to figure it all out for yourself.

The best thing about Ray is his reaction to Lewis, as Lewis keeps trying to explain all the things he thinks he knows, and Ray keeps ignoring him. These are going to be some of the straight-up funniest scenes in the movie.

I demand that Ray be played by John Goodman; it would be so perfect.

At some point his team discovered that a Hong Kong subsidiary of Alameda Research called Cottonwood Grove had bought vast sums of FTT, for example. To the innocent archaeologist, it was evidence of Sam’s World artificially propping up the value of FTT. Ray didn’t know that FTX had been obligated to spend roughly a third of its revenues buying back and burning its token, and that Cottonwood Grove was the entity that did it. From my perch on the side of the dig, I would occasionally shout down to the guy running it my guess about the most recent find, but he’d just look up at me, pityingly. I was clearly a naive guy. (3,875)

Yes. Very clearly.

We have a fun clip of Ray spelling out for us the uselessness of Lewis to him.

The thing about Ray is that, in order to be so good at his job, he needs to have zero tolerance for pretty much anything. So when something actually is real, he can miss it.

The hundreds of private investments made by Alameda Research, for instance. When we first met, in early 2023, Ray went on about how fishy these all were. He had a theory about why Sam had thrown money around the way he had: Sam was buying himself some friends. “For the first time in his life, everyone ignores the fact that he’s a fucking weirdo,” said Ray. As an example, he cited the dollars Sam had invested in artificial intelligence companies. “He gave five hundred million bucks to this thing called Anthropic,” said Ray. “It’s just a bunch of people with an idea. Nothing.” (3,886)

Lewis would say this theory is ridiculous, and on its face it definitely is - everyone wanted to be Sam's friend - but also, how much got invested into OpenAI and Anthropic in the name of access? As in, friendship?

What Ray cannot see is that Anthropic was obviously a very good financial investment, because he does not know anything about AI. He certainly does not want to hear anything about existential risk, or whether Anthropic is helping or not helping with that concern.

A key question was: which crypto was worth anything, and which wasn't? For some reason Ray locked onto Serum, the offshoot of Solana.

And yet now, somehow, in John Ray’s book, the locked Serum was good shit. Primo crypto of the finest vintage imbibed by all gentlemen of good taste. And who knows?—maybe one day it will be. But if Serum was a token to be taken seriously, Sam Bankman-Fried and the world he created needed to be viewed in a different light. At Serum’s peak price, the stated market value of Sam’s stash of it was $67 billion. On November 7, 2022, Sam’s pile of mostly locked Serum was still “worth” billions of dollars. If even locked Serum had that kind of value, FTX was solvent right up to the moment it collapsed. And John Ray would have no grounds for clawing back money from any of the many lucky people on whom Sam Bankman-Fried had showered it. (3,988)

In case you didn’t know, well, not so much, here’s Serum.

Ray kept searching, Ray kept finding.

That would raise the amount collected to $9.3 billion—even before anyone asked CZ for the $2.275 billion he’d taken out of FTX. Ray was inching toward an answer to the question I’d been asking from the day of the collapse: Where did all that money go? The answer was: nowhere. It was still there. (4,000)

Caroline Ellison

Sam’s on-again, off-again, very-bad-idea relationship with Caroline Ellison is a key part of the story, because Caroline ended up effectively in charge of Alameda when the worst of the fraud went down. It does not seem like a coincidence that Caroline ended up in charge of Alameda, despite her not seeming like someone who should be given that kind of responsibility, as per (among other signs) her repeated observations that she was not up to the job.

Also, she did not have an ideal attitude with respect to willingness to do various crimes: the whole thing made her deeply uncomfortable, but she still did the crimes anyway. You want someone who does not do crimes, or who confines their crimes to contextually 'ordinary decent crimes' rather than outright frauds like stealing customer funds. Or, if you have decided that your plan is to do a lot of crimes - a plan I recommend strongly against - you want someone who is fine with doing lots of crime.

In any case, Caroline, it seems, exchanged long emails with Sam spelling out the arguments for exactly how obviously the two of them should not have been dating, with Sam offering points like this:

[Sam] began with a seriously compelling list, titled: ARGUMENTS AGAINST:

In a lot of ways I don’t really have a soul. This is a lot more obvious in some contexts than others. But in the end there’s a pretty decent argument that my empathy is fake, my feelings are fake, my facial reactions are fake. I don’t feel happiness. What’s the point in dating someone who you physically can’t make happy? I have a long history of getting bored and claustrophobic. This has the makings of a time when I’m less worried about it than normal; but the baseline prior might be high enough that nothing else matters. I feel conflicted about what I want. Sometimes I really want to be with you. Sometimes I want to stay at work for 60 hours straight and not think about anything else. I’m worried about power dynamics between us. This could destroy Alameda if it goes really poorly PR-wise. This combos really badly with the current EA shitshow I’m supposed to be, in some ways, adjudicating. I make people sad. Even people who I inspire, I don’t really make happy. And people who I date—it’s really harrowing.

It really fucking sucks, to be with someone who (a) you can’t make happy, (b) doesn’t really respect anyone else, (c) constantly thinking really offensive things, (d) doesn’t have time for you, and (e) wants to be alone half the time. There are a lot of really fucked up things about dating an employee.

This list was followed by another, briefer list, titled “ARGUMENTS IN FAVOR.” I really fucking like you. I really like talking to you. I feel a lot less worried about saying what’s on my mind to you than to almost anyone else. You share my most important interests. You’re a good person. I really like fucking you. You’re smart and impressive. You have good judgement and aren’t full of shit. You appreciate a lot of me for who I am. (2,126)

While I admit those are actually pretty strong arguments in favor, and in other circumstances would be very good reasons to date someone, the arguments against seem rather conclusive.

Caroline wanted a conventional love with an unconventional man. Sam wanted to do whatever at any given moment offered the highest expected value, and his estimate of her expected value seemed to peak right before they had sex and plummet immediately after. (2,155)

This is exactly what one would expect from the rest of Sam’s behavior in other contexts. Story checks out.

That I guess brings us to the psychiatrist? Who according to other reports had the entire firm including Sam hopped up on various pills in ways the book declines to mention?

It didn’t take a psychiatrist to see a pattern in Sam’s relationship with Caroline, but there happened to be one sitting in the middle of it. His name was George Lerner, and by late 2021 he might have been the world’s leading authority on the inner life of effective altruists. (2,208)

I know of at least two psychiatrists who were and are better experts on this than George Lerner. For example, have you met… Scott Alexander? Anyway.

Then the effective altruists started showing up—and when they did, George took a new and keener interest in his patients. Gabe Bankman-Fried, Sam’s younger brother, was the first, but hard on his heels came Caroline Ellison and others from Alameda Research. By the time Sam arrived, a year later, George was treating maybe twenty EAs. As a group, they eased a worry George had about himself: the limits to his powers of empathy. When ordinary people came to him with their ordinary feelings, he often found himself faking an understanding. The EAs didn’t need his empathy; the EAs thought that even they shouldn’t care about their feelings. In their single-minded quest to maximize the utility of their lives, they were seeking to minimize the effect of their feelings. “The way they put it to me is that their emotions are getting in the way of their ability to reduce their decisions to just numbers.” (2,238)

They were all completely and utterly sincere. They judged the morality of any action by its consequences and were living their life to maximize those consequences. (2,249)

You can’t do that. I mean, obviously you literally can, but professionally no, you really, really can’t treat Gabe and his brother Sam and his girlfriend and employee Caroline and everyone else in their entire social network. This is Dr. Nick territory. Then again, given what everyone involved wanted, maybe you can? It’s not like they wanted anything from him except drugs and practical advice on understanding other people. Maybe there is no conflict of interest here after all.

Also, who cares, given George didn’t even have a license in the Bahamas in the first place?

The Bahamas hadn’t granted George a medical license. His title was Senior Professional Coach. (2,633)

Perhaps he was taking inspiration from the (nominal) psychiatrist in Billions?

New and Old EA Cause Areas

In addition to his EA causes, SBF did have his own cause area, which was physical beauty.

He was against it.

[Anna Wintour of Vogue] looked like a million bucks, but her art, like all art, was wasted on Sam. (235)

“You start by making decisions on who you are going to be with based on how they look,” he said. “Then, because of that, you make bad choices about religion and food and everything else. Then you are just rolling the dice on who you are going to be.”

Anna Wintour, now that he thought of it, represented much of what he disliked about human beings. “There are very few businesses that I have strong moral objections to, and hers is one of them,” he said. “I actually have disdain for fashion. I have general disdain for the importance that physical attractiveness has, and this is one thing emanating out of that.” (348)

He also, as a child, investigated but dismissed the cause area of hell.

He found his way to a solution that offered temporary relief: only children suffered from this madness. Yes, kids believed in Santa. But grown-ups did not. There was a limit to the insanity. But then, a year or so later, a boy in his class said he believed in God.

“And I freaked out,” recalled Sam. “Then he freaked out. We both freaked out. I remember thinking, Wait a minute, do you think I’m going to hell? Because that seems like a big deal. If hell exists, why do you, like, care about McDonald’s? Why are we talking about any of this shit, if there is a hell. If it really exists. It’s fucking terrifying, hell.”

From the widespread belief in God, and Santa, Sam drew a conclusion: it was possible for almost everyone to be self-evidently wrong about something. “Mass delusions are a property of the world, as it turns out,” he said.

According to the company psychiatrist, the EAs really did only care about suffering.

“It doesn’t really start with people,” said George. “It starts with suffering. It’s about preventing suffering.” (2,257)

This attitude drives me bonkers. Yes, suffering is bad. It is the way we indicate to ourselves that things are bad. It sucks. Preventing it is a good idea. But when you think that suffering is the thing that matters, you confuse the map for the territory, the measure for the man, the math with reality. Combine that with all the other EA beliefs, set this as a maximalist goal, and you get… well, among other things, you get FTX. Also you get people worried about wild animal or electron suffering and who need hacks put in to not actively want to wipe out humanity.

If you do not love life, and you do not love people, or anything or anyone within the world, and instead wholly rely on a proxy metric? If you do not have Something to Protect? Oh no.

I mean, listen to yourselves, as George is describing you:

“A lot of EAs chose not to have kids,” said George. “It’s because of the impact on their own lives. They believe that having kids takes away from their ability to have impact on the world.” After all, in the time it took to raise a child to become an effective altruist, you could persuade some unknowably large number of people who were not your children to become effective altruists. “It feels selfish to have a kid. The EA argument for having a kid is that kid equals happiness and happiness equals increased productivity. If they can get there in their head, then maybe they have a kid.” (2,261)

“There are two parts of being EA,” said George. “Part one is the focus on consequences. Part two is the personal sacrifice.” (2,267)

That is saying, my own child’s only value would be if they too become an effective altruist, or if they increase my altruistic productivity. This is not an attitude compatible with life. If this is you, please halt, catch fire and seek help immediately.

This next point does not ring true - EAs totally complain about lack of dating opportunities - although I can totally buy that everyone else thought the EAs thought they were smarter than everyone else, and that, in context, they were technically right.

“Everyone is complaining about the lack of dating opportunities,” said George. “Except the EAs. The EAs didn’t care.”

The non-EAs thought the EAs thought they were smarter than everybody else. (2,628)

As much as I criticize EAs, I do it because they are worthy of criticism. They aspire to do better. Otherwise I wouldn’t waste my time. And when Lewis goes too far and misses the mark, there’s big ‘no one picks on my brother but me’ energy.

One day some historian of effective altruism will marvel at how easily it transformed itself. It turned its back on living people without bloodshed or even, really, much shouting. You might think that people who had sacrificed fame and fortune to save poor children in Africa would rebel at the idea of moving on from poor children in Africa to future children in another galaxy. They didn’t, not really—which tells you something about the role of ordinary human feeling in the movement. It didn’t matter. What mattered was the math. Effective altruism never got its emotional charge from the places that charged ordinary philanthropy. It was always fueled by a cool lust for the most logical way to lead a good life. (3,045)

We rationalists have long had a name for the ‘emotional charge’ that drives ordinary philanthropy. We call it ‘cute puppies with rare diseases. [LW · GW]’ There is a reason most philanthropy accomplishes nothing except fueling that emotional charge, which is that most decisions in most philanthropy are driven by fueling that emotional charge. The entire point, the founding principle, of EA, the core of what is good about EA, is to care about actually accomplishing the mission and cutting the enemy.

Can this be taken too far in various ways to the point where it loses its connection to reality? Does relying too much on the math and not enough on common sense and error checks lead to not noticing wrong conclusions are wrong? Oh yes, this absolutely happens in practice, the SBF group was not so extreme an outlier here.

But at least the crazy kids are trying. At all. They get to be wrong, where most others are not even wrong.

Also, future children in another galaxy? Try our own children, here and now. People get fooled into thinking that ‘long term’ means some distant future. And yes, in some important senses, most of the potential value of humanity lies in its distant future.

But the dangers we aim to prevent, the benefits we hope to accrue? They are not some distant dream of a million years from now. They are for people alive today. You, yes you, and your loved ones and friends and if you have them children, are at risk of dying from AI or from a pandemic. Nor are these risks so improbable that one needs to cite future generations for them to be worthy causes.

I fight the possibility of AI killing everyone, not (only or even primarily) because of a long, long time from now in a galaxy far, far away. I fight so I and everyone else will have grandchildren, and so that those grandchildren will live. Here and now.

If some other EAs made this change because the numbers (overwhelmingly, and in this case I believe correctly) said so, and would have done so even if the case was less overwhelmingly correct? So be it. We need some people like that. Others need to help with global poverty, and so they do. And they make a lot of mistakes there too, they take the math too seriously, they don’t consider second and third order effects properly, and so on. I could go on rants. But you know what? They try, damn it.

As opposed to ordinary philanthropy, where the EAs are right: It’s mostly kinda dumb.

They’d been doing this for only a year and already had been pitched nearly two thousand such projects. They’d handed out some money but in the process they’d concluded that conventional philanthropy was kind of dumb. Just to deal with the incoming requests—most of which they had no ability to evaluate—would require a big staff and lots of expense. Much of their money would end up being used on a vast bureaucracy.

And so they had just recently adopted a new approach: instead of giving money away themselves, they scoured the world for subject matter experts who might have their own, better ideas for how to give away money.

Over the previous six months, one hundred people with deep knowledge of pandemic prevention and artificial intelligence had received an email from FTX that said, in effect: Hey, you don’t know us, but here’s a million dollars, no strings attached. Your job is to give it away as effectively as you can.

The FTX Foundation, started in early 2021, would track what these people did with their million dollars, but only to determine if they should be given even more. “We try not to be very judgy once they have the money,” said Sam. “But maybe we won’t be reupping them.” (3,060)

They were moving fast, as Sam always did. “If you throw away a quarter of the money, that’s very sad,” he said at one point, “but if it allows you to triple the effectiveness of the rest, that’s a win.” (3,073)

This was a really good idea, in the world in which FTX had properly secured the money in order to give it away, and in which they had the proper infrastructure to do this responsibly. Even without either of those things, it was still a reasonable idea.

There were problems. People were unprepared to hand out a million dollars. A lot of decisions involving a lot of money got made, if not Brewster’s Millions style, in ways that were quite warping to the places the money got spread around. From what I heard, essentially any 19-year-old could get a $50,000 grant to move to Berkeley and think about AI safety, and there was a general failure to differentiate good and real and worthwhile efforts from others. The dynamics this created were an invitation to fake work, to predators and entryism and sociopaths, to hype and networks and corruption. If things had continued, that effect could have gotten worse.

As always, Sam was not considering second-order effects, and also not considering that efforts might backfire rather than be wasted. Nor did he pay enough attention to one of the most important questions traders always must ask on every trade they do, which is: What is the correct sizing?

Doing this trade with only a select few would have been great. Doing it with everyone who had an EA identity and a pulse was plausibly net negative.

Won’t Get Fooled Again

In the wake of publication, many people pointed out that Michael Lewis had been fooled, including this book report from David Roth. Michael Lewis did not take kindly to this, while confirming he had been fooled.

Michael Lewis: I’d love for the jury to read the book. Mark Cohen [Sam Bankman-Fried’s lawyer] said this to me: “You get up, you tell one story, and they tell the other story, and the question is which story the jury believes.” I’m in a privileged position to tell a fuller story, without leaving out any of the nasty details. If I were a juror, I would rather hear my story than either defense or prosecution.

I'm just going to tell you the story as I see it, and then leave you the discretion that then you lynch him, acquit him, or don't know what to think of him. I don't want the jury thinking I left anything else they needed to know.

There's something about Sam and the situation that pushes a lot of people's buttons and causes them to want to judge quickly. If I had five hours with the prosecutors, one of the things I would love to know is why they moved so fast. Sam’s lawyers had a guy on the inside and outside advising on when he might be extradited from the Bahamas. And no way did they think it was gonna happen as fast as it did, because they thought it would take the government much longer to figure out what the hell happened. 

I thought that was of a piece with the general social response: how quick people wanted to judge. So I thought, I'm going to be dealing with a reader who is going to be in that judgy kind of mood. 

He really thinks he included all the nasty details. The trial has made it clear this was not the case. Even my old post on FTX, among many other source options, also made it clear this was not the case.

As does the book. The book, despite conspicuously leaving out all the most blatant details, repeatedly shows SBF doing fraud.

Even more than that, the book describes a person who is so obviously doing all the fraud. It would not make any sense for the Sam portrayed here to not be doing all the fraud. That much is clear by the end of chapter one. Did Lewis read his own book?

The idea that the arrest was a ‘rush to judgment’ is laughable. Sure, they partly moved quickly because it was important to send a message - which it was - but also because no one else has ever more obviously been doing all the crime. There are so many distinct frauds right out in the open.

Also, seriously, what the hell, you want to poison the jury pool or even the actual jury? The interviewer points out how our system works, and Lewis says no, it shouldn’t work that way, people should read my book.

At one point, Lewis confirms that FTX violated the Foreign Corrupt Practices Act, as was revealed during the trial, and also, perhaps as tellingly, that it put a billion dollars into an account at an exchange whose accounts were regularly frozen by the local police for no reason.

And Lewis is wondering how the money could be missing.

Michael Lewis: This is what happened, to my knowledge—and I know this from the people in Hong Kong who orchestrated it—a Chinese exchange was routinely targeted by local Chinese police departments. They would find a pretext for freezing an account. In this case, they froze Alameda’s account. And Alameda had like a billion dollars in this account. 

There was an operation inside of FTX, or Alameda, whatever, in Hong Kong, to legitimately get the money back without having to go pay the ransom. And they pursued that, and it didn't work. So they went in and paid ransom to the Chinese police department directly to get the money released, and it was released. 

I actually would have loved to include it. But they weren’t running around bribing people to change laws. It was more telling about how the Chinese government works than anything about Sam. I then came back and confirmed it all with Sam. He said he knew he'd sent someone in to get the money out, but he wasn't completely sure how he'd gotten the money out. 

Next he admits that SBF committed bank fraud.

Michael Lewis: My impression was that the bank wasn't actually misled about who these people were. That it was a kind of fig leaf thing.

Q: But it’s still illegal to mislead a bank about the purpose of a bank account.

Michael Lewis: But nobody would have cared about it.

He seems to not understand that this does not make it not a federal crime? That ‘we probably would not have otherwise gotten caught on this one’ is not a valid answer?

Similarly, Lewis clearly thinks ‘the money was still there and eventually people got paid back’ should be some sort of defense for fraud. It isn’t, and it shouldn’t be.

And then there’s this:

Michael Lewis: No one shows me receipts. But no one suggested otherwise. I interviewed 10 people in Alameda. They just weren't lying. None of them could think of a big loss.

But I was kind of there through it. And there was no detectable change in Caroline, Nishad, Sam: their interactions or their demeanors. If they had this sharp, swift loss, they were really, really good with their poker faces. 

All right, that is a purer Naive Guy statement. They just weren’t lying, no sir.

Nor was Sam a liar, in Lewis’s eyes. Michael Lewis continued to claim, on the Judging Sam podcast, that he could trust Sam completely. That Sam would never lie to him. True, Lewis said, Sam would not volunteer information and he would use exact words. But Sam’s exact words to Lewis, unlike the words he saw Sam constantly spewing to everyone else, could be trusted.

It’s so weird. How can the same person write a book, and yet not have read it?

Even then, on October 1, Lewis was claiming he did not know if Sam was guilty. Not only that, he was claiming that many of the prosecutors did not know if Sam was guilty. And Lewis keeps saying that Sam himself really actually believes he is innocent, and for weeks after it was so over Sam really believed he’d be able to raise funds and turn it all around.

Lewis really did believe, or claimed to believe on his podcast, even in early October, that, absent one little mistake where $8 billion ended up in the wrong place, the rest of what happened was fine. That the rest of the story was not filled to the brim with all the crime.

Yet I totally believe that Lewis believed all of it. The man seems so totally sincere.

Then on October 9 Lewis said nothing that came out so far at the trial surprised him, other than the claim by one of Sam’s oldest friends that Alameda’s special code not only let them steal all the money, it also let them trade faster than their competitors, implemented on Sam’s orders. Everyone was constantly asking point blank about that, and Sam constantly said that wasn’t true. Even so, Lewis still repeated that in his model Sam doesn’t outright lie, he simply doesn’t tell you the answer that you needed to hear. He was still holding onto that even then.

When it is revealed that the FTX insurance fund to cover trading losses, the one Sam often talked about, was purely fake, literally the product of a random number generator written into the code and displayed to people to make them think there was an insurance fund? Because to Sam money is fungible, so why would there be an insurance fund? Still no change.

I still can’t process all that. Not really. Chewbacca is a Wookiee. It does not make sense.

He has Matt Levine on the podcast, and Matt Levine points out that the book made it that much clearer that Sam’s fraud unfolded exactly the way frauds always unfold, that there was nothing confusing here. Yeah, on some level Sam fooled himself that it would all work out (or, given it was Sam, that it had odds, and the words ‘safe’ and ‘risk’ were meaningless, so who cares?).

In later podcasts, Lewis did admit that a lot of the trial testimony was rather damning, and that he is confident that Sam will be convicted. But there is no sign he has figured out that Sam was doing all the lying and all the crime.

Conclusion

I mostly feel good closing the book on the events of SBF, Alameda and FTX. It all makes sense. We know what happened.

There are still a few mysteries, mostly centered on early Alameda. The story there, as outlined, continues not to make sense. Why was it so difficult to evaluate ModelBot? What was going on with the demands of those exiting? How did SBF get away with so little reputation damage? I still do want to know. Mostly, though, I am content.

My previous post on FTX holds up remarkably well, and could be used as a companion piece to this one. I was missing pieces of the puzzle, and definitely made mistakes, including the failure to buy up FTX debt for pennies on the dollar. But the rough outline there of what happened holds up, as does the discussion of implications for Effective Altruism.

I do not think that any of what happened was an accident. SBF was fortunate to get as far as he did before it all blew up. A blow up was the almost inevitable result. While SBF went off the rails, he went off the rails in ways that should have been largely predicted, and which make sense given who he was and then the forces and philosophical ideas that acted upon him.

This was not so unusual a case of fraud.

Nor was it an unusual case of what happens when a maximalist goal is given to a highly capable consequentialist system.

My expectation is that, in the unlikely scenario in which this attempted takeoff had fully succeeded and SBF had gained sufficient affordances and capabilities thereby, the misalignment issues involved would have almost certainly destroyed us all, or all that we care about. Luckily, that did not come to pass.

Other attempts are coming.

All of this has happened before.

All of this will happen again.

110 comments

Comments sorted by top scores.

comment by oumuamua · 2023-10-25T19:38:29.484Z · LW(p) · GW(p)

Murder is just a word. ... SBF bites all the bullets, all the time, as we see throughout. Murder is bad because look at all the investments and productivity that would be lost, and the distress particular people might feel

You are saying this as if you disagreed with it. In this case, I'd like to vehemently disagree with your disagreeing with Sam.

Murder really is bad because of all the bad things that follow from it, not because there is some moral category of "murder", which is always bad. This isn't just "Sam biting all the bullets", this is basic utilitarianism 101, something that I wouldn't even call a bullet. The elegance of this argument and arguments like it is the reason people like utilitarianism, myself included.

Believing this has, in my opinion, morally good consequences. It explains why murdering a random person is bad, but very importantly does not explain why murdering a tyrant is bad, or why abortion is bad. Deontology very easily fails those tests, unless you're including a lot of moral "epicycles".

comment by Matt Goldenberg (mr-hire) · 2023-10-25T20:06:36.575Z · LW(p) · GW(p)

The elegance of this argument and arguments like it is the reason people like utilitarianism, myself included.

Excessive bullet biting in the pursuit of elegance is a road to moral ruin. Human value is complex. To be a consistent agent in Deontology, Virtue Ethics, or Utilitarianism, you necessarily have to (at minimum) toss out the other two. But morally, we actually DO value aspects of all three - we really DO think it's bad to murder someone outside of the consequences of doing so, and it feels like adding epicycles to justify that moral intuition with reasons when there is indeed a deontological core to some of our moral intuitions. Of course, there's also a core of utilitarianism and virtue ethics that would all suggest not murdering - but throwing out things you actually value in terms of your moral intuitions in the name of elegance is bad, actually.

comment by Nathaniel Monson (nathaniel-monson) · 2023-10-26T01:39:47.943Z · LW(p) · GW(p)

This is more a tangent than a direct response--I think I fundamentally agree with almost everything you wrote--but I don't think virtue ethics requires tossing out the other two (although I agree both of the others require tossing out each other).

I view virtue ethics as saying, roughly, "the actually important thing almost always is not how you act in contrived edge case thought experiments, but rather how you habitually act in day to day circumstances. Thus you should worry less, probably much much less, about said thought experiments, and worry more about virtuous behavior in all the circumstances where deontology and utilitarianism have no major conflicts". I take it as making a claim about correct use of time and thought-energy, rather than about perfectly correct morality. It thus can extend to "...and we think (D/U) ethics are ultimately best served this way, and please use (D/U) ethics if one of those corner cases ever shows up" for either deontology or (several versions of) utilitarianism, basically smoothly.

comment by Matt Goldenberg (mr-hire) · 2023-10-31T20:40:17.626Z · LW(p) · GW(p)

I think virtue ethics is a practical solution, but if you just say "if corner cases show up, don't follow it", then you're doing something else other than being a virtue ethicist.

comment by evhub · 2023-10-25T04:36:08.712Z · LW(p) · GW(p)

But when you think that suffering is the thing that matters, you confuse the map for the territory, the measure for the man, the math with reality.

I like this quote a lot—I feel like it captures a lot of why I don't like suffering-focused ethics. It also seems very related to beliefs about the moral value of animals: my guess is that a wide variety of non-human animals can experience suffering, but very few can live a meaningful and fulfilling life. If you primarily care about suffering, then animal welfare is a huge priority, but if you instead care about meaning, fulfillment, love, etc., then it's much less clearly important.

comment by Lukas_Gloor · 2023-10-25T14:17:54.839Z · LW(p) · GW(p)

I also like the quote. I consider meaning and fulfillment of life goals morally important, so I'm against one-dimensional approaches to ethics.

However, I think it's a bit unfair that just because the quote talks about suffering (and not pleasure/positive experience), you then go on to talk exclusively about suffering-focused ethics.

Firstly, "suffering-focused ethics" is an umbrella term that encompasses several moral views, including very much pluralistic ones (see the start of the Wikipedia article or the start of this initial post).

Second, even if (as I do from here on) we assume that you're talking about "exclusively suffering-focused views/axiologies," which I concede make up a somewhat common minority of views in EA at large and among suffering-focused views in particular, I'd like to point out that the same criticism (of "map-and-territory confusion") applies just as much, if not more strongly, against classical hedonistic utilitarian views. I would also argue that classical hedonistic utilitarianism has had, at least historically, more influence among EAs and that it describes better where SBF himself was coming from (not that we should give much weight to this last bit).

To elaborate, I would say the "failure" (if we want to call it that) of exclusively suffering-focused axiologies is incompleteness rather than mistakenly reifying a proxy metric for its intended target. (Whereas the "failure" of classical hedonism is, IMO, also the latter.) I think suffering really is one of the right altruistic metrics.

The IMO best answer to "What constitutes (morally relevant) suffering?" is something that's always important to the being that suffers. I.e., suffering is always bad (or, in its weakest forms, suboptimal) from the perspective of the being that suffers. I would define suffering as an experienced need to change something about one's current experience. (Or end said experience, in the case of extreme suffering.)

(Of course, not everyone who subscribes to a form of suffering-focused ethics would see it that way – e.g., people who see the experience of pain asymbolia as equally morally disvaluable as what we ordinarily call "pain" have a different conception of suffering. Similarly, I'm not sure whether Brian Tomasik's pan-everythingism about everything would give the same line of reasoning as I would for caring a little about "electron suffering," or whether this case is so different and unusual that we have to see it as essentially a different concept.)

And, yeah, bringing to our mind the distinction between map and territory, when we focus on the suffering beings and not the suffering itself, we can see that there are some sentient beings ("moral persons" according to Singer) to whom things other than their experiences can be important.

Still, I think the charge "you confuse the map for the territory, the measure for the man, the math with reality" sticks much better against classical hedonistic utilitarianism. After all, take the classical utilitarian's claim "pleasure is good." I've written about this in a short form on the EA forum [EA(p) · GW(p)].  As I would summarize it now, when we talk about "pleasure is good," there are two interpretations behind this that can be used for motte-and-bailey. I will label these two claims "uncontroversial" and "controversial." Note how the uncontroversial claim has only vague implications, whereas the controversial one has huge and precise implications (maximizing hedonist axiology). 

(1) Uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is what we higher-order desire.

This uncontroversial claim is compatible with "other things also matter morally."

(For comparison, the uncontroversial interpretation for "suffering is bad" is "all else equal, suffering is always [at least a bit] objectionable, and often something we higher-order desire against.")

(2) Controversial claim: When we say that pleasure is good, what we mean is that we ought to be personal hedonist maximizers. This includes claims like "all else equal, more pleasure is always better than less pleasure," among a bunch of other things.

"All else equal, more pleasure is always better than less pleasure" seems false. At the very least, it's really controversial (that's why it's not part of the the uncontroversial claim, where it just says "pleasure is always unobjectionable.") 

When I'm cozily in bed half-asleep and cuddled up next to my soulmate and I'm feeling perfectly fulfilled in life in this moment, the fact that my brain's molecules aren't being used to generate even more hedons is not a problem whatsoever. 

By contrast, "all else equal, more suffering is always worse than less suffering" seems to check out – that's part of the uncontroversial interpretation of "suffering is bad." 

So, "more suffering is always worse" is uncontroversial, while "more intensity of positive experience is always better (in a sense that matters morally and is worth tradeoffs)" is controversial. 

That's why I said the following earlier on in my comment here: 

I would say the "failure" (if we want to call it that) of exclusively suffering-focused axiologies is incompleteness rather than mistakenly reifying a proxy metric for its intended target. (Whereas the "failure" of classical hedonism is, IMO, also the latter.) I think suffering really is one of the right altruistic metrics.

But "maximize hedons" isn't. 

The point to notice for proponents of an exclusively suffering-focused axiology is that humans have two motivational systems, not just the system-1 motivation that I see as being largely about the prevention of short-term cravings/suffering. Next to that, there are also higher-order, "reflective" desires. These reflective desires are often (though not in everyone) about (specific forms of) happiness or things other than experiences (or, as a perhaps better way to express this, they are also about how specific experiences are embedded in the world, their contact with reality.)

Replies from: evhub
comment by evhub · 2023-10-25T19:28:20.581Z · LW(p) · GW(p)

When I'm cozily in bed half-asleep and cuddled up next to my soulmate and I'm feeling perfectly fulfilled in life in this moment, the fact that my brain's molecules aren't being used to generate even more hedons is not a problem whatsoever.

Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about "meaning, fulfillment, love"—not just suffering, and not just pleasure either.

Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states (and then integrate over your anthropic prior, as in UDASSA [LW · GW]). But I think that function is extremely complex, dependent on one's entire lifetime, and not simply reducible to basic proxies like pleasure or pain.

I think I would also go a bit further, and claim that, while I agree that both pain and pleasure should be components of what makes a life experience good or bad, neither pain nor pleasure should be very large components on their own. Like I said above, I tend to think that things like meaning and fulfillment are more important.

Replies from: Lukas_Gloor, MichaelStJules
comment by Lukas_Gloor · 2023-10-25T20:05:44.246Z · LW(p) · GW(p)

Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about "meaning, fulfillment, love"—not just suffering, and not just pleasure either.

That seems like a misunderstanding – I didn't mean to be saying anything about your particular views!

I only brought up classical hedonistic utilitarianism because it's a view that many EAs still place a lot of credence on (it seems more popular than negative utilitarianism?). Your comment seemed to me to be unfairly singling out something about (strongly/exclusively) suffering-focused ethics. I wanted to point out that there are other EA-held views (not yours) where the same criticism applies the same or (arguably) even more.

comment by MichaelStJules · 2023-10-26T07:55:36.204Z · LW(p) · GW(p)

Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states

Isn't this incompatible with caring about genuine meaning and fulfillment, rather than just feelings of them? For example, it's better for you to feel like you're doing more good than to actually do good. It's better to be put into an experience machine and be systematically mistaken about everything you care about, i.e. that the people you love even exist (are conscious, etc.) at all, even against your own wishes, as long as it feels more meaningful and fulfilling (and you never find out it's all fake, or that can be outweighed). You could also have what you find meaningful changed against your wishes, e.g. made to find counting blades of grass very meaningful, more so than caring for your loved ones.

FWIW, this is also an argument for non-experientialist "preference-affecting" views, similar to person-affecting views. On common accounts of how we weigh or aggregate, if there are subjective goods, then they can be generated so as to outweigh the violation and abandonment of your prior values, even against your own wishes, if they're strong enough.

Replies from: evhub
comment by evhub · 2023-10-26T18:05:44.233Z · LW(p) · GW(p)

The way you describe it you make it sound awful, but actually I think simulations are great and that you shouldn't think that there's a difference between being in a simulation and being in base reality (whatever that means). Simple argument: if there's no experiment that you could ever possibly do to distinguish between two situations, then I don't think that those two situations should be morally distinct.

Replies from: MichaelStJules
comment by MichaelStJules · 2023-10-26T19:12:23.475Z · LW(p) · GW(p)

Well, there could be ways to distinguish, but it could be like a dream, where much of your reasoning is extremely poor, but you're very confident in it anyway. Like maybe you believe that your loved ones in your dream saying the word "pizza" is overwhelming evidence of their consciousness and love for you. But if you investigated properly, you could find out they're not conscious. You just won't, because you'll never question it. If value is totally subjective and the accuracy of beliefs doesn't matter (as would seem to be the case on experientialist accounts), then this seems to be fine.

Do you think simulations are so great that it's better for people to be put into them against their wishes, as long as they perceive/judge it as more meaningful or fulfilling, even if they wouldn't find it meaningful/fulfilling with accurate beliefs? Again, we can make it so that they don't find out.

Similarly, would involuntary wireheading or drugging to make people find things more meaningful or fulfilling be good for those people?

Or, something like a "meaning" shockwave, similar to a hedonium [? · GW] shockwave: quickly killing and replacing everyone with conscious systems that take no outside input and have no sensations (or only the bare minimum), other than what is needed to generate feelings or judgements of meaning, fulfillment, or love? (Some person-affecting views could avoid this while still matching the rest of your views.)

Of course, I think there are good practical reasons to not do things to people against their wishes, even when it's apparently in their own best interests, but I think those don't capture my objections. I just think it would be wrong, except possibly in limited cases, e.g. to prevent foreseeable regret. The point is that people really do often want their beliefs to be accurate, and what they value is really intended — by their own statements — to be pointed at something out there, not just the contents of their experiences. Experientialism seems like an example of Goodhart's law to me, like hedonism might (?) seem like an example of Goodhart's law to you.

I don't think people and their values are in general replaceable, and if they don't want to be manipulated, it's worse for them (in one way) to be manipulated. And that should only be compensated for in limited cases. As far as I know, the only way to fundamentally and robustly capture that is to care about things other than just the contents of experiences and to take a kind of preference/value-affecting view.

Still, I don't think it's necessarily bad or worse for someone to not care about anything but the contents of their experiences. And if the state of the universe was already hedonium or just experiences of meaning, that wouldn't be worse. It's the fact that people do specifically care about things beyond just the contents of their experiences. If they didn't, and also didn't care about being manipulated, then it seems like it wouldn't necessarily be bad to manipulate them.

comment by Pretentious Penguin (dylan-mahoney) · 2023-10-25T08:29:26.837Z · LW(p) · GW(p)

What thought process do you think goes into your guess that very few non-human animals can live a meaningful and fulfilling life? My guess is that many mammals and birds can live a meaningful and fulfilling life, though the phrase “meaningful and fulfilling” strikes me as hard to specify. I’m mostly thinking that having emotionally significant social bonds with other individuals is sufficient for a life to be meaningful and fulfilling, and that many mammals and birds can form emotionally significant social bonds.

Replies from: MichaelStJules
comment by MichaelStJules · 2023-10-26T07:53:14.935Z · LW(p) · GW(p)

And if emotionally significant social bonds don't count, it seems like we could be throwing away what humans typically find most important in their lives.

Of course, I think there are potentially important differences. I suspect humans tend to be willing to sacrifice or suffer much more for those they love than (almost?) all other animals. Grief also seems to affect humans more (longer, deeper), and it's totally absent in many animals.

On the other hand, I guess some other animals will fight to the death to protect their offspring. And some die apparently grieving. This seems primarily emotionally driven, but I don't think we should discount it for that fact. Emotions are one way of making evaluations, like other kinds of judgements of value.

EDIT: Another possibility is that other animals form such bonds and could even care deeply about them, but don't find them "meaningful" or "fulfilling" at all or in a way as important as humans do. Maybe those require higher cognition, e.g. concepts of meaning and fulfillment. But it seems to me that the deep caring, in just emotional and motivational terms, should be enough?

Replies from: Slapstick
comment by Slapstick · 2023-10-27T18:52:14.336Z · LW(p) · GW(p)

Interesting topic

I think that unless we can find a specific causal relationship implying that the capacity to form social bonds increases overall well-being capacity, we should assume that attaching special importance to this capacity is merely a product of human bias.

Humans typically assign an animal's capacity for wellbeing and meaningful experience based on a perceived overlap, or shared experience. As though humans are this circle in a Venn diagram, and the extent to which our circle overlaps with an iguana's circle is the extent to which that iguana has meaningful experience.

I think this is clearly fallacious. An iguana has their own circle, maybe the circle is smaller, but there's a huge area of non-overlap that we can't just entirely discount because we're unable to relate to it. We can't define meaningful experience by how closely it resembles human experience.

Replies from: MichaelStJules
comment by MichaelStJules · 2023-10-27T21:48:22.538Z · LW(p) · GW(p)

I would be surprised if iguanas find things meaningful that humans don't find meaningful, but maybe they desire some things pretty alien to us. I'm also not sure they find anything meaningful at all, but that depends on how we define meaningfulness.

Still, I think focusing on meaningfulness is also too limited. Iguanas find things important to them, meaningful or not. Desires, motivation, pleasure and suffering all assign some kind of importance to things.

In my view, either

  1. capacity for welfare is something we can measure and compare based on cognitive effects, like effects on attention, in which case it would be surprising if other vertebrates, say, had tiny capacities for welfare relative to humans, or
  2. interpersonal utility comparisons can't be grounded, so there aren't any grounds to say iguanas have lower (or higher) capacities for welfare than humans, assuming they have any at all.
comment by Slapstick · 2023-10-26T05:34:31.746Z · LW(p) · GW(p)

I would be interested in an explanation of how the quote captures why you don't like suffering focused ethics.

My (possibly naive) perspective is that people who downplay the relative moral significance of suffering just lack relevant experience when it comes to qualia states.

If someone hasn't experienced certain levels of suffering over certain durations, how can they reasonably judge that hundreds of billions of years worth of those experiences are relatively insignificant?

If you primarily care about suffering, then animal welfare is a huge priority, but if you instead care about meaning, fulfillment, love, etc., then it's much less clearly important.

It's hard for me not to interpret the word 'care' here as relating to attention, rather than intrinsic values. To me it seems like if someone's attention were calibrated such that they had a deep understanding of the implications of billions of animals having surgery done on them without anesthesia, while also understanding the implications of people potentially having marginally more meaningful lives, they would generally consider the animal issue to be more pressing.

I'm quite interested in what you might think I'm missing. I often find myself very confused about people's perspectives here.

comment by Nina Panickssery (NinaR) · 2023-10-27T13:09:00.349Z · LW(p) · GW(p)

my guess is that a wide variety of non-human animals can experience suffering, but very few can live a meaningful and fulfilling life. If you primarily care about suffering, then animal welfare is a huge priority, but if you instead care about meaning, fulfillment, love, etc., then it's much less clearly important

Very well put

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T02:23:11.494Z · LW(p) · GW(p)

Was there a reckoning, a post-mortem, an update, for those who need one? Somewhat. Not anything like enough.


I feel like you aren't giving enough credit here (and possibly just underestimating the strength of the effect?). IMO the EA community has had a reckoning, a post-mortem, an update, etc. far more than most social or political movements would (and do) in response to similar misbehavior from a prominent member. And for sufficiently large groups of people, there is no reckoning at all, because there is safety in numbers -- if a normal person commits a crime, other normal people who haven't committed crimes yet don't feel any pressure to be less normal.

I'm curious to operationalize forecasting questions based on this. Maybe something like "will there be another instance of a prominent EA committing fraud?" 

Replies from: Benito, Xodarap
comment by Ben Pace (Benito) · 2023-10-25T04:08:08.001Z · LW(p) · GW(p)

a post-mortem

What is this f***ing post-mortem? What was the root-cause analysis? Where is the list of changes that have been made to prevent an impulsive and immoral man like Sam taking tons of resources, talent and prestige from the Effective Altruism ecosystem and performing crimes of a magnitude for which a typical human lifetime is not long enough to make right? Was it due to the rapid growth beyond the movement's ability to vet people? Was it due to people in leadership being afraid to investigate accusations of misbehavior? What was the cause here that has been fixed?

Please do not claim that things have been fixed without saying concretely what you believe has been fixed. I have seen far too many people continue roughly business as usual. It sickens me.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T05:45:20.251Z · LW(p) · GW(p)

Do you agree with my comparative claim? EA vs. Democrats or Republicans, for example, or EA vs. Social Justice, or EA vs. idk pick some other analogous movement. 

I could make a bigass list of EA forum and LW posts arguing about how to interpret what happened and lashing out with various bits of blame here and there. Pretty much all of the lessons/criticisms Zvi makes in this post have been made multiple times before. Including by e.g. Habryka, whom I respect greatly and admire for doing so. But I don't feel motivated to make this list and link it here because I'm pretty sure you've read it all too; our disagreement is not about the list but whether the list is enough.

Notice, also, that I didn't actually say "The problem is fixed." I instead expressed doubt in the "not anything like enough" claim. I mused that it would be good to make some forecastable predictions. This was because I myself am unsure about what to think here. I have appreciated the discussions and attempted lesson-taking from the SBF disaster and I'm glad it's happening & I support doing more of it.

[I feel like this conversation is getting somewhat heated btw; if you like I'd be happy to have a phone or video call or just plan to meet up in person some day if you like. This is not an ask, just an offer just in case you'd like that.]

I think your description of what happened with SBF is accurate, and I think that it is significantly less likely to happen again given how the SBF thing blew up and how people have reacted to it. (I take it we disagree about this.)

That said, I'm maybe not as plugged in to the EA community as I should be? idk. I'd be curious to hear what your concerns are -- e.g. are there people who seem to you to be impulsive and immoral and on a path to gain in prestige and influence? Fair enough if you don't want to say who they are (though I'd love that) but I'd be curious to hear whether you have specific people in mind or are just saying abstractly that we are still vulnerable to this failure mode since we haven't fixed the root causes.

I think if I were to answer that question, I'd probably say something like "I know of one or two very sketchy people but they don't seem to have much influence. Then there is Anthropic, which seems to me to be a potential SBF-risk: lots of idealistic people, very mission-driven, lots of trust in leadership, making very important decisions about the fate of humanity. Could go very badly if leadership is bad in the ways SBF was bad. That said I don't expect that to be the case & would be curious to get concrete and make forecasts and hear evidence. I currently think Anthropic is net-positive in expectation, which is saying a lot since it's an AGI company and I think there's something like a 70% chance of unaligned AGI takeover by the end of this decade."

I don't feel confident about any of this.

 

Replies from: Zvi, Benito, Benito, Benito, clone of saturn
comment by Zvi · 2023-10-25T12:44:00.805Z · LW(p) · GW(p)

I think the following can be and are both true at once:

  1. What happened was not anything like enough.
  2. What happened was more than one would expect from a political party, or a social movement such as social justice.
Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T13:53:48.806Z · LW(p) · GW(p)

I certainly agree this is possible. Insofar as you think that's not only possible but actual, then thanks, that's a helpful clarification of your position. Had you said something like this above I probably wouldn't have objected, at least not as strongly, and instead would have just asked for predictions.

 

comment by Ben Pace (Benito) · 2023-10-25T18:30:01.165Z · LW(p) · GW(p)

Do you agree with my comparative claim? EA vs. Democrats or Republicans, for example, or EA vs. Social Justice, or EA vs. idk pick some other analogous movement.

What I expect for a movement of this scale or larger where a prominent figure has a scandal of this level is that many people wring their hands over it, some minor changes are made, lots of defensive PR actions are taken, but nobody is in a position to really fix the underlying problems and fixing them isn't really tried. Some substantive status allocation changes, trust is lowered, and then it continues on regardless. I currently cannot distinguish the Effective Altruism ecosystem from this standard story. Beyond FTX, who has been fired? Who has stepped forward and taken responsibility? Who has admitted to major fault?

I suspect the main thing that has gone better in the EA ecosystem is that there has been less actively criminal or unethical behavior in the cover-up and in the PR defense, while not actually fixing anything. That is a low bar, and this is still best described as "a failure".

I also think any of those movements/ecosystems would have a ton of energy for discussion and finger-pointing and attempts to use this issue to change people's status. Perhaps you are misreading "lots of bickering" as "there has been a reckoning". The EA Forum is filled with squabbling of this sort and is a substantial reason for why I do not read it. 

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T21:22:32.696Z · LW(p) · GW(p)

That's helpful thanks. Can you give an example of a movement of this scale or larger that had a scandal of this level, and reacted better than EA did?

IIRC Will MacAskill admitted to major fault, though I don't remember what he said and wasn't paying close attention. Here's the statement I remembered: A personal statement on FTX — EA Forum (effectivealtruism.org) [EA · GW]

I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.

...

If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.

As a community, too, we will need to reflect on what has happened, and how we could reduce the chance of anything like this from happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.

But that in no way justifies fraud. If you think that you’re the exception, you’re duping yourself.

We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility.

I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely.

I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.

I read this as an admission of guilt and responsibility. What do you wish he had said?

Replies from: Benito, habryka4, Zvi, Benito, habryka4
comment by Ben Pace (Benito) · 2023-10-25T22:23:55.192Z · LW(p) · GW(p)

IIRC Will MacAskill admitted to major fault...

I read this as an admission of guilt and responsibility. What do you wish he had said?

Does it matter what he said? What has he done? As far as I'm aware, he is mostly carrying on being a prominent figurehead of EA and a public intellectual.

Also this is hardly an admission of guilt. It primarily says "This seems bad and I will reflect on it." He didn't say 

"This theft of many thousands of people's life savings will forever be part of the legacy of Effective Altruism, and I must ensure that this movement is not responsible for something even worse in the future. I take responsibility for endorsing and supporting this awful person and for playing a key role in building an ecosystem in which he thrived. I have failed in my leadership position and I will work to make sure this great injustice cannot happen again and that the causes are rectified, and if I cannot accomplish that with confidence within 12 months then I will no longer publicly support the Effective Altruism movement."

comment by habryka (habryka4) · 2023-10-25T22:03:41.915Z · LW(p) · GW(p)

I read this as an admission of guilt and responsibility. What do you wish he had said?

I think it's a decent opening and it clearly calls for reflection, but you might notice that indeed no further reflection has been published, and Will has not published anything that says much about what lessons he has taken away from it all.

To be clear, as I understand the situation Will did indeed write up a bunch of reflections, but then the EV board asked him not to because that posed too much legal and PR risk. I agree this is some evidence about Will showing some remorse, but also evidence that the overall leadership does not care very much about people learning from what happened (at least compared to increased PR and legal risk). 

Replies from: pktechgirl, Zvi
comment by Elizabeth (pktechgirl) · 2023-10-26T02:25:32.040Z · LW(p) · GW(p)

I think this is a potentially large cost of the fiscal sponsorship umbrella. Will can't take on the risk personally or even for just his org, it's automatically shared with a ton of other orgs.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-10-28T23:18:19.479Z · LW(p) · GW(p)

That seems quite plausible. If that is his reasoning, then I think he should say that.

"I had planned to write in more details about my relationship to Sam and FTX, what actions I took, and in what ways I think my actions did and did not enable these crimes to take place; but due to concerns about risking the jobs of 100+ people I have chosen to not share information about this for the following 1-4 years (that is, until any legal and financial investigation of Effective Ventures has concluded, an org that I'm on the board of and that facilitated a lot of financial grantmaking for FTX). 

This obviously largely prohibits the Effective Altruism ecosystem from carrying out a collective fact-finding effort around those who were closely involved with Sam and FTX within the next 1-4 years, and substantially obstructs a clear fault analysis and post-mortem from occurring, and I expect that, as a result, many readers should correctly update that by default the causes of these problems will not be fixed.

I hope that this is not the death of the Effective Altruism ecosystem that I have worked to build over the last 10+ years, but I am not sure how people working and living in this ecosystem can come to trust that crimes of a similar magnitude will not happen again after seeing little-to-no accounting of how this criminal was funded and supported, nor any clear fixes implemented in the ecosystem to prevent such crimes from occurring in the future, and I sadly expect many good people will rightly leave the ecosystem because of it."

comment by Zvi · 2023-10-25T22:44:46.300Z · LW(p) · GW(p)

Pretty big if true. If EV is actively censoring attempts to reflect upon what happened, then that is important information to pin down.

I would hope that if someone tried to do that to me, I would resign. 

Replies from: habryka4
comment by habryka (habryka4) · 2023-10-25T22:48:35.427Z · LW(p) · GW(p)

That's what I told Will to do. He felt like that would be uncollaborative with broader EA leadership. 

comment by Zvi · 2023-10-25T22:43:14.999Z · LW(p) · GW(p)

I wish he had said (perhaps after some time to ponder) "I now realize that SBF used FTX to steal customer funds. SBF and FTX had a lot of goodwill, that I contributed to, and I let those people and the entire community down.

As a community, we need to recognize that this happened in part because of us. And I recognize that this happened partly because of me, in particular. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that. But we have been doing so in a way that we can now see can set people on extremely dark and destructive paths. 

No promise to do good justifies fraud, or the encouragement of fraud. We have to find a philosophy that does not drive people towards fraud. 

We must not see or treat ourselves as above common-sense ethical norms, and must engage criticism with humility. We must fundamentally rethink how to embody utilitarianism where it is useful, within such a framework, recognizing that saying 'but don't lie or do fraud' at the end often does not work.

I know others have worried that our formulation of EA ideas could lead people to do harm. I used to think this was unlikely. I now realize it was not, and that this was part of a predictable pattern that we must end, so that we can be a force for good once more.

I was wrong. I will continue to reflect in the coming months."

And then, ya know, reflect, and do some things.

The statement he actually made I interpret as a plea for time to process while affirming the bare minimum. Where was his follow-up?

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-26T02:53:49.328Z · LW(p) · GW(p)

Your proposal seems to me to be pretty similar to what he actually said, just a bit stronger here and there. Ben's proposal below, by contrast, is much stiffer stuff, mostly because of the last sentence.

comment by Ben Pace (Benito) · 2023-10-25T22:11:05.810Z · LW(p) · GW(p)

Can you give an example of a movement of this scale or larger that had a scandal of this level, and reacted better than EA did?

None come to mind. (To be clear, this doesn't seem cruxy for whether Effective Altruism has succeeded at reforming itself.)

I think instructive examples to look into would be things like:

  • How the justice system itself investigates crimes. I really like reading published reports where an investigator has been given a lot of resources to figure something out and then writes up what they learned. In many countries it is illegal to lie to an investigator when they are investigating a crime, which means that someone can go around and just ask what happened, then share that and prosecute any unlawful behavior.
  • How countries deal with their own major human rights violations. I am somewhat interested in understanding things like how the Truth and Reconciliation process went in South Africa, and also how Germany has responded post WWII, where I think both really tried to reform to ensure that the same thing couldn't happen again.
  • How companies investigate disasters. Sometimes a massive company will have a disaster or screw-up (e.g. the BP Oil Spill, the Boeing crashes, the Johnson & Johnson Tylenol poisoning incident) and will conduct a serious investigation and try to fix the problem. I'd be interested in reading successful accounts there, and how they went about finding the source of the problem and fixing it.
  • Religious reformations. The Protestant split was in response to a bunch of theological and pragmatic disagreements and also concerns of corruption (the clergy leading lavish lives). I'd prefer to not have a split and instead have a reform; I suspect there are other instances of major religious reform that went well and that one can learn lessons from (and of course also many to avoid).
comment by habryka (habryka4) · 2023-10-25T22:31:05.210Z · LW(p) · GW(p)

Can you give an example of a movement of this scale or larger that had a scandal of this level, and reacted better than EA did?

I think almost any large organization/company would have gone through a much more comprehensive fault-analysis and would have made many measurable improvements. For examples of this you could read through the history of Apple, or Tesla, or TSMC, or Intel. You could also look into the reforms that happened to lots of investment banks post 2008.

Companies are different than social movements, though my sense is that in the history of religion there have also been many successful reform efforts in response to various crises, which seems more similar.

As another interesting example, it also seems to me that Germany pretty successfully reformed its government and culture post World-War 2.

Replies from: Linch, Xodarap
comment by Linch · 2023-10-25T22:38:01.912Z · LW(p) · GW(p)

I think Germany is an extreme outlier here, fwiw; Japan, e.g., did far worse things and after WW2 cared more about covering up wrongdoing than about admitting fault. Further, Germany's government and cultural "reformation" was very much strongarmed by the US and other allies, whereas the US actively assisted Japan in covering up war crimes.

EDIT: See shortform elaboration: https://www.lesswrong.com/posts/s58hDHX2GkFDbpGKD/linch-s-shortform?commentId=ywf8R3CobzdkbTx3d [LW(p) · GW(p)] 

Replies from: Linch, daniel-kokotajlo
comment by Linch · 2023-11-04T06:43:35.237Z · LW(p) · GW(p)

Here are some notes on why I think Imperial Japan was unusually bad, [LW(p) · GW(p)] even by the very low bar set by the Second World War.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-26T02:47:38.700Z · LW(p) · GW(p)

Curious why you say "far worse" rather than "similarly bad" though this isn't important to the main conversation.

Replies from: Linch, daniel-kokotajlo
comment by Linch · 2023-10-26T03:26:39.621Z · LW(p) · GW(p)

I started writing a comment reply to elaborate after getting some disagreevotes on the parent comment, but decided that it'd be a distraction from the main conversation; I might expand on my position in an LW shortform at some point in the near future.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-26T04:00:43.425Z · LW(p) · GW(p)

Update: OK, now I agree. I encourage you to make a post on it.

comment by Xodarap · 2023-10-27T02:37:30.456Z · LW(p) · GW(p)

I think almost any large organization/company would have gone through a much more comprehensive fault-analysis and would have made many measurable improvements.

I claim YCombinator is a counter example [LW(p) · GW(p)].

(The existence of one counterexample obviously doesn't disagree with the "almost any" claim.)

comment by Ben Pace (Benito) · 2023-10-25T18:57:04.998Z · LW(p) · GW(p)

Notice, also, that I didn't actually say "The problem is fixed." I instead expressed doubt in the "not anything like enough" claim. I mused that it would be good to make some forecastable predictions. This was because I myself am unsure about what to think here. I have appreciated the discussions and attempted lesson-taking from the SBF disaster and I'm glad it's happening & I support doing more of it.

I feel like you're implicitly saying that something has really changed! I am finding it hard to think of a world where less would have changed after a scandal this big.

I think your description of what happened with SBF is accurate, and I think that it is significantly less likely to happen again given how the SBF thing blew up and how people have reacted to it.

It is commonly the case that the exact same failure will not repeat itself. I think that the broader civilization does not have the skill of preventing the same thing from happening again (e.g. if a second covid came along I do not much expect that civilization would do more than 2x better the second time around, whereas I think one could obviously do 10x-30x better), and so the Effective Altruism movement is doing less dysfunctionally on this measure, in that there will probably not be another $8B crypto fraud. I think this is primarily because many people have rightly lost trust in the Effective Altruism ecosystem and will not support it as much, not because the underlying generators that were untrustworthy have been fixed.

I mused that it would be good to make some forecastable predictions.

I don't know how to operationalize things here! I think there are underlying generators that give a lot of power and respect to people without vetting them or caring about obvious ways in which they are low-integrity, un-principled, and unethical. Most Effective Altruism orgs are in the non-profit sector. I think most people involved will not have the opportunity to have their low ethical standards be displayed so undeniably as someone involved in a crypto scam, and I do not expect there is going to be consensus about other scandals in the way there is about this one. So I don't really know what to forecast, other than "a list of people you and I both consider high-integrity will stop participating in the Effective Altruism movement and ecosystem within the next 5 years", but that's a fairly indirect measure.

I think future catastrophes will also not look the same as past catastrophes because a lot of the underlying ecosystem has changed (number of people, amount of money, growth of AI, etc). That's another reason why it's hard to predict things.

Replies from: Yarrow Bouchard
comment by [deactivated] (Yarrow Bouchard) · 2023-11-05T08:44:14.184Z · LW(p) · GW(p)

I think there are underlying generators that give a lot of power and respect to people without vetting them or caring about obvious ways in which they are low-integrity, un-principled, and unethical. ... I think most people involved will not have the opportunity to have their low ethical standards be displayed so undeniably...

Would you like to say more about this? I'm curious if there are examples you can talk about publicly.

comment by Ben Pace (Benito) · 2023-10-25T19:07:10.195Z · LW(p) · GW(p)

I feel like this conversation is getting somewhat heated btw; if you like I'd be happy to have a phone or video call or just plan to meet up in person some day if you like. This is not an ask, just an offer just in case you'd like that.

Thanks. I don't feel much in the way of anger toward you personally, I'm primarily angry about the specific analysis of the situation that you are using here (and which I expect many others share). I still like you personally and respect a bunch of your writing on AI takeoff (and more). I don't currently feel like asking you to talk about this offline. (I'd be open to dialoguing more about it if you wanted that because I like discussing things in dialogue in general, but I'm not asking for that.)

comment by clone of saturn · 2023-10-26T02:25:07.164Z · LW(p) · GW(p)

I guess I should know better by now, but it still astonishes me that EAs can set such abysmally low standards for themselves while simultaneously representing themselves as dramatically more ethical than everyone else.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-26T03:00:52.133Z · LW(p) · GW(p)

...compared to what? Seriously what groups of people are you comparing to? Among the people in my extended network who see themselves as altruists, EAs seem to hold themselves and each other to the highest standards, and also seem to actually be more ethical than the rest. My extended network consists of tech company workers, academics, social justice types, and EAs. (Well and rationalists too, I'm not counting them.)

I agree this is a low bar in some absolute sense -- and there are definitely social movements in the world today (especially religious ones) that are better in both dimensions. There's a lot of room for improvement. And I very much support these criticisms and attempts at reform. But I'm just calling it like I see it here; it would be dishonest grandstanding of me to say the sentence Zvi wrote in the OP, at least not without giving additional context.

Replies from: clone of saturn
comment by clone of saturn · 2023-10-26T20:07:19.456Z · LW(p) · GW(p)

I think most organizations the size of EA have formal accountability mechanisms that attempt to investigate claims of fraud and abuse in some kind of more-or-less fair and transparent way. Of course, the actual fairness and effectiveness of such mechanisms can be debated, but at least the need for them is acknowledged. The attitude among EAs, on the other hand, seems to be that EAs are all too smart and good to have any real need for accountability.

comment by Xodarap · 2023-10-26T18:07:37.735Z · LW(p) · GW(p)

IMO the EA community has had a reckoning, a post-mortem, an update, etc. far more than most social or political movements would (and do) in response to similar misbehavior from a prominent member

As a reference point: fraud seems fairly common [EA · GW] in YCombinator-backed companies, but I can't find any sort of postmortem, even about major cases like uBiome, where the founders are literally fugitives from the FBI.

It seems like you could tell a fairly compelling story that YC pushing founders to pursue risky strategies and flout rules is upstream of this level of fraudulent behavior, though I haven't investigated closely.

My guess is that they just kind of accept that their advice to founders is just going to backfire 1-2% of the time.

Replies from: Zvi, habryka4
comment by Zvi · 2023-10-26T18:26:35.874Z · LW(p) · GW(p)

I would be ecstatic to learn that only 2% of Y-Combinator companies that ever hit $100mm were engaged in serious fraud, and presume the true number is far higher.

And yes, YC does do that and Matt Levine frequently talks about the optimal amount of fraud (from the perspective of a VC) being not zero. For them, this is a feature, not a bug, up to a (very high) point.

I would hope we would feel differently, and also EA/rationality has had (checks notes) zero companies/people bigger than FTX/SBF unless you count any of Anthropic, OpenAI and DeepMind. In which case, well, other issues, and perhaps other types of fraud. 

Replies from: Xodarap
comment by Xodarap · 2023-10-27T02:45:37.318Z · LW(p) · GW(p)

Oh yeah, just because it's a reference point doesn't mean that we should copy them.

comment by habryka (habryka4) · 2023-10-27T02:47:08.756Z · LW(p) · GW(p)

The total net-fraud from YC companies seems substantially smaller than the total net-fraud from EA efforts, and I think a lot more people have been involved with YC than EAs, so I don't really think this comparison goes through. 

Like, EA has defrauded much more money than we've ever donated or built in terms of successful companies. Total non-fraudulent valuations of YC companies are in the hundreds of billions, whereas total fraud is maybe in the $1B range? That seems like a much more acceptable ratio of fraud to value produced.

Replies from: Xodarap
comment by Xodarap · 2023-12-02T02:15:38.361Z · LW(p) · GW(p)

EA has defrauded much more money than we've ever donated or built in terms of successful companies

 

FTX is missing $1.8B. OpenPhil has donated $2.8B.

Replies from: GeneSmith, habryka4
comment by GeneSmith · 2023-12-02T02:33:58.047Z · LW(p) · GW(p)

Also, I don't think it makes sense to characterize FTX's theft of customer funds as "EA defrauding people". SBF spent around $100 million on charitable causes and billions on VC investments, celebrity promotions, interest payments to crypto lenders, Bahamas real estate, and a bunch of other random crap. And Alameda lost a bunch more buying shitcoins that crashed.

To say that EA defrauded people because FTX lost money is to say that of the 8 billion or whatever Alameda was short, the $100 million spent on EA priorities is somehow responsible for the other 7.9 billion. It just doesn't make any sense.

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-02T03:37:58.173Z · LW(p) · GW(p)

I think it makes sense to say "EAs defrauded people". Sam was clearly an EA, and he mostly defrauded people in pursuit of an EA mission, which he thought was best optimized by increasing the valuation of FTX. 

Replies from: GeneSmith
comment by GeneSmith · 2023-12-02T04:23:08.645Z · LW(p) · GW(p)

Virtually no one in EA would have approved of the manner by which Sam sought to make FTX more valuable. So I guess I don't really see it as a failure of the EA movement or its morals. If someone is part of a movement and does something that the movement is explicitly against, is it the movement's fault?

I also don't think people put their money in FTX because they wanted to help EA. They mostly put money in FTX because they believed it was a reputable exchange (whether that was because it was endorsed by Tom Brady or Steph Curry or any number of other people) and because they wanted to make money on Crypto.

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-02T10:16:24.166Z · LW(p) · GW(p)

Virtually no one in EA would have approved of the manner by which Sam sought to make FTX more valuable.

I talked to many people about Sam doing shady things before FTX collapsed. Many people definitely endorsed those things. I don't think they endorsed stealing customer deposits, though honestly, my guess is a good chunk of people would have endorsed it if it hadn't resulted in everything exploding (and if it was just a temporary dip into customer deposits).

I don't understand the second paragraph. Yes, Sam tricked people into depositing money onto his exchange, which he then used to fund a bunch of schemes, mostly motivated via EA and with the leadership team being substantially populated by EA people. Of course the customers didn't want to help EA, that's what made it a fraud. My guess is I am misunderstanding something you are trying to communicate.

Replies from: GeneSmith
comment by GeneSmith · 2023-12-03T01:35:44.187Z · LW(p) · GW(p)

A simpler way to phrase my question is "If you steal 8 billion and spend 7.9 billion on non-EA things, did you really do it for EA?"

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-03T01:39:04.814Z · LW(p) · GW(p)

Well, it's more "you steal 8 billion dollars and gamble them on all-or-nothing bets where if you win you are planning to spend them on EA". I think that totally counts as EA.

Like, Sam spent that money in the hopes of growing FTX, and he was building FTX for earning-to-give reasons.

comment by habryka (habryka4) · 2023-12-02T03:36:32.248Z · LW(p) · GW(p)

That is an interesting number; however, I think it's a bit unclear how to think about defrauding here. If you steal $1,000, and then I sue you and get that money back, it's not like you "stole zero dollars".

I agree it matters how much is recoverable, but most of the damage from FTX is not about the lost deposits specifically anyways, and I think the correct order of magnitude of the real costs here is probably greater than the money that was defrauded, though I think reasonable people can disagree on the number here. Similarly I think when you steal a $1000 bike from me, even if I get it back, the economic damage that you introduced is probably roughly on the order of the cost of the bike.

I also don't believe the $1.8B number. I've been following the reports around this very closely, and every few weeks some news article claims vastly different fractions of funds have been recovered. While not a perfect estimator, I've been using the price at which FTX bankruptcy claims are trading, which I think is currently around 60%, suggesting more like $4B missing (claims on Alameda Research are trading at 15%, driving that number down further, but I don't know what fraction of the liabilities were Alameda claims).
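(As a rough illustration of the claims-price estimator described above, not the commenter's actual calculation: the sketch below just multiplies an assumed total claim amount by the discount at which the claims trade. The ~$9B figure is an assumption for illustration only.)

```python
# Back-of-the-envelope sketch of the claims-price estimator described above.
# total_claims_usd is an assumed, illustrative figure, not one taken from the
# comment or from the bankruptcy filings.
def implied_shortfall(total_claims_usd: float, claim_price: float) -> float:
    """Estimate unrecovered funds from the price (0-1) at which bankruptcy claims trade."""
    return total_claims_usd * (1.0 - claim_price)

ftx_customer_claims = 9e9  # assumption: roughly $9B of customer claims
print(implied_shortfall(ftx_customer_claims, 0.60))  # ~3.6e9, i.e. roughly $4B missing
```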

Replies from: Xodarap
comment by Xodarap · 2023-12-02T17:36:16.208Z · LW(p) · GW(p)

Yep that's fair, there is some subjectivity here. I was hoping that the charges from SDNY would have a specific amount that Sam was alleged to have defrauded, but they don't seem to.

Regarding $4B missing: adding in Anthropic gets another $4B on the EA side of the ledger, and Founders Pledge another $1B. The value produced by Anthropic is questionable, and maybe negative of course, but I think by the strict definition of "donated or built in terms of successful companies" EA comes out ahead.

(And OpenAI gets another $80B, so if you count that then I think even the most aggressive definition of how much FTX defrauded is smaller. But obviously OAI's EA credentials are dubious.)

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-02T18:41:47.073Z · LW(p) · GW(p)

Regarding $4B missing: adding in Anthropic gets another $4B on the EA side of the ledger, and Founders Pledge another $1B.

Well, I mean, I think making money off of building doomsday machines goes on the cost side of the ledger, but I do think it applies to the specific point I made above and I think that's fair. Anthropic is quite successful at a scale that is not that incomparable to the size of the FTX fraud.

Replies from: Xodarap
comment by Xodarap · 2023-12-03T00:13:30.352Z · LW(p) · GW(p)

We have Wildeford's Third Law: "Most >10 year forecasts are technically also AI forecasts".

We need a law like "Most statements about the value of EA are technically also AI forecasts".

comment by Ludwig Fahrbach (ludwig-fahrbach) · 2023-10-25T12:25:00.704Z · LW(p) · GW(p)

Using the Hare Psychopathy Checklist, SBF seems to be a psychopath. The Checklist consists of 20 items. Each item is scored on a three-point scale, with a rating of 0 if it does not apply at all, 1 if there is a partial match or mixed information, and 2 if there is a reasonably good match.  Here are my ratings. Most of my ratings follow quite directly from Zvi's account. I tried to err on the conservative side. 

Disclaimer: I am a layperson, not a psychiatrist, and have no relevant training in this area. The Wikipedia page warns against laypersons applying the Checklist. 

  • Glibness/superficial charm -- 1 point (2 points?)
  • Grandiose sense of self-worth -- 2 points 
  • Need for stimulation/proneness to boredom -- 2 points
  • Pathological lying -- 2 points
  • Conning/manipulative -- 2 points
  • Lack of remorse or guilt -- 2 points
  • Shallow affect -- 2 points
  • Callous/lack of empathy -- 2 points
  • Parasitic lifestyle -- 1 point (debatable)
  • Poor behavioral controls -- 1 point (debatable)
  • Promiscuous sexual behavior -- 0 points
  • Early behavior problems -- 0 points
  • Lack of realistic long-term goals -- 0 points (because of EA and utilitarianism, but debatable)
  • Impulsivity -- 2 points
  • Irresponsibility -- 2 points
  • Failure to accept responsibility for own actions -- 2 points
  • Many short-term marital relationships -- 0 points
  • Juvenile delinquency -- 0 points
  • Revocation of conditional release -- 0 points
  • Criminal versatility -- 2 points

Source: Wikipedia

 

The sum is 25 points. 25/40 counts as a case of psychopathy in the UK and "sometimes for research purposes", but not in the U.S. For comparison, Jeffrey Dahmer scored 23/40 (Wikipedia), and the average male scores about 4/40 (The Psychopath Whisperer, Kent Kiehl, p. 77; Kiehl is an established scientist in the field and his book is highly recommended). In the end, it is not so important whether SBF should be classified as a psychopath. Rather, what is interesting is that many items on the Checklist match Zvi's account quite closely.
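For readers who want to check the arithmetic, here is a minimal sketch in Python that simply re-adds the per-item ratings listed above (the item names and scores are copied from the list; nothing else is assumed):

```python
# Minimal sketch: re-adding the per-item ratings listed above (each item on a 0-2 scale).
ratings = {
    "Glibness/superficial charm": 1,
    "Grandiose sense of self-worth": 2,
    "Need for stimulation/proneness to boredom": 2,
    "Pathological lying": 2,
    "Conning/manipulative": 2,
    "Lack of remorse or guilt": 2,
    "Shallow affect": 2,
    "Callous/lack of empathy": 2,
    "Parasitic lifestyle": 1,
    "Poor behavioral controls": 1,
    "Promiscuous sexual behavior": 0,
    "Early behavior problems": 0,
    "Lack of realistic long-term goals": 0,
    "Impulsivity": 2,
    "Irresponsibility": 2,
    "Failure to accept responsibility for own actions": 2,
    "Many short-term marital relationships": 0,
    "Juvenile delinquency": 0,
    "Revocation of conditional release": 0,
    "Criminal versatility": 2,
}

total = sum(ratings.values())
print(f"{total}/40")  # prints 25/40, matching the total stated above
```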

comment by johnhalstead · 2023-10-30T14:02:30.563Z · LW(p) · GW(p)

Thanks for taking the time to do this. I'm not really a fan of the way you approach writing up your thoughts here. The post seems high on snark, rhetoric and bare assertion, and low on clarity, reasoning transparency, and quality of reasoning. The piece feels like you are leaning on your reputation to make something like a political speech, which will get you credit among certain groups, rather than a reasoned argument designed to persuade anyone who doesn't already like you. For example, you say:

But at least the crazy kids are trying. At all. They get to be wrong, where most others are not even wrong.

Also, future children in another galaxy? Try our own children, here and now. People get fooled into thinking that ‘long term’ means some distant future. And yes, in some important senses, most of the potential value of humanity lies in its distant future.

But the dangers we aim to prevent, the benefits we hope to accrue? They are not some distant dream of a million years from now. They are for people alive today. You, yes you, and your loved ones and friends and if you have them children, are at risk of dying from AI or from a pandemic. Nor are these risks so improbable that one needs to cite future generations for them to be worthy causes.

I fight the possibility of AI killing everyone, not (only or even primarily) because of a long, long time from now in a galaxy far, far away. I fight so I and everyone else will have grandchildren, and so that those grandchildren will live. Here and now.

As I understand it, this is meant to be a critique of longtermism. The claims you have made here just seem to be asserting that longtermism is not true, without argument, which is what pretty much every journalist does now that every journalist doesn't like EA. But EA philosophers are field leaders in population ethics, and have published papers in leading journals on it, and you can't just dismiss it by saying things which look on the face of it to be inconsistent such as "Try our own children, here and now. People get fooled into thinking that ‘long term’ means some distant future. And yes, in some important senses, most of the potential value of humanity lies in its distant future. But the dangers we aim to prevent, the benefits we hope to accrue? They are not some distant dream of a million years from now." In what sense is the potential value of humanity in the future if the benefits are not in the future?

Similarly, on whether personhood intrinsically matters, you say:

"This attitude drives me bonkers. Yes, suffering is bad. It is the way we indicate to ourselves that things are bad. It sucks. Preventing it is a good idea. But when you think that suffering is the thing that matters, you confuse the map for the territory, the measure for the man, the math with reality. Combine that with all the other EA beliefs, set this as a maximalist goal, and you get… well, among other things, you get FTX. Also you get people worried about wild animal or electron suffering and who need hacks put in to not actively want to wipe out humanity.

If you do not love life, and you do not love people, or anything or anyone within the world, and instead wholly rely on a proxy metric? If you do not have Something to Protect? Oh no.

Again, you are just asserting here without argument and with lots of rhetoric that you believe personhood matters independently of subjective experience. I don't see why you would think this would convince anyone. A lot of EAs I know have actually read the philosophical literature on personal identity, and your claims seem highly non-serious by comparison. 

On Alameda, you say

"It was the flood of effective altruists out of the firm that was worrisome. It was the effective altruists who were the greedy ones, who were convinced they could make more money outside the firm, and that they had a moral obligation to do so. You know, for the common good. They proved themselves neither honest nor loyal. Neither was ‘part of their utility function.’

I agree that setting up Alameda was a very bad idea for lots of reasons. However, you claim here that the people who joined Alameda aside from Sam weren't actually doing it for the common good. From my knowledge, this is false - they did honestly believe they were doing it for the common good and were going to give the money away. Do you have evidence that they didn't actually donate the money they made? 

When you say they proved that they were not loyal, are you saying they should have been loyal to SBF, or that they should have been loyal to Jane Street? Both claims seem false. Even if they should have stayed at Jane Street, loyalty is not a good reason to do so, and they shouldn't have been loyal to SBF because he was a psychopath. 

These general points aside, I agree that management of bad actors and emphasis on rule following are extremely important and should receive much more emphasis than they do. 

Replies from: Zvi
comment by Zvi · 2023-10-31T18:34:17.679Z · LW(p) · GW(p)

This seems to be misunderstanding several points I was attempting to make so I'll clear those up here. Apologies if I gave the wrong idea.

  1. On longtermism I was responding to Lewis' critique, saying that you do not need full longtermism to care about the issues longtermists care about, that there were also (medium term?) highly valuable issues at stake that would already be sufficient to care about such matters. It was not intended as an assertion that longtermism is false, nor do I believe that. 
  2. I am asserting there that I believe that things other than subjective experience of pleasure/suffering matter, and that I think the opposite position is nuts both philosophically and in terms of it causing terrible outcomes. I don't think this requires belief in personhood mattering per se, although I would indeed say that it matters. And when people say 'I have read the philosophical literature on this and that's why nothing you think matters matters, why haven't you done your homework'... well, honestly, that's basically why I almost never talk philosophy online and most other people don't either, and I think that sucks a lot. But if you want to know what's behind that on a philosophical level? I mean, I've written quite a lot of words both here and in various places. But I agree that this was not intended to turn someone who had read 10 philosophy books and bought Benthamite Utilitarianism into switching.
  3. On Alameda, I was saying this from the perspective of Jane Street Capital. Sorry if that was unclear. As in, Lewis said JS looked at EAs suspiciously for not being greedy. Whereas I said no, that's false, EAs got looked at suspiciously because they left in the way they did. Nor is this claiming they were not doing it for the common good - it is saying that from the perspective of JSC, them saying it was 'for the common good' doesn't change anything, even if true. My guess, as is implied elsewhere, is that the EAs did believe this consciously. As for whether they 'should have been' loyal to JSC, my answer is they shouldn't have stayed out of loyalty, but they should have left in a more cooperative fashion.
Replies from: johnhalstead
comment by johnhalstead · 2023-10-31T21:04:39.419Z · LW(p) · GW(p)

Hello, 

  1. It seems I misunderstood sorry
  2. My point in raising the philosophy literature was that you seemed to be professing anger at the idea that subjective experience is all that matters morally - it drives you 'bonkers' and is 'nuts'. I was saying that people with a lot more expertise than you in philosophy think it is plausible and you haven't presented any arguments, so I don't think it should drive you bonkers. I think the default position would be to update a bit towards that view and to not think it is bonkers. 
    1. Similarly, if I wrote a piece saying that a particular reasonably popular view on quantitative trading was bonkers, I might reasonably get called out by someone (like you) who has more expertise than me in it. I also don't think me saying "this is why I never have online discussions with people who have expertise in quantitative trading" should reassure you. Your response confirms my sense that much of the piece was not meant to persuade but more to use your own reputation to decrease the status of various opinions among your readers without offering any arguments.
    2. In the passage I quote, you also make a couple of inconsistent statements. You say "Yes, suffering is bad. It is the way we indicate to ourselves that things are bad. It sucks. Preventing it is a good idea". Then you say "Also you get people worried about wild animal or electron suffering". If you think suffering is bad, then why would you not think wild animals can suffer? Or if you do think that they can suffer, then aren't you committed to the view that preventing wild animal suffering is good? Same for digital suffering. I think mistakes like this should lead to some humility about labelling reasonably popular philosophical views as nuts without argument. 
    3. I also don't understand why you think the view that subjective wellbeing is all that matters implies you get FTX. FTX seems to have stemmed from naive consequentialism, which is distinct from the view that subjective experience is all that matters. Indeed, FTX was ex ante and ex post very very bad from the point of view of a worldview which says that subjective experience is all that matters (hedonistic total utilitarianism). This dynamic has happened repeatedly in various different places since the fall of FTX. 'Here is some idiosyncratic feature f of FTX, FTX is bad, therefore f is bad' is not a good argument but keeps coming up, cf. arguments that FTX wasn't focused on progress, wasn't democratic, they believed in the total view, they think preventing existential risks is a good idea, etc. Again, I also don't see an argument for why you think this, you just assert it.
  3. Can you say more about how they could have left in a more cooperative fashion? My default take would be that, as long as you give notice, from the point of view of common sense morality, there is nothing wrong with leaving a company. In the case of Jane Street, I think from the point of view of common sense morality, since the social benefits of the company are small, most people would think that a dozen people leaving at once just doesn't matter at all. It might be different if it was a refugee charity or something. Is there some detail about how they left that I am missing?
    1. Why do you think they weren't honest?
    2. This passage: "It was the effective altruists who were the greedy ones, who were convinced they could make more money outside the firm, and that they had a moral obligation to do so. You know, for the common good" strongly reads as saying the EAs who left Jane Street for Alameda did so out of greed. If you didn't intend this, I would suggest editing the main text to clarify that when you said "it was the effective altruists who were the greedy ones" you meant they were actually doing it for the common good.  A lot of the people who read this forum will know a lot of the people who left to join Alameda, so if you are unintentionally calling them greedy, dishonest and disloyal, that could be quite bad. Unless you intend to do that, in which case fair enough.
comment by habryka (habryka4) · 2023-10-29T18:12:28.254Z · LW(p) · GW(p)

Promoted to curated: I am generally quite hesitant to curate posts like this, since I like the focus of LessWrong to be on content that is generally timeless and is about deeper patterns in the world and the art of reasoning. However, I do think that, despite this post being in some sense about recent events, it definitely has a lot of content that I think is broadly relevant to a lot of people, and also, as recent events go, the whole FTX and SBF thing is quite high up there in terms of probably still being relevant in 10 years or so.

This post feels like the best concrete story that weaves the different threads that made up FTX into something that makes coherent sense to me. It situates both the recklessness and the whole EA thing into a shared story, as well as all the conflict-of-interest and dating stuff that seems like it played a large role. And for all of those I feel like I got closer to taking away realistic lessons from FTX, both via concrete suggestions in the post and via combining the broader frame proposed in this post with ideas in other posts.

Thanks a lot for writing this Zvi!

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-24T23:09:14.907Z · LW(p) · GW(p)

it died away quickly,


Citation needed? In my experience it seems like the change has been permanent.

Replies from: Zvi
comment by Zvi · 2023-10-25T12:46:46.029Z · LW(p) · GW(p)

I will report back after EAG Boston if it updates me, but this has not been my experience at all, and I am curious what persistent changes you believe I should have noticed, other than adapting to the new funding situation.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T13:59:54.471Z · LW(p) · GW(p)

Well, we probably aren't talking to the same people. But (a) the people I know haven't regressed to thinking SBF was good actually. They still think that was a huge disaster, we were wrong to trust him and work for him, we were gullible, too naively consequentialist, etc. And (b) they still seem to have made a generic update towards common-sense morality and a bigger generic update towards integrity/honesty being important. As a result, I claim (c) that if something like SBF started to happen again, many people would, like antibodies, speak up and crush it before it got nearly so big. In fact, now that I mention it, I think there are even two or three examples of this happening already (immune system reactions).

To be clear I'm not confident in any of this and I'm still worried, especially about the naive consequentialism.

My original comment was "citation needed"; could you at least say more about why you think the change died away quickly? Maybe give some examples of bad behavior (possibly anonymized if you like) that happened pre-SBF, stopped happening after SBF, and then started up again?

Replies from: habryka4, Zvi
comment by habryka (habryka4) · 2023-10-25T16:48:08.330Z · LW(p) · GW(p)

a bigger generic update towards integrity/honesty being important.

This seems backwards to me. For example, Open Phil as a grantmaker substantially increased the degree to which they are concerned about PR and how much they should overall obfuscate or distort, with the reasoning that FTX’s collapse substantially increased the risk that various people would take opportunities to attack EA-affiliated things, and also clearly demonstrated that PR risks are real and have really bad consequences.

In general I've mostly seen people update that EA should now try harder to not look bad and to be more concerned about our reputation, in a way that I think quite straightforwardly trades off against honesty and integrity.

To be clear, this is not universal, and some people I know have updated in ways that put more emphasis on integrity, but I think most of the update is backwards.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T18:29:53.044Z · LW(p) · GW(p)

OK, interesting, thanks. I don't have much of an opinion about this myself but I agree that insofar as the update is mostly towards PRishness and not genuine integrity, that's bad. I'd be curious to hear more about it if you want to talk about it.

Replies from: habryka4
comment by habryka (habryka4) · 2023-10-25T18:37:39.469Z · LW(p) · GW(p)

Will send you a link to some drafts I've been writing. Hopefully I'll publish some things eventually.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-29T18:40:00.064Z · LW(p) · GW(p)

Lmk if I can help with that, via dialogues, co-writing, or something else

comment by Zvi · 2023-10-25T19:31:51.348Z · LW(p) · GW(p)

What I meant was that I saw talk of a need for systemic/philosophical change and a need to update; that talk died down, and what I see now does not seem so different from what I saw then. As Ben points out, there has been little turnover. I don't see a difference in the epistemics of discussions. I don't see examples of decisions being made using better theories. And so on.

Concretely, recently: Reaction to Elizabeth's post seemed what I would have expected in 2021, from both LW and EA Forum. The whole Nonlinear thing was only exposed after Ben Pace put infinite hours into it; otherwise they were plausibly in the process of rooting a lot of EA. Etc. My attempts to get various ideas across don't feel like they're getting different reactions from EAs than I would have expected in 2021.

Yes, people have realized SBF the particular person was bad, but they have not done much to avoid making more SBFs that I can see? Or to guard against such people if they don't do exactly the same thing next time?

The situation with common sense morality and honesty seems not to have changed much from where I sit, and note e.g. that Oliver/Ben, who interact with them more, seem to basically despair on this front.

Replies from: daniel-kokotajlo, Yarrow Bouchard
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T21:27:45.908Z · LW(p) · GW(p)

I'd be interested in having a call with you + Ben Pace + maybe Habryka or whoever else you like, to discuss in a higher-bandwidth way, if you like.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-10-25T22:24:14.888Z · LW(p) · GW(p)

Yeah actually I would be interested in having a dialogue on this, with yourself and any of the other folks you mention.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-31T13:42:28.343Z · LW(p) · GW(p)

Sounds good! Sign me up! :)

comment by [deactivated] (Yarrow Bouchard) · 2023-11-05T08:32:12.000Z · LW(p) · GW(p)

Reaction to Elizabeth's post seemed what I would have expected in 2021, from both LW and EA Forum. ... My attempts to get various ideas across don't feel like they're getting different reactions from EAs than I would have expected in 2021.

Would you mind explaining both of these things? I'm not very plugged in to this sort of thing.

The situation with common sense morality and honesty seems not to have changed much from where I sit, and note e.g. that Oliver/Ben, who interact with them more, seem to basically despair on this front.

Also curious to understand more about what this means.

comment by jacobjacob · 2023-10-25T17:38:37.941Z · LW(p) · GW(p)

that the misalignment issues involved would have almost certainly destroyed us all, or all that we care about.

How?

Replies from: Zvi, michaelwheatley, MichaelStJules
comment by Zvi · 2023-10-25T19:32:38.504Z · LW(p) · GW(p)

I have an answer but I think it would be better to see how others answer this, at least first?

comment by michaelwheatley · 2023-10-31T04:24:05.429Z · LW(p) · GW(p)

He explained it in the Tyler Cowen interview. After taking over the world, Sam would do exactly as promised: continue going double-or-nothing until it came up nothing.

comment by MichaelStJules · 2023-10-26T08:20:16.961Z · LW(p) · GW(p)

Maximizing just for expected total pleasure, as a risk neutral classical utilitarian? Maybe being okay with killing everyone or letting everyone die (from AGI, say), as long as the expected payoff in total pleasure is high enough?

I don't really see a very plausible path for SBF to have ended up with enough power to do this, though. Money only buys you so much, against the US government and military, unless you can take them over. And I doubt SBF would destroy us with AGI if others weren't already going to.

comment by Lukas_Gloor · 2023-10-25T14:54:06.376Z · LW(p) · GW(p)

Great review and summary! 

I followed the aftermath of FTX and the trial quite closely and I agree with your takes. 

Also +1 to mentioning the suspiciousness around Alameda's dealings with Tether. It's weird that this doesn't get talked about much so far.

On the parts of your post that contain criticism of EA: 

We are taking many of the brightest young people. We are telling them to orient themselves as utility maximizers with scope sensitivity, willing to deploy instrumental convergence. Taught by modern overprotective society to look for rules they can follow so that they can be blameless good people, they are offered a set of rules that tells them to plan their whole lives around sacrifices on an altar, with no limit to the demand for such sacrifices. And then, in addition to telling them to in turn recruit more people to and raise more money for the cause, we point them into the places they can earn the best ‘career capital’ or money or ‘do the most good,’ which more often than not have structures that systematically destroy these people’s souls.
SBF was a special case. He, among other things and in his own words, did not have a soul to begin with. But various versions of this sort of thing are going to keep happening, if we do not learn to ground ourselves in real (virtue?!) ethics, in love of the world and its people.

[...]

Was there a reckoning, a post-mortem, an update, for those who need one? Somewhat. Not anything like enough. There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism. There were general recriminations. There were lots of explicit statements that no, of course we did not mean that and of course we do not endorse any of that, no one should be doing any of that. And yes, I think everyone means it. But it’s based on, essentially, unprincipled hacks on top of the system, rather than fixing the root problem, and the smartest kids in the world are going to keep noticing this. We need to instead dig into the root causes, to design systems and find ways of being that do not need such hacks, while still preserving what makes such real efforts to seek truth and change the world for the better special in the first place.

Interesting take! I'm curious to follow the discussion around this that your post inspired. 

I wish someone who is much better than me at writing things up in an intelligible and convincing fashion would make a post with some of the points I made here [EA · GW]. In particular, I would like to see more EAs acknowledge that longtermism isn't true in any direct sense, but rather, that it's indirectly about the preferences of us as altruists (see the section, "Caring about the future: a flowchart"). Relatedly, EAs would probably be less fanatical about their particular brand of maximizing morality if they agreed that "What's the right maximizing morality?" has several defensible answers, so those of us who make maximizing morality a part of our life goals shouldn't feel like we have moral realism on our side when we consider overruling other people's life goals. Respecting other people's life goals, even if they don't agree with your maximizing morality, is an ethical principle that's at least as compelling/justified, from a universalizing, altruistic stance, as any particular brand of maximizing consequentialism.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T02:25:37.157Z · LW(p) · GW(p)

Since everything these days is about AI, consider SBF as a misaligned AGI (or NGI?).


Cute analogy. I love it.

comment by denyeverywhere (daniel-radetsky) · 2023-11-06T09:52:41.984Z · LW(p) · GW(p)

We are taking many of the brightest young people. We are telling them to orient themselves as utility maximizers with scope sensitivity, willing to deploy instrumental convergence. Taught by modern overprotective society to look for rules they can follow so that they can be blameless good people, they are offered a set of rules that tells them to plan their whole lives around sacrifices on an altar, with no limit to the demand for such sacrifices. And then, in addition to telling them to in turn recruit more people to and raise more money for the cause, we point them into the places they can earn the best ‘career capital’ or money or ‘do the most good,’ which more often than not have structures that systematically destroy these people’s souls.

See, I don't think that's the problem. Or at least not the only one. And maybe not the important one. I think the issue was more of a failure-to-be-suspicious-of-too-good-to-be-true type of error, which seems common in these kinds of cases.

When Madoff blew up, my dad told me that what was going on was that lots of the people investing with Madoff knew the returns he was promising weren't possible legitimately. So they assumed he must be screwing somebody, and this must be somebody else other than them. Unfortunately it turned out he was also screwing them. Whoops!

When Sam came along, people could have looked and said this seems like an unreasonably positive windfall. Therefore, I should be suspicious. The more unreasonably generous his donations, the more I should want to know more, to double-check everything. But they didn't. Plenty of other people have made this mistake before, so I'm not being too critical here, but that's what it takes. You have to think: this man wouldn't offer me free candy just to get in his unmarked van, that doesn't make sense. I wouldn't give anyone candy for that. What's going on here?

I've seen several reports of people saying that they in fact believed at the time that Sam must have been doing something slightly sketchy, and that maybe that was not such a good thing, but ultimately concluded that it wasn't worth worrying about. The lesson we should have taken from Madoff is that it's always worth worrying about. It could always be worse than your initial guess. People must not have seriously asked themselves "What's the worst thing that could happen?" because the obvious answer would be "He's another Madoff."

Personally, if you had asked me what was going to happen with FTX in early 2022, I'd have said the same thing as Madoff. But I'm super-suspicious of everything to do with crypto so maybe this doesn't count as extraordinary prescience. I'd probably have said the same thing in early 2021 too.

comment by [deactivated] (Yarrow Bouchard) · 2023-11-05T08:05:49.404Z · LW(p) · GW(p)

This post was enjoyable as heck to read. Thanks for taking the time to write it.


I guess I'm of two minds about the effective altruism of it all. 

On one hand: It kinda just seems like a bunch of self-identified effective altruists, who were well-meaning but perhaps naive, got blinded by money and suckered into servitude by a smart and charismatic leader who was successful at scamming a lot of people. Maybe there isn't a big lesson about EA philosophy or the EA subculture. Maybe this is just like any other cult leader or con artist or corrupt CEO manipulating a lot of smart, sane, good-hearted people.

On the other hand: Maybe there's something a bit cult-y about EA subculture and something about EA philosophy's rejection of common sense and folk morality that made people associated with effective altruism extra susceptible to the Sam Bankman-Fried mind virus. Maybe people in the EA movement need more common sense and more folk morality. Maybe EA people also need more intellectual humility and more healthy skepticism of EA, such that they are more willing to balance EA philosophy with common sense and balance utilitarian ethics with folk morality. 

I'm empathetic to the people who got taken in by SBF and I don't judge them harshly. I've been scammed before. I've been overzealous about non-common sense ideas before. A guy who seems really good at making money trading crypto and wants to donate it all to buy anti-malarial bed nets? On the face of it, what's wrong with that? 

Maybe the more interesting question is: why didn't the exodus of the initial management team at Alameda result in SBF's reputation getting destroyed in the EA community? Did the people who left not speak up enough to make that happen? Were they silenced by fear of reprisal? Were they too burnt out and defeated to do much after leaving? Were they embarrassed that Sam manipulated them? 

Or did others, especially leaders in the EA community, not listen to them? Did they get blinded by dollar signs in their eyes? Did they find it easier to shoo away inconvenient allegations?

comment by jan betley (jan-betley) · 2023-10-30T07:49:48.832Z · LW(p) · GW(p)

A shame Sam didn't read this [LW · GW]:

But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe.

comment by followthesilence · 2023-10-30T04:15:27.079Z · LW(p) · GW(p)

Great review. Brilliant excerpts, excellent analysis. My only quibble would be:

What Michael Lewis is not is for sale.

What leads you to this conclusion? I don't know much about Lewis, but based on his prior books I would've said one thing he is not is stupid, or bad at understanding people. I feel you have to be inconceivably ignorant to stand by SBF and suggest he probably didn't intentionally commit fraud, particularly in light of all the stories presented in the book. 

Bizarre statements like "There’s still an SBF-shaped hole in the world that needs filling" have me speechless with no good explanation other than Lewis was on the take.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-10-30T04:43:04.198Z · LW(p) · GW(p)

If you think it's quite likely, IMO it would be worth setting up a manifold market on whether evidence will come out through Bankman-Fried's prosecution showing this to be the case any time over the next year. You could get a lot of mana if this comes out!

(While we're discussing it, I'll say that I currently assign <10% to this probability; Lewis doesn't seem sleazy enough to publish such a positive book while Bankman-Fried is on trial for fraud and not mention that he's getting money from the guy.)

Replies from: followthesilence
comment by followthesilence · 2023-10-30T04:58:08.380Z · LW(p) · GW(p)

No idea how likely it is. I'm not going to create a market but welcome someone else doing so. I agree the likelihood "evidence will come out [...] over the next year" is <10%. That is not the same as the likelihood it happened, which I'd put at >10%. More than anything, I just cannot reconcile my former conception of Michael Lewis with his current form as an SBF shill in the face of a mountain of evidence that SBF committed fraud. I asked the question because Zvi seems smarter than me, especially on this issue, and I'm seeking reasons to believe Lewis is just confused or wildly mistaken rather than succumbing to ulterior motives.

comment by Vaniver · 2023-10-25T19:41:40.133Z · LW(p) · GW(p)

But even after that, Caroline didn’t turn on Sam yet.

Constance, presumably?

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-25T05:20:24.176Z · LW(p) · GW(p)

My expectation is that in the unlikely scenario that this attempted takeoff had fully succeeded, and SBF had gained sufficient affordances and capabilities thereby, that the misalignment issues involved would have almost certainly destroyed us all, or all that we care about. Luckily, that did not come to pass.


Yep. (Well, not "almost certainly" but I'd say "probably")

comment by Lukas Finnveden (Lanrian) · 2023-10-25T05:05:08.976Z · LW(p) · GW(p)

But even after that, Caroline didn’t turn on Sam yet.

This should say Constance.

Replies from: Zvi
comment by Zvi · 2023-10-25T12:44:57.451Z · LW(p) · GW(p)

Yep. I'm making small fixes to the Substack version as I go, but there have been like 20 tiny ones so I'm waiting to update WP/LW all at once.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-10-29T18:42:09.868Z · LW(p) · GW(p)

If you're using auto-crosspost, there's now an "update from rss" button. You have to trigger it manually but it's two clicks instead of having to port it all over

comment by lc · 2023-10-25T05:03:07.297Z · LW(p) · GW(p)

Just because he didn’t feel the emotion didn’t mean he couldn’t convey it. He’d started with his facial expressions. He practiced forcing his mouth and eyes to move in ways they didn’t naturally.

Unsure if it's for the same reasons, but Adolf Hitler also did this; he would deliberately practice making faces in front of a mirror to help finetune his speeches and interpersonal interactions.

Replies from: ryan_b
comment by ryan_b · 2023-10-25T22:30:28.345Z · LW(p) · GW(p)

While unusual for interpersonal stuff, practicing your speeches in every detail down to facial expressions and gestures is simply correct if you want to be good at making them.

comment by Robbo · 2023-10-24T21:49:34.578Z · LW(p) · GW(p)

There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism.

Can you explain what you mean by the second half of that sentence?

Replies from: Zvi
comment by Zvi · 2023-10-24T22:09:15.444Z · LW(p) · GW(p)

Vegans believe that they should follow a deontological rule, to never eat meat, rather than weighing the costs and benefits of individual food choices. They don't consume meat even when it is expensive (in various senses) to not do so. And they advocate for others to commit to doing likewise.

Whereas EA thinking in other areas instead says to do the math.

Replies from: Natália Mendonça, paul-tiplady, MichaelStJules
comment by Natália (Natália Mendonça) · 2023-10-31T02:12:34.427Z · LW(p) · GW(p)

Minor nit: following strict rules without weighing the costs and benefits each time could be motivated by rule utilitarianism, not only by deontology.  It could also be motivated by act utilitarianism, if you deem that weighing the costs and benefits every single time would not be worth it. (Though I don't think EA veganism is often motivated by act utilitarianism).

comment by Paul Tiplady (paul-tiplady) · 2023-10-28T19:39:24.302Z · LW(p) · GW(p)

I’m a vegetarian and I consider my policy of not frequently recalculating the cost/benefit of eating meat to be an application of a rule in two-level utilitarianism, not a deontological rule. (I do pressure test the calculation periodically.)

Also I will note you are making some pretty strong generalizations here. I know vegans who cheat, vegans who are flexible, vegans who are strict.

comment by MichaelStJules · 2023-10-26T10:02:14.793Z · LW(p) · GW(p)

I think only a small share of EAs would do the math before deciding whether or not to commit fraud or murder, or otherwise cause/risk involuntary harm to other people; most would instead just rule it out immediately or never consider such options in the first place. Maybe that's a low bar, because the math is too obvious to do?

In what other important ways would you want (or would it make sense for) EAs to be more deontological? More commitment to transparency and against PR?

Replies from: Benito
comment by Ben Pace (Benito) · 2023-10-26T22:07:12.970Z · LW(p) · GW(p)

I think only a small share of EAs would do the math before deciding whether or not to commit fraud or murder, or otherwise cause/risk involuntary harm to other people; most would instead just rule it out immediately or never consider such options in the first place.

Ah come on. I am tempted to say "You're not a true Effective Altruist unless you've at least done the math." Rigorously questioning the foundations of strong moral rules like this one is surely a central part of being an ethical person. 

Countries do murder all of the time in wars and by police. Should you be pushing really hard to get to an equilibrium where that isn't okay? There are boundaries here and you actually have to figure out which ones are right and which ones are wrong. What are the results of such policies? Do they net improve or hurt people? These are important questions to ask and do factor into my decisions, at least.

Many countries have very different laws around what level of violence you're allowed to use to protect yourself from someone entering your property (like a robber). You can't just defer to "no murder"; I do think you have to figure out for yourself what's right and wrong in this scenario. And there's math involved, as well as deontology.

Replies from: MichaelStJules
comment by MichaelStJules · 2023-10-27T01:46:50.451Z · LW(p) · GW(p)

I think that's true, but also pretty much the same as what many or most veg or reducetarian EAs did when they decided what diet to follow (and other non-food animal products to avoid), including what exceptions to allow. If the consideration of why not to murder counts as involving math, so does veganism for many or most EAs, contrary to Zvi's claim. Maybe some considered too few options or possible exceptions ahead of time, but that doesn't mean they didn't do any math.

This is also basically how I imagine rule consequentialism to work: you decide what rules to follow ahead of time, including prespecified exceptions, based on math. And then you follow the rules. You don't redo the math for each somewhat unique decision you might face, except possibly very big infrequent decisions, like your career or big donations. You don't change your rule or make a new exception right in the situation where the rule would apply, e.g. a vegan at a restaurant, someone's house or a grocery store. If you change or break your rules too easily, you undermine your own ability to follow rules you set for yourself.

But also, EA is compatible with the impermissibility of instrumental harm regardless of how the math turns out (although I have almost no sympathy for absolutist deontological views). AFAIK, deontologists, including absolutist deontologists, can defend killing in self-defense without math and also think it's better to do more good than less, all else equal.

comment by Review Bot · 2024-07-20T20:42:10.439Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by Ben Smith (ben-smith) · 2023-11-11T15:45:18.163Z · LW(p) · GW(p)

If Ray eventually found that the money was "still there", doesn't this make Sam right that "the money was really all there, or close to it" and "if he hadn’t declared bankruptcy it would all have worked out"?

Ray kept searching, Ray kept finding.

That would raise the amount collected to $9.3 billion—even before anyone asked CZ for the $2.275 billion he’d taken out of FTX. Ray was inching toward an answer to the question I’d been asking from the day of the collapse: Where did all that money go? The answer was: nowhere. It was still there.

comment by mdog · 2023-10-31T21:36:44.149Z · LW(p) · GW(p)

I wasn't positive what I'd think of Going Infinite going in -- Lewis is obviously a great writer, but I've disliked more of his books than I've liked. I ended up reading it twice. It was interesting and somewhat fresh.

I think this review takes the approach that could be seen coming from Lewis's surprising take on SBF -- to act like Lewis was even more sympathetic than he was. Though the big stuff is a fair criticism (Lewis thinks SBF is more dumb than conniving), a lot of the reporting on SBF's perspective doesn't seem fair to take as an endorsement of the perspective, and I think that Lewis's take, even when likely wrong, is often way more interesting than pointing out that SBF is indeed a thoughtless asshole, etc.

comment by mdog · 2023-10-31T21:27:45.874Z · LW(p) · GW(p)

That is not what profits mean. Your expenses count. Your payroll counts. This is absurd.

This seems like a normal use of 'profits' for trading profits. It's, as you point out, not the firm's profit.

comment by M Ls (m-ls) · 2023-10-31T09:51:12.054Z · LW(p) · GW(p)

We need to get better at policing narcissism and psychopathy; arguing about anything else is a distraction.

comment by infinitespaces · 2023-10-25T17:22:10.973Z · LW(p) · GW(p)

I’m going to need an entire separate book, which is mostly just John Ray quotes.