Posts

Lighthaven Sequences Reading Group #13 (Tuesday 12/03) 2024-11-27T07:23:29.196Z
What are the good rationality films? 2024-11-20T06:04:56.757Z
Lighthaven Sequences Reading Group #12 (Tuesday 11/26) 2024-11-20T04:44:56.303Z
Lighthaven Sequences Reading Group #11 (Tuesday 11/19) 2024-11-13T05:33:07.928Z
Lighthaven Sequences Reading Group #10 (Tuesday 11/12) 2024-11-06T03:43:11.314Z
Lighthaven Sequences Reading Group #9 (Tuesday 11/05) 2024-10-31T21:34:15.000Z
Lighthaven Sequences Reading Group #8 (Tuesday 10/29) 2024-10-27T23:55:08.351Z
Lighthaven Sequences Reading Group #7 (Tuesday 10/22) 2024-10-16T05:02:18.491Z
Lighthaven Sequences Reading Group #6 (Tuesday 10/15) 2024-10-10T20:34:10.548Z
Lighthaven Sequences Reading Group #5 (Tuesday 10/08) 2024-10-02T02:57:58.908Z
2024 Petrov Day Retrospective 2024-09-28T21:30:14.952Z
Petrov Day Ceremony (TODAY) 2024-09-26T08:34:06.965Z
[Completed] The 2024 Petrov Day Scenario 2024-09-26T08:08:32.495Z
Lighthaven Sequences Reading Group #4 (Tuesday 10/01) 2024-09-25T05:48:00.099Z
Lighthaven Sequences Reading Group #3 (Tuesday 09/24) 2024-09-22T02:24:55.613Z
Lighthaven Sequences Reading Group #2 (Tuesday 09/17) 2024-09-08T21:23:27.490Z
First Lighthaven Sequences Reading Group 2024-08-28T04:56:53.432Z
Thiel on AI & Racing with China 2024-08-20T03:19:18.966Z
Extended Interview with Zhukeepa on Religion 2024-08-18T03:19:05.625Z
Debate: Is it ethical to work at AI capabilities companies? 2024-08-14T00:18:38.846Z
Debate: Get a college degree? 2024-08-12T22:23:34.744Z
LessOnline Festival Updates Thread 2024-04-18T21:55:08.003Z
LessOnline (May 31—June 2, Berkeley, CA) 2024-03-26T02:34:00.000Z
Vote on Anthropic Topics to Discuss 2024-03-06T19:43:47.194Z
Voting Results for the 2022 Review 2024-02-02T20:34:59.768Z
Vote on worthwhile OpenAI topics to discuss 2023-11-21T00:03:03.898Z
Vote on Interesting Disagreements 2023-11-07T21:35:00.270Z
Online Dialogues Party — Sunday 5th November 2023-10-27T02:41:00.506Z
More or Fewer Fights over Principles and Values? 2023-10-15T21:35:31.834Z
Dishonorable Gossip and Going Crazy 2023-10-14T04:00:35.591Z
Announcing Dialogues 2023-10-07T02:57:39.005Z
Closing Notes on Nonlinear Investigation 2023-09-15T22:44:58.488Z
Sharing Information About Nonlinear 2023-09-07T06:51:11.846Z
A report about LessWrong karma volatility from a different universe 2023-04-01T21:48:32.503Z
Shutting Down the Lightcone Offices 2023-03-14T22:47:51.539Z
Open & Welcome Thread — February 2023 2023-02-15T19:58:00.435Z
Rationalist Town Hall: FTX Fallout Edition (RSVP Required) 2022-11-23T01:38:25.516Z
LessWrong Has Agree/Disagree Voting On All New Comment Threads 2022-06-24T00:43:17.136Z
Announcing the LessWrong Curated Podcast 2022-06-22T22:16:58.170Z
Good Heart Week Is Over! 2022-04-08T06:43:46.754Z
Good Heart Week: Extending the Experiment 2022-04-02T07:13:48.353Z
April 2022 Welcome & Open Thread 2022-04-02T03:46:13.743Z
Replacing Karma with Good Heart Tokens (Worth $1!) 2022-04-01T09:31:34.332Z
12 interesting things I learned studying the discovery of nature's laws 2022-02-19T23:39:47.841Z
Ben Pace's Controversial Picks for the 2020 Review 2021-12-27T18:25:30.417Z
Book Launch: The Engines of Cognition 2021-12-21T07:24:45.170Z
An Idea for a More Communal Petrov Day in 2022 2021-10-21T21:51:15.270Z
Facebook is Simulacra Level 3, Andreessen is Level 4 2021-04-28T17:38:03.981Z
Against "Context-Free Integrity" 2021-04-14T08:20:44.368Z
"Taking your environment as object" vs "Being subject to your environment" 2021-04-11T22:47:04.978Z

Comments

Comment by Ben Pace (Benito) on Sapphire Shorts · 2024-12-07T08:35:58.054Z · LW · GW

...did you try to 'induce psychosis' in yourself by taking psychedelics? If so I would also ask about how much you took and if you had any severe or long-lasting consequences.

Comment by Ben Pace (Benito) on Book Review: Going Infinite · 2024-12-07T08:02:38.245Z · LW · GW

+9. This is an at-times hilarious, at-times upsetting story of how a man gained a massive amount of power and built a corrupt empire. It's a psychological study, as well as the tale of a crime committed hand-in-hand with a lot of naive ideologues.

I think it is worthwhile for understanding a lot about how the world currently works, including understanding individuals with great potential for harm, the crooked cryptocurrency industry, and the sorts of nerds in the world who falsely act in the name of good.

I don't believe that all the details here are fully accurate, but enough of them are for this to be a story worth reading.

(It is personally upsetting to me that the person who was ~King over me and everyone I knew professionally and personally turned out to be such a spiritually-hollow crook, and to know how close I am to being in a world where his reign continues.)

Comment by Ben Pace (Benito) on Aiming for Convergence Is Like Discouraging Betting · 2024-12-05T22:27:22.407Z · LW · GW

I think that someone reading this would be challenged to figure out for themselves what assumptions they think are justified in good discourse, and would fix some possible bad advice they took from reading Sabien's post. I give this a +4.

(Below is a not especially focused discussion of some points raised; perhaps after I've done more reviews I can come back and tighten this up.)

Sabien's Fifth guideline is "Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth."

My guess is that the idea that motivates Sabien's Fifth Guideline is something like "Assume by-default that people are contributing to the discourse in order to share true information and strong arguments, rather than posing as doing that while sharing arguments they don't believe or false information in order to win", out of a sense that there is indeed enough basic trust to realize this as an equilibrium, and also a sense that this is one of the ~best equilibriums for public discourse to be in.

One thing this post argues is that a person's motives are of little interest when one can assess their arguments. Argument screens off authority and many other things too. So we don't need to make these assumptions about people's motives. 

There's a sense in which I buy that, and yet also a sense in which the epistemic environment I'm in matters. Consider two possibilities:

  • I'm in an environment of people aspiring to "make true and accurate contributions to the discourse" but who are making many mistakes/failing.
  • I'm in an environment of people who are primarily sharing arguments and evidence filtered to sound convincing for positions that are convenient to them, and who are pretending to be the sort of people described in the first one.

I anticipate very different kinds of discussions, traps, and epistemic defenses I'll want to have in the two environments, and I do want to treat the individuals differently.

I think there is a sense in which I can just focus on local validity and evaluating the strength of arguments, and that this is generally more resilient to whatever the particular motives are of the people in the local environment, but my guess is that I should still relate to people and their arguments differently, and invest in different explanations or different incentives or different kinds of comment thread behavior.

I also think this provides good pushback on some possible behaviors people might take away from Sabien's fifth guideline. (I don't think that this post correctly understands what Sabien is going for, but I think bringing up reasonable hypotheses and showing why they don't make sense is helpful for people's understanding of how to participate well in discourse.)

Simplifying a bit, this is another entry in the long-running discourse on how adversarially one should model individuals in public discourse, and what assumptions to make about other people's motives; I think this provides useful arguments about that topic.

Comment by Ben Pace (Benito) on Basics of Rationalist Discourse · 2024-12-05T21:28:08.233Z · LW · GW

I give this a +9, one of the most useful posts of the year.

I think that a lot of these are pretty non-obvious guidelines that make sense when explained, and I continue to put effort in to practicing them. Separating observations and inferences is pro-social, making falsifiable claims is pro-social, etc.

I like this document both for carefully condensing the core ideas into 10 short guidelines, and also having longer explanations for those who want to engage with them.

I like that it’s phrased as guidelines rather than rules/norms. I do break these from time to time and endorse it.

I don't agree with everything (this is not a blanket endorsement; I have many nuances, different phrasings, and different standards), but I think this is a worthwhile document for people to study, especially those approaching this sort of discourse for the first time, and it's very well-written.

Comment by Ben Pace (Benito) on Elements of Rationalist Discourse · 2024-12-05T21:07:42.174Z · LW · GW

It's a fine post, but I don't love this set of recommendations and justifications, and I feel like rationalist norms & advice should be held to a high standard, so I'm not upvoting it in the review. I'll give some quick pointers to why I don't love it.

  1. Truth-Seeking: Seems too obvious to be useful advice. Also, I disagree with the subpoint about never treating arguments like soldiers: two people inhabiting opposing debate-partner roles is sort of captured by this, and I think that can be a healthy truth-seeking process.
  2. Non-Violence: All the examples of things you're not supposed to do in response to an argument are things you're not supposed to do anyway. It also reads too much as though the only legitimate response to an argument is a counter-argument. Sometimes the correct response to a bad argument is to fire someone or attempt to politically disempower them. As an example, Zvi Mowshowitz presents evidence and argument in Repeal the Jones Act of 1920 that a lot of terrible and disingenuous arguments put forward by unions are causing the destruction of the US shipping industry. The generator of those arguments seems reliably non-truth-tracking, and I would approve of someone repealing the Jones Act without persuading such folks or spending the time to refute each and every argument.
  3. Non-Deception: I'll quote the full description here:
    1. "Never try to steer your conversation partners (or onlookers) toward having falser models. Where possible, avoid saying stuff that you expect to lower the net belief accuracy of the average reader; or failing that, at least flag that you're worried about this happening."
    2. I think that the space of models one walks through is selected for both accuracy and usefulness. Not all models are equally useful. I might steer someone from a perfectly true but vacuous model, to a less perfect but more practical model, thereby net reducing the accuracy of a person's statements and beliefs (most of the time). I prefer something more like a standard of "Intent to Inform".

Various other ones are better; some are vague; many things are presented without justification, and I suspect I might disagree if it were offered. I think Zack M. Davis's critique of 'goodwill' is good.

Comment by Ben Pace (Benito) on "Rationalist Discourse" Is Like "Physicist Motors" · 2024-12-05T20:36:31.686Z · LW · GW

I disagree with the first half of this post, and agree with the second half.

"Physicist Motors" makes sense to me as a topic. If I imagine it as a book, I can contrast it with other books like "Motors for Car Repair Mechanics" and "Motors for Hobbyist Boat Builders" and "Motors for Navy Contract Coordinators". These would focus on other aspects of motors such as giving you advice for materials to use and which vendors to trust or how to evaluate the work of external contractors, and give you more rules of thumb for your use case that don't rely on a great deal of complex mathematical calculations (e.g. "how to roughly know if a motor is strong enough for your boat as a function of the weight and surface area of the boat"). The "Physicist Motors" book would focus on the math of ideal motors and doing experiments to see the basic laws of physics at play.

Similarly, many places want norms of discourse, or have goals for discourse, and a rationalist focus would connect it to principles of truth-seeking more directly (e.g. in contrast with norms of "YouTube Discourse" or "Playful/Friendly Discourse").

So I don't believe that it is a confused thing to do, to outline practical heuristics or norms for rationalist discourse as opposed to other kinds of discourse or other goals one might have with discourse.

In contrast, this critique seems of a valid type:

"A vague spirit of how to reason and argue" seems like an apt description of what "Basics of Rationalist Discourse" and "Elements of Rationalist Discourse" are attempting to codify—but with no explicit instruction on which guidelines arise from deep object-level principles of normative reasoning, and which from mere taste, politeness, or adaptation to local circumstances

Arguing that the principles/heuristics proposed are in conflict with the underlying laws of probability theory and such is a totally valid kind of critique. And I think the critique of the "goodwill" heuristic is pretty good.

My take is that if you positively vote on Bensinger's "Elements of Rationalist Discourse" then it makes sense to also upvote this post in the review as it is a counterpoint that has a good critique, but I wouldn't otherwise, as I disagree with the core analogy.

Comment by Ben Pace (Benito) on OpenAI, DeepMind, Anthropic, etc. should shut down. · 2024-12-05T07:02:42.986Z · LW · GW

Hear, hear!

Comment by Ben Pace (Benito) on OpenAI, DeepMind, Anthropic, etc. should shut down. · 2024-12-05T06:57:31.677Z · LW · GW

At least Anthropic didn't particularly try to be a big commercial company making the public excited about AI. Making the AI race a big public thing was a huge mistake on OpenAI's part, and is evidence that they don't really have any idea what they're doing.

I just want to point out that I don't believe this is the case; I believe that the CEO is attempting to play games with the public narrative that benefit his company financially.

Comment by Ben Pace (Benito) on Going Crazy and Getting Better Again · 2024-12-05T06:54:33.716Z · LW · GW

I... think that reading personal accounts of psychotic people is useful for understanding the range of the human psyche and what insanity looks like? My guess is that on the margin it would be good for most people to have a better understanding of that, and reading this post will help, so I'm giving this a +1 for the LW review.

Comment by Ben Pace (Benito) on Going Crazy and Getting Better Again · 2024-12-05T06:52:39.058Z · LW · GW

Thanks for writing it.

Much of the time I worry that I and everyone around me may be insane in ways we haven't noticed. Reading this, I've thought for the first time that perhaps I and most people I know are doing quite well on the sanity axis.

Comment by Ben Pace (Benito) on Hell is Game Theory Folk Theorems · 2024-12-05T06:33:32.297Z · LW · GW

Fun post, but insofar as it's mostly expository of some basic game theory ideas, I think it doesn't do a good enough job of communicating that the starting assumption is that one is in a contrived (but logically possible) equilibrium. Scott Alexander's example is clearer about this. So I am not giving it a positive vote in the review (though I would for an edited version that fixed this issue).

Comment by Ben Pace (Benito) on AI Safety is Dropping the Ball on Clown Attacks · 2024-12-05T05:38:03.506Z · LW · GW

Did this happen yet? I would even just be into a short version of this (IMO good) post.

Comment by Ben Pace (Benito) on Linkpost: Rat Traps by Sheon Han in Asterisk Mag · 2024-12-04T00:22:01.090Z · LW · GW

Reading this post reminds me of my standard online heuristic: just because someone is spending a lot of effort writing about you, does not mean that it is worth a minute of your time to read it.

(This is of course a subset of the general heuristic that most writing has nothing worth reading in it; but it bears keeping in mind that this doesn't change when the writing is about you.)

Comment by Ben Pace (Benito) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-04T00:02:59.000Z · LW · GW

I have made a Manifold market for predicting how much we will raise! Get your bets in.

Comment by Ben Pace (Benito) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T23:20:49.274Z · LW · GW

I wanted a datapoint for Czynski's hypothesis that LW 2.0 killed the comment sections, so I checked how many comments your blogposts were getting in the first 3 months of 2017 (before LW 2.0 rebooted). There were 13 posts, and the comment counts were 0, 0, 2, 6, 9, 36, 0, 5, 0, 2, 0, 0, 2. (The 36 was on a political post in response to the US election, discussion of which I generally count as neutral or negative on LW, so I'd discount this.)

I'll try the same for Zvi. 13, 8, 3, 1, 3, 18, 2, 19, 2, 2, 2, 5, 3, 7, 7, 12, 4, 2, 61, 31, 79. That's more active (the end was his excellent sequence Against Facebook, and the last one was a call for people to share links to their blogs).

So that's not zero; there was something to kill. How do those numbers compare during LessWrong 2.0? My sense is that there are two Zvi eras: the timeless content (e.g. Mazes, Sabbaths, Simulacra) and the timeful content (e.g. Covid, AI, other news). The latter is a newer, more frequent, less deep writing style, so it's less apples-to-apples; instead let's take the Moral Mazes sequence from 2020 (when LW 2.0 would've had a lot of time to kill Zvi's comments). I'm taking the 17 posts in this main sequence and counting the number of comments on LW and Wordpress.

#     LW   Wordpress
1     16       5
2     40      19
3     29      23
4      8      12
5      7      21
6     56      10
7      6      13
8     12       8
9     18       8
10    21      18
11    26      21
12    42      16
13     6      11
14     9      15
15    14      18
16    11      19
17    28      22
SUM  349     259

This shows the comment section on Wordpress was about as active as in the 3-month period above (259 vs 284 comments) during the two months in which the Mazes sequence was released, and comments were more evenly distributed (median of 16 vs 5). And it shows that the LessWrong comment section more than doubled the amount of discussion of the posts, without reducing the total discussion on Zvi's Wordpress blog.
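
For anyone who wants to check the arithmetic, here's a minimal Python sketch using the per-post counts from the table above and the early-2017 baseline counts from earlier in this comment; the variable names are my own:

```python
from statistics import median

# Comments per post on the 17-post Moral Mazes sequence (from the table above).
lw = [16, 40, 29, 8, 7, 56, 6, 12, 18, 21, 26, 42, 6, 9, 14, 11, 28]
wp = [5, 19, 23, 12, 21, 10, 13, 8, 8, 18, 21, 16, 11, 15, 18, 19, 22]

# Zvi's Wordpress comment counts from the first 3 months of 2017 (baseline).
baseline = [13, 8, 3, 1, 3, 18, 2, 19, 2, 2, 2, 5, 3, 7, 7, 12, 4, 2, 61, 31, 79]

print(sum(lw), sum(wp), sum(baseline))  # -> 349 259 284
print(median(wp), median(baseline))     # -> 16 5
```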

These bits of data aren't consistent with LW killing other blogs. FWIW my alternative hypothesis is that these things are synergistic (e.g. I also believe that the existence of LessWrong and the EA Forum increases discussion on each), and I think that is more consistent with the Zvi commenting numbers.

Comment by Ben Pace (Benito) on Making a conservative case for alignment · 2024-11-29T22:05:30.064Z · LW · GW

I agree that which terms people use vs taboo is a judgment call, I don't mean to imply that others should clearly see these things the same as me.

Comment by Ben Pace (Benito) on Making a conservative case for alignment · 2024-11-29T21:40:20.037Z · LW · GW

As my 2 cents: the phrase 'deadname' to me sounded like it caught on because it was hyperbolic and imputes aggression – similar to how the word 'trauma' caught on (it used to refer primarily to physical damage, as in "blunt-force trauma"), and to how notions spread that "words can be violence" (which seems to me to bend the meaning of words like 'violence' too far, and to be trying to get people on board with a level of censorship that isn't appropriate). I similarly recall seeing various notions on social media that not using the requested pronouns for transgender people constituted killing them, due to the implied background levels of violence towards such people in society.

Overall this leaves me personally choosing not to use the term 'deadname' and I reliably taboo it when I wish to refer to someone using the person's former alternative-gendered name.

Comment by Ben Pace (Benito) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-28T00:36:06.346Z · LW · GW

I've updated future posts to have a start time of 6:30pm, with doors open at 6pm.

Comment by Ben Pace (Benito) on Repeal the Jones Act of 1920 · 2024-11-28T00:34:35.101Z · LW · GW

Well that escalated quickly (at the very end).

Comment by Ben Pace (Benito) on Repeal the Jones Act of 1920 · 2024-11-28T00:22:34.074Z · LW · GW

cabotage

I assumed this was a typo for 'sabotage' the first time I saw it. For those wondering, here's a definition from Google.

restriction of the operation of sea, air, or other transport services within or into a particular country to that country's own transport services.

Comment by Ben Pace (Benito) on Repeal the Jones Act of 1920 · 2024-11-27T23:56:31.036Z · LW · GW

By contrast, a report by the pro-Jones Act American Maritime Partnership claims ‘the Jones Act is responsible for’ 13,000 jobs and adding $3.3 billion to the economy, which means that is currently the value to Hawaii of all shipborne trade with America.

Noob question: is this supposed to be low or high? Or is this just a list of datapoints regardless of how they fall?

Comment by Ben Pace (Benito) on Information vs Assurance · 2024-11-27T20:46:02.595Z · LW · GW

Curated![1] I found this layout of how contracts/agreements get settled on in personal conversation very clarifying.

  1. ^

    "Curated", a term which here means "This just got emailed to 30,000 people, of whom typically half open the email, and it gets shown at the top of the frontpage to anyone who hasn't read it for ~1 week."

Comment by Ben Pace (Benito) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-27T01:53:21.051Z · LW · GW

Oh interesting, thanks for the feedback. I think I illusion-of-transparency'd that people would feel fine about arriving in the 6:15-6:30 window. In my head the group discussions start at about 6:30. I'll make a note to update the description hopefully for next time.

Comment by Ben Pace (Benito) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-27T00:06:55.113Z · LW · GW

Oops! That was a mistake. I guess we're re-examining these ones tonight (they are pretty good ones). I have a spreadsheet for tracking this sort of thing, I will make some adjustment there to avoid this mistake again.

Edit: Oh I think I was on vacation that week, which is why I didn't notice.

Comment by Ben Pace (Benito) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-26T20:37:31.014Z · LW · GW

Three films got 4 upvotes, and my favorite of them is The Big Short, so that's what we're watching tonight!

Comment by Ben Pace (Benito) on Making a conservative case for alignment · 2024-11-25T00:09:48.877Z · LW · GW

I could believe it, but my (weak) guess is that in most settings people care about which pronoun they use far less than they care about people not being confused about who is being referred to.

Comment by Ben Pace (Benito) on Making a conservative case for alignment · 2024-11-25T00:02:39.018Z · LW · GW

My rough take: the rationalist scene in Berkeley used to be very bad at maintaining boundaries. Basically the boundaries were "who gets invited to parties by friends". The one Berkeley community space ("REACH") was basically open-access. In recent years the Lightcone team (of which I am a part) has hosted spaces and events and put in the work to maintain actual boundaries (including getting references on people and checking out suspicions of bad behavior, but mostly just making it normal for people to have events with standards for entry), and this has substantially improved the ability of rationalist spaces to have a culture that is distinct from the local Berkeley culture.

Comment by Ben Pace (Benito) on Making a conservative case for alignment · 2024-11-24T23:32:27.942Z · LW · GW

Is there literally any scene in the world that has openly transgender people in it and does 3, 4, or 5? Like, a space where a transgender person is friendly with the people there and different people in a conversation are reliably using different pronouns to refer to the same person? My sense is that it's actively confusing in a conversation for the participants to not be consistent in the choice of someone's pronouns. 

I guess I've often seen people default to 'they' a lot for people who have preferred pronouns that are he/she, that seems to go by just fine even if some people use he / she for the person, but I can't recall ever seeing a conversation where one person uses 'he' and another person uses 'she' when both are referring to the same person.

Comment by Ben Pace (Benito) on Benito's Shortform Feed · 2024-11-24T01:47:10.007Z · LW · GW

Thanks for the thoughts! I've not thought about this topic that much before, so my comment(s) will be longer as I'm figuring it out for myself, and in the process of generating hypotheses.

I'm hearing you say that while I have drawn some distinctions, overall these groups still have major similarities, so the term accurately tracks reality and is helpful.

On further reflection I'm more sympathetic to this point; but granting it I'm still concerned that the term is net harmful for thinking.

My current sense is that a cult is the name given to a group that has gone off the rails. The group has 

  • some weird beliefs
  • intends to behave in line with those beliefs
  • seems unable to change course
  • the individuals seem unable to change their mind
  • and the behavior seems to outsiders to be extremely harmful.

My concern is that the following two claims are true:

  1. There are groups with seemingly closed epistemologies and whose behavior has a large effect size, in similar ways to groups widely considered to be 'cults', yet the outcomes are overall great and worth supporting.
  2. There are groups with seemingly closed epistemologies and whose behavior has a large effect size, in similar ways to groups widely considered to be 'cults', yet are not called cults because they have widespread political support.

I'll talk through some potential examples.

Startups

Peter Thiel has said that a successful startup feels a bit like a cult. Many startups are led by a charismatic leader who believes in the product, surrounded by people who believe in the leader and the product, while outsiders don't get it at all and think it's a waste of time. The people in the company work extreme hours, regularly hitting sleep deprivation, and sometimes invest their savings into the project. The internal dynamics are complicated and political and sometimes cut-throat. Sometimes this pays off greatly, like with Tesla/SpaceX/Apple. Other times it doesn't, like with WeWork, or FTX, or just most startups where people work really hard and nothing comes of it.

I'd guess there are many people in this world who left a failed startup in a daze, wondering why they dedicated some of the best years of their lives to something and someone that in retrospect clearly wasn't worth it, not entirely dissimilar to someone leaving a more classical cult. However, it seems likely to me that the distribution of startups is well worth it for civilization as a whole (with the exception of suicidal AI companies).

(This is a potential example of number 1 above.)

Religions

Major religions have often done things just as insane and damaging as smaller cults, but aren't called cults. The standard list of things includes oppression of homosexuality and other sexualities, subjugation of women, genital mutilation, blasphemy laws, opposition to contraception in developing countries (exacerbating the spread of HIV/AIDS), death orders, censorship, and more.

It seems plausible to me that someone would do far more harm and become far more closed in their epistemology by joining the Islamic Republic of Iran or the Holy See in the Vatican than by joining Scientology or one of the many other things that get called cults (e.g. a quick googling came up with cryptocurrencies, string theory, Donald Trump, and PETA). Yet it seems to me that these aren't given as examples of cults; only the smaller religions that are easier to oppose and which have little political power get that name. Scientology seems to be the most powerful one where people feel like they can get away with calling it a cult.

(This is a potential example of number 2 above.)

Education

A hypothesis I take seriously is that schooling is a horrible experience for kids, and the systems don't change because children are often not respected as whole people and can be treated as subhuman.

  • Kids are forced to sit still for something like more than 10% of the hours of their childhood (see the quick estimate after this list), and regularly complain about this and seem to me kind of psychologically numbed by it.
  • I seem to recall a study that all homework other than mathematics had zero effect on learning success, and also I think I recall a study from Scandinavia where kids who joined school when they were 7 or 8 quickly caught up to their peers (suggesting the previous years had been ~pointless). I suspect Bryan Caplan's book-length treatment of education will have some reliable info making this point (even though I believe he focuses on higher education).
  • I personally found university a horrible experience. Leaving university I had a strong sense of "I need to get away from this, why on Earth did I do that?" and a sense that everyone there was kind of in on a mass delusion where your status in the academic system was very important and mattered a great deal and you should really care about the system. A few years ago I had a phone call with an old friend from high-school who was still studying in the education system at the age of ~25, and I encouraged them to get out of it and grow up into a whole person.
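
As a rough sanity check on that 10% figure, here's a back-of-the-envelope sketch; the inputs (roughly 6.5 hours of school a day, roughly 180 school days a year) are my own assumptions, not figures from a study:

```python
# Fermi check: fraction of all childhood hours spent sitting in school (assumed inputs).
school_hours_per_year = 6.5 * 180   # ~1,170 hours per year
total_hours_per_year = 24 * 365     # 8,760 hours per year
print(school_hours_per_year / total_hours_per_year)  # ~0.13, i.e. ~13%
```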

There's not a charismatic leader here, but I believe there's some mass delusion and very harmful outcomes. I don't think the education system should be destroyed, but I think it probably causes more harm than many things more typically understood to be cults (as most groups with dedicated followings and charismatic leaders have very little effect size either way), and my sense is that many people involved are extremely resistant to the idea that they are not doing what's best for the children, or that they are doing some bad things.

(This is a potential example of both numbers 1 and 2 above.)

———

To repeat: my concern is that what the groups called cults have in common is more like "which groups with closed epistemologies and unusual behavior is it easy to coordinate on destroying" rather than "which groups have closed epistemologies and behavior with terrible effects".

If so, while I acknowledge that many of the groups that are widely described as "cults" probably have closed epistemologies and cause a lot of damage, I am concerned that whether a group is called a cult is primarily a political question about whether, in a given case, you can get backing for destroying it.

Comment by Ben Pace (Benito) on What are the good rationality films? · 2024-11-22T19:05:43.772Z · LW · GW

...when I saw the notification that you'd left an answer, I really thought you were going to say "Fight Club".

Comment by Ben Pace (Benito) on Benito's Shortform Feed · 2024-11-22T08:31:29.750Z · LW · GW

I was chatting with someone, and they said that a particular group of people seemed increasingly like a cult. I thought that was an unhelpful framing, and here's the rough argument I wrote for why:

  1. There are lots of group dynamics that lead a group of people to go insane and do unethical things.
  2. The dynamics around Bankman-Fried involved a lot of naivety when interfacing with a sociopath who was scamming people for billions of dollars on a massive scale.
  3. The dynamics around Leverage Research involved lots of people with extremely little savings and income in a group house trying to do 'science' to claims of paranormal phenomena.
  4. The dynamics around Jonestown involved total isolation from family, public humiliation and beatings for dissent, and a leader claiming a personal connection to the divine.
  5. These have all produced some amounts of insane and unethical behavior, to different extents, for quite different reasons.
  6. They all deserve to be opposed to some extent. And it is pro-social to share information about their insanity and bad behavior.
  7. Calling them 'cults' communicates that these are groups that have gone insane and done terrible things, but it also communicates that these groups are all the same, when in fact there aren't always public beatings or paranormal phenomena or billions of dollars, and the dynamics are very different.
  8. Conflating them confuses outsiders, who then have a harder time understanding whether the group is actually insane and what the dynamics are.

Comment by Ben Pace (Benito) on Which things were you surprised to learn are not metaphors? · 2024-11-21T19:12:09.531Z · LW · GW

I also was kind of surprised when it turned out 'gut feeling' actually meant a feeling in your belly-area.

Added: I wonder if the notion of 'having a hunch' comes from something that causes you to hunch over?

Comment by Ben Pace (Benito) on What are the good rationality films? · 2024-11-21T17:20:43.303Z · LW · GW

There's also a scene where one of the older traders makes a Fermi estimate but doesn't round any numbers to their order of magnitude. That gave me the sense that they're earnestly trying to play autistic nerds but don't quite know autistic nerd culture well enough.

Comment by Ben Pace (Benito) on What are the good rationality films? · 2024-11-20T22:21:09.927Z · LW · GW

Please can you move the epistemic status and warning to the top? I was excited when I first skimmed this detailed comment, but then I was disappointed :/ (Edit: Thank you!)

Comment by Ben Pace (Benito) on What are the good rationality films? · 2024-11-20T18:34:13.819Z · LW · GW

Absolutely, for years YouTube has offered me back to back clips of both, so I've watched parts of it many times (and the whole thing through once).

Comment by Ben Pace (Benito) on What are the good rationality films? · 2024-11-20T06:32:13.984Z · LW · GW

The Big Short

Rationality Tie-in: 

This is a film about the 2008 financial market crash, and tells the stories of the three groups who noticed it would happen, believed it would happen, and successfully bet on their beliefs. It shows people going through the work of noticing an inconvenient hypothesis, being in an environment where people encouraged them to look away from it, empirically gathering data to test the hypothesis, and interacting with large institutions and bureaucracies that are corrupt and covering up this fact.

I think in most films these main characters would be side-characters: contrarian nerds that the protagonist works with to get the job done, after which he takes the glory. In this story the contrarian nerds are the protagonists, and it's very unpleasant work, but ultimately they have accurate beliefs about the world in a highly adversarial environment.

The Big Short is the filmic equivalent of my spirit-animal.

Rationality writings it is connected to: 

I thought this would be hard, but actually it ties into so much.

  • Lonely Dissent: This film portrays the actual pain and suffering of believing what is true when so much of the world is pressuring you not to believe it, and the truth itself is a lot worse than everyone else believes it to be. Seeing this hopefully helps break your trust in the world to be fine (cf. No Safe Defense, Not Even Science, and Beyond the Reach of God).
  • Argument Screens Off Authority: Most of the powerful authorities say everything is fine. In this film some of the characters go and empirically test the hypothesis that they are wrong anyway. (Related: Hug the Query, The Proper Use of Humility).
  • Faster Than Science: You need to rely on processes that are faster than waiting for the evidence to become incontrovertible such that everyone is forced to believe it. Yes, you can find out about catastrophes like the housing market crash or FTX by waiting for it all to collapse, but if you want to not face the terrible downfall then you have to notice before it has caused a catastrophe. (Related: Einstein's Arrogance, Einstein's Speed)
  • Meditations On Moloch by Scott Alexander. These men find themselves in a war with Moloch. They win their fight, but the war is lost. (Related: Immoral Mazes by Zvi Mowshowitz.)

Comment by Ben Pace (Benito) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-20T04:55:47.076Z · LW · GW

"The Big Short" by Adam McKay

Comment by Ben Pace (Benito) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-20T04:53:57.377Z · LW · GW

"Asteroid City" by Wes Anderson

Comment by Ben Pace (Benito) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-20T04:48:40.442Z · LW · GW

"2001: A Space Odyssey" by Stanley Kubrick

Comment by Ben Pace (Benito) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-20T04:46:26.227Z · LW · GW

Poll For Films to Watch

Use this thread to

  1. Thumbs up films you'd like to watch
  2. Thumbs down films you would not watch
  3. Add new films for people to vote on

Comment by Ben Pace (Benito) on Reformative Hypocrisy, and Paying Close Enough Attention to Selectively Reward It. · 2024-11-18T02:31:35.349Z · LW · GW

Hm, but I note others at the time felt it clear that this would exacerbate the competition (1, 2).

Comment by Ben Pace (Benito) on Dragon Agnosticism · 2024-11-18T02:11:22.902Z · LW · GW

Then I shall continue to tend to and grow my garden.

Comment by Ben Pace (Benito) on Dragon Agnosticism · 2024-11-18T02:03:51.992Z · LW · GW

It’s going pretty well for me! Most people I work with or am friends with know that there are multiple topics on which my thoughts are private, and there have been ~no significant social costs to me that I’m aware of.

I would like to be informed of opportunities to support others in this on LessWrong or in the social circles I participate in, to back you up if people are applying pressure on you to express your thoughts on a topic that you don’t want to talk about.

Comment by Ben Pace (Benito) on Lao Mein's Shortform · 2024-11-17T22:53:36.829Z · LW · GW

I know little enough that I don't know whether this statement is true. I would've guessed that in most $10B companies anyone with a title like "CFO" and "CTO" and "COO" is paid primarily in equity, but perhaps this is mostly true of a few companies I've looked into more (like Amazon).

Comment by Ben Pace (Benito) on Announcing turntrout.com, my new digital home · 2024-11-17T22:48:04.500Z · LW · GW

I am sad, but also I think it will probably be good for TurnTrout to have more distance.

Comment by Ben Pace (Benito) on Dragon Agnosticism · 2024-11-17T22:46:19.965Z · LW · GW

Also, a norm of "allowing people to keep their beliefs private on subjects they feel a lot of pressure on" gives space for people to gather information personally without needing to worry about the pressures on them from their society.

Comment by Ben Pace (Benito) on Sabotage Evaluations for Frontier Models · 2024-11-17T21:37:21.388Z · LW · GW

I have found it fruitful to argue this case back and forth with you, thank you for explaining and defending your perspective.

I will restate my overall position, I invite you to do the same, and then I propose that we consider this 40+ comment thread concluded for now.

———

The comment of yours that (to me) started this thread was the following.

If the default path is AI's taking over control from humans, then what is the current plan in leading AI labs? Surely all the work they put in AI safety is done to prevent exactly such scenarios. I would find it quite hard to believe that a large group of people would vigorously do something if they believed that their efforts will go to vain.

I primarily wish to argue that, given the general lack of accountability for developing machine learning systems in worlds where indeed the default outcome is doom, it should not be surprising to find out that there is a large corporation (or multiple) doing so. One should not assume that the incentives are aligned – anyone who is risking omnicide-level outcomes via investing in novel tech development currently faces no criminal penalties, fines, or international sanctions.

Given the current intellectual elite scene where a substantial number of prestigious people care about extinction level outcomes, it is also not surprising that glory-seeking companies have large departments focused on 'ethics' and 'safety' in order to look respectable to such people. Separately from any intrinsic interest, it has been a useful political chip for enticing a great deal of talent from scientific communities and communities interested in ethics to work for them (not dissimilar to how Sam Bankman-Fried managed to cause a lot of card-carrying members of the Effective Altruist philosophy and scene to work very hard to help build his criminal empire by talking a good game about utilitarianism, veganism, and the rest).

Looking at a given company's plan for preventing doom, and noticing it does not check out, should not be followed by an assumption of adequacy and good incentives, along the lines of "surely this company would not exist, nor do work on AI safety, if it did not have a strong plan; I must be mistaken." I believe that there is no good plan and that these companies would exist regardless of whether a good plan existed. Given the lack of accountability, and my belief that alignment is clearly unsolved and that we fundamentally do not know what we're doing, I believe the people involved are getting rich risking all of our lives, and there is (currently) no justice here.

We have agreed on many points, and from the outset I believe you felt my position had some truth to it (e.g. "I do get that point that you are making, but I think this is a little bit unfair to these organizations."). I will leave you to outline whichever overall thoughts and remaining points of agreement or disagreement that you wish.

Comment by Ben Pace (Benito) on Sabotage Evaluations for Frontier Models · 2024-11-17T18:45:30.516Z · LW · GW

If a medicine literally kills everyone who takes it within a week of taking it, sure, it will not get widespread adoption amongst thousands of people.

If the medicine has bad side effects for 1 in 10 people and no upsides, or it only kills people 10 years later, and at the same time there is some great financial deal the ruler can make for himself in accepting this trade with the neighboring nation who is offering the vaccines, then yes I think that could easily be enough pressure for a human being to rationalize that actually the vaccines are good.

The relevant question is rarely 'how high stakes is the decision'. The question is what is in need of rationalizing, how hard is it to support the story, and how strong are the pressures on the person to do that. Typically when the stakes are higher, the pressures on people to rationalize are higher, not lower.

Politicians often enact policies that make the world worse for everyone (including themselves) while thinking they're doing their job well, due to the various pressures and forces on them. The fact that it arguably increases their personal chance of death isn't going to stop them, especially when they can easily rationalize it away because it's abstract. In recent years politicians in many countries enacted terrible policies during a pandemic that extended the length of the pandemic (there were no challenge trials, there was inefficient handing out of vaccines, there were false claims about how long the pandemic would last, there were false claims about mask effectiveness, there were forced lockdown policies that made no sense, etc). These policies hurt people and messed up the lives of ~everyone in the country I live in (the US), which includes the politicians who enacted them and all of their families and friends. Yet this was not remotely sufficient to cause them to snap out of it.

What is needed to rationalize AI development when the default outcome is doom? Here’s a brief attempt:

  • A lot of people who write about AI are focused on current AI capabilities and have a hard time speculating about future AI capabilities. Talk with these people. This helps you keep the downsides in far mode and the upsides in near mode (which helps because current AI capabilities are ~all upside, and pose no existential threat to civilization). The downsides can be pushed further into far mode with phrases like 'sci-fi' and 'unrealistic'.
  • Avoid arguing with people or talking with people who have thought a great deal about this and believe the default outcome is doom and the work should be stopped (e.g. Hinton, Bengio, Russell, Yudkowsky, etc). This would put pressure on you to keep those perspectives alive while making decisions, which would cause you to consider quitting.
  • Instead of acknowledging that we fundamentally don't know what we're doing, focus on the idea that other people are going to plough ahead. Then you can say that you have a better chance than them, rather than admitting that neither of you has a good chance.

This puts you into a mental world where you're basically doing a good thing and you're not personally responsible for much of the extinction-level outcomes.

Intentionally contributing to omnicide is not what I am describing. I am describing a bit of rationalization in order to receive immense glory, and that leading to omnicide-level outcomes 5-15 years down the line. This sort of rationalizing why you should take power and glory is frequent and natural amongst humans.

Comment by Ben Pace (Benito) on Reformative Hypocrisy, and Paying Close Enough Attention to Selectively Reward It. · 2024-11-17T04:15:57.869Z · LW · GW

Thanks for expressing this perspective.

I note Musk was the first one to start a competitor, which seems to me to be very costly.

I think that founding OpenAI could have been right if the non-profit structure was likely to work out. I don't know if that made sense at the time. Altman has overpowered getting fired by the board, removed parts of the board, and rumor has it he is converting the company to a for-profit, which is strong evidence against the non-profit being able to withstand the pressures that were coming. But even without Altman, I suspect it would still have involved billions of dollars of funding, partnerships like the one with Microsoft, and other for-profit pressures, to be the sort of player it is today. So I don't know that Musk's plan was viable at all.

Comment by Ben Pace (Benito) on Lao Mein's Shortform · 2024-11-17T03:29:29.771Z · LW · GW

Maybe there's a hope there, but I'll point out that many of the people needed to run a business (finance, legal, product, etc.) are not idealistic scientists who would be willing to have their equity become worthless.