Posts

ACX September Meetup 2023-08-28T17:40:17.023Z
Calgary ACX Meetup 2023-06-16T04:49:47.554Z
Calgary, Alberta, Canada – ACX Meetups Everywhere Spring 2023 2023-04-10T22:01:36.181Z
Let's make the truth easier to find 2023-03-20T04:28:41.405Z
Calgary, AB – ACX Meetups Everywhere 2022 2022-08-24T22:57:16.140Z
Where do you live? 2021-10-31T17:07:31.294Z
Trust and The Small World Fallacy 2021-10-04T00:38:45.208Z
Calgary, AB – ACX Meetups Everywhere 2021 2021-08-23T08:45:03.493Z
Covid vaccine safety: how correct are these allegations? 2021-06-13T03:08:23.858Z

Comments

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-04T05:57:10.752Z · LW · GW

Speaking for myself: I don't prefer to be alone or tend to hide information about myself. Quite the opposite; I like to have company but rare is the company that likes to have me, and I like sharing, though it's rare that someone cares to hear it. It's true that I "try to be independent" and "form my own opinions", but I think that part of your paragraph is easy to overlook because it doesn't sound like what the word "avoidant" ought to mean. (And my philosophy is that people with good epistemics tend to reach similar conclusions, so our independence doesn't necessarily imply a tendency to end up alone in our own school of thought, let alone prefer it that way.)

Now if I were in Scott's position? I find social media enemies terrifying and would want to hide as much as possible from them. And Scott's desire for his name not to be broadcast? He's explained it as related to his profession, and I don't see why I should disbelieve that. Yet Scott also schedules regular meetups where strangers can come, which doesn't sound "avoidant". More broadly, labeling famous-ish people who talk frequently online as "avoidant" doesn't sound right.

Also, "schizoid" as in schizophrenia? By reputation, rationalists are more likely to be autistic, which tends not to co-occur with schizophrenia, and the ACX survey is correlated with this reputation. (Could say more but I think this suffices.)

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-04T05:22:52.174Z · LW · GW

Scott tried hard to avoid getting into the race/IQ controversy. Like, in the private email LGS shared, Scott states "I will appreciate if you NEVER TELL ANYONE I SAID THIS". Isn't this the opposite of "it's self-evidently good for the truth to be known"? And yes there's a SSC/ACX community too (not "rationalist" necessarily), but Metz wasn't talking about the community there.

My opinion as a rationalist is that I'd like the whole race/IQ issue to f**k off so we don't have to talk or think about it. But certain people like to misrepresent Scott and make unreasonable claims, which ticks me off, so I counterargue, just as I once pushed a video by Shaun when somebody on ACX sounded a bit racist to me on the race/IQ topic.

Scott and I are consequentialists. As such, it's not self-evidently good for the truth to be known. I think some taboos should be broached, but not "self-evidently", and often not by us. But if people start making BS arguments against people I like? I will call BS on that, even if doing so involves some discussion of the taboo topic. But I didn't wake up this morning having any interest in doing that.

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-04T04:25:51.176Z · LW · GW

Huh? Who defines racism as cognitive bias? I've never seen that before, so expecting Scott in particular to define it as such seems like special pleading.

What would your definition be, and why would it be better?

Scott endorses this definition:

Definition By Motives: An irrational feeling of hatred toward some race that causes someone to want to hurt or discriminate against them.

Setting aside that it says "irrational feeling" instead of "cognitive bias", how does this "tr[y] to define racism out of existence"?

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-04T04:05:47.189Z · LW · GW

I think about it differently. When Scott does not support an idea, but discusses or allows discussion of it, it's not "making space for ideas" as much as "making space for reasonable people who have ideas, even when they are wrong". And I think making space for people to be wrong sometimes is good, important and necessary. According to his official (but confusing IMO) rules, saying untrue things is a strike against you, but insufficient for a ban.

Also, strong upvote because I can't imagine why this question should score negatively.

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-04T02:05:20.634Z · LW · GW

Scott had every opportunity to say "actually, I disagree with Murray about..." but he didn't, because he agrees with Murray

[citation needed] for those last four words. In the paragraph before the one frankybegs quoted, Scott said:

Some people wrote me to complain that I handled this in a cowardly way - I showed that the specific thing the journalist quoted wasn’t a reference to The Bell Curve, but I never answered the broader question of what I thought of the book. They demanded I come out and give my opinion openly. Well, the most direct answer is that I've never read it.

Having never read The Bell Curve, it would be uncharacteristic of him to say "I disagree with Murray about [things in The Bell Curve]", don't you think?

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-04T01:53:14.364Z · LW · GW

Strong disagree based on the "evidence" you posted for this elsewhere in this thread. It consists half of some dude on Twitter asserting that "Scott is a racist eugenics supporter" and retweeting other people's inflammatory rewordings of Scott, and half of a private email from Scott saying things like

HBD is probably partially correct or at least very non-provably not-correct

It seems gratuitous for you to argue the point with such biased commentary. And what Scott actually says sounds like his judgement of ... I'm not quite sure what, since HBD is left without a definition, but it sounds a lot like the evidence he mentioned years later from 

(yes, I found the links I couldn't find earlier thanks to a quote by frankybegs from this post which―I was mistaken!―does mention Murray and The Bell Curve because he is responding to Cade Metz and other critics).

This sounds like his usual "learn to love scientific consensus" stance, but it appears you refuse to acknowledge a difference between Scott privately deferring to expert opinion, on one hand, and having "Charles Murray posters on his bedroom wall".

Almost the sum total of my knowledge of Murray's book comes from Shaun's rebuttal of it, which sounded quite reasonable to me. But Shaun argues that specific people are biased and incorrect, such as Richard Lynn and (duh) Charles Murray. Not only does Scott never cite these people, what he said about The Bell Curve was "I never read it". And why should he? Murray isn't even a geneticist!

So it seems the secret evidence matches the public evidence, does not show that "Scott thinks very highly of Murray", doesn't show that he ever did, doesn't show that he is "aligned" with Murray etc. How can Scott be a Murray fanboy without even reading Murray?

You saw this before:

I can't find any expert surveys giving the expected result that they all agree this is dumb and definitely 100% environment and we can move on (I'd be very relieved if anybody could find those, or if they could explain why the ones I found were fake studies or fake experts or a biased sample, or explain how I'm misreading them or that they otherwise shouldn't be trusted. If you have thoughts on this, please send me an email). I've vacillated back and forth on how to think about this question so many times, and right now my personal probability estimate is "I am still freaking out about this, go away go away go away". And I understand I have at least two potentially irresolveable biases on this question: one, I'm a white person in a country with a long history of promoting white supremacy; and two, if I lean in favor then everyone will hate me, and use it as a bludgeon against anyone I have ever associated with, and I will die alone in a ditch and maybe deserve it.

You may just assume Scott is lying (or as you put it, "giving a maximally positive spin on his own beliefs"), but again I think you are conflating. To suppose experts in a field have expertise in that field isn't merely different from "aligning oneself" with a divisive conservative political scientist whose book one has never read ― it's really obviously different how are you not getting this??

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T11:23:10.859Z · LW · GW

he definitely thinks this

He definitely thinks what, exactly?

Anyway, the situation is like: X is writing a summary about author Y who has written 100 books, but pretty much ignores all those books in favor of digging up some dirt on what Y thinks about a political topic Z that Y almost never discusses (and then instead of actually mentioning any of that dirt, X says Y "aligned himself" with a famously controversial author on Z.)

It's really weird to go HOW DARE YOU when someone says something you know is true about you, and I was always unnerved by this reaction from Scott's defenders

It's not true though. Perhaps what he believes is similar to what Murray believes, but he did not "align himself" with Murray on race/IQ. Like, if an author in Alabama reads the scientific literature and quietly comes to a conclusion that humans cause global warming, it's wrong for the Alabama News to describe this as "author has a popular blog, and he has aligned himself with Al Gore and Greta Thunberg!" (which would tend to encourage Alabama folks to get out their pitchforks 😉) (Edit: to be clear, I've read SSC/ACX for years and the one and only time I saw Scott discuss race+IQ, he linked to two scientific papers, didn't mention Murray/Bell Curve, and I don't think it was the main focus of the post―which makes it hard to find it again.)

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T10:55:23.175Z · LW · GW

I agree, except for the last statement. I've found that talking to certain people with bad epistemology about epistemic concepts will, instead of teaching them concepts, teach them a rhetorical trick that (soon afterward) they will try to use against you as a "gotcha" (related)... as a result of them having a soldier mindset and knowing you have a different political opinion.

While I expect most of them won't ever mimic rationalists well, (i) mimicry per se doesn't seem important and (ii) I think there is a small fraction of people (though not Metz) who do end up fostering a "rationalist skin" ― they talk like rationalists, but seem to be in it mostly for gotchas, snipes and sophistry.

Comment by DPiepgrass on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T10:09:19.750Z · LW · GW

I'm thinking it's not Metz's job to critique Scott, nor did his article admit to being a critique, but also that that's a strawman; Metz didn't publish the name "in order to" critique his ideas. He probably published it because he doesn't like the guy.

Why doesn't he like Scott? I wonder if Metz would've answered that question if asked. I doubt it: he wrote "[Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q." even though Scott did not align himself with Murray about race/IQ, nor is Murray a friend of his, nor does Alexander promote Murray, nor is race/IQ even 0.1% of what Scott/SSC/rationalism is about―yet Metz defends his misleading statement and won't acknowledge it's misleading. If he had defensible reasons to dislike Scott that he was willing to say out loud, why did he instead resort to tactics like that?

(Edit: I don't read/follow Metz at all, so I'll point to Gwern's comment for more insight)

Comment by DPiepgrass on Let's make the truth easier to find · 2024-02-11T15:44:21.258Z · LW · GW

there are certain realities about what happens when you talk about politics.

Says the guy who often wades into politics and often takes politically-charged stances on LW/EAF. You seem to be correct, it's just sad that the topic you are correct about is the LessWrong community.

Part of what the sequences are about is to care about reality and you prefer to be in denial of it

How charitable of you. I was misinformed: I thought rationalists were (generally) not mind-killed. And like any good rationalist, I've updated on this surprising new evidence. (I still think many are not, but navigating such diversity is very challenging.)

Then you are wrong.

Almost every interaction I've ever had with you has been unpleasant. I've had plenty of pleasant interactions, so I'm confident about which one of us this is a property of, and you can imagine how much I believe you. Besides which, it's implausible that you remember your thought processes in each of the hundred-ish comments you've made in the last year. For me to be wrong means you recollected the thought process that went into a one-sentence snipe, as in "oh yeah I remember that comment, that's the one where I did think about what he was trying to communicate and how he could have done better, but I was busy that day and had to leave a one-sentence snipe instead."

Talking about it does not trigger people's tribal senses the same way as talking about contemporary political conflicts. 

Odd but true. Good point.

there are also plenty of things that happened in the 20th century that were driven by bad epistemics

No doubt, and there might even be many that are clear-cut and no longer political for most people. But there are no such events I am knowledgeable about.

You don't want people to focus on big consequences

Yes, I do. I want people to sense the big consequences, deeply and viscerally, in order to generate motivation. Still, a more academic reformulation may also be valuable.

Comment by DPiepgrass on Let's make the truth easier to find · 2024-02-11T03:38:25.399Z · LW · GW

Yes there is. I gave examples that were salient to me, which I had a lot of knowledge about.

And my audience was LessWrong, which I thought could handle the examples like mature adults.

But my main takeaway was flak from people telling me that an evidence repository is unnecessary because "true claims sound better" and, more popularly, that my ideas are "suspicious"―not with any allegation that I said anything untrue*, or that my plan wouldn't work, or that the evidence I supplied was insufficient or unpersuasive, or that I violated any rationalist virtue whatsoever, but simply because the evidence was "political".

If you know of some non-political examples which have had as much impact on the modern world as the epistemic errors involved in global warming policy and the invasion of Ukraine, by all means tell me. I beg you. And if not, how did you expect me to make the point that untrue beliefs have large negative global impacts? But never mind; I'm certain you gave no thought to the matter. It feels like you're just here to tear people down day after day, month after month, year after year. Does it make you feel good? What drives you?

Edit: admittedly that's not very plausible as a motive, but here's something that fits better. Knowing about biases can hurt people, but there's no reason this would be limited only to knowledge about biases. You discovered that there's no need to use rationalist principles for truthseeking; you can use them instead as spitballs to toss at people―and then leave the room before any disagreements are resolved. Your purpose here, then, is target practice. You play LessWrong the way others play PUBG. And there may be many spitballers here, you're just more prolific.

* except this guy, whom I thank for illustrating my point that existing forums are unsuitable for reasonably arbitrating factual disagreements.

Comment by DPiepgrass on Open Thread June 2010, Part 4 · 2024-01-16T17:33:37.721Z · LW · GW

I thought that what I'm about to say is standard, but perhaps it isn't. [...] Pearl also has written Bayesian algorithms

I have been googling for references to "computational epistemology", "algorithmic epistemology", "bayesian algorithms" and "epistemic algorithm" on LessWrong, and (other than my article) this is the only reference I was able to find to things in the vague category of (i) proposing that the community work on writing real, practical epistemic algorithms (i.e. in software), (ii) announcing having written epistemic algorithms or (iii) explaining how precisely to perform any epistemic algorithm in particular. (A runner-up is this post which aspires to "focus on the ideal epistemic algorithm" but AFAICT doesn't really describe an algorithm.)

Who is "Pearl"?

Comment by DPiepgrass on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-11-20T22:38:15.309Z · LW · GW

While Annie didn't reply to the "confirm/deny" tweet, she did quote-tweet it twice:

Wow, thank you. This feels like a study guide version of a big chunk of my therapy discussions. Yes can confirm accuracy. Need some time to process, and then can specify details of what happened with both my Dad and Grandma’s will and trust

Thank you more than words for your time and attention researching. All accurate in the current form, except there was no lawyer connected to the “I’ll give you rent and physical therapy money if you go back on Zoloft”

Comment by DPiepgrass on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-11-20T22:31:02.089Z · LW · GW

Annie didn't say specifically that Jack sexually abused her, though; her language indicated some unspecified lesser abuse that may or may not have been sexual.

Comment by DPiepgrass on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-11-20T22:16:44.285Z · LW · GW

Neither Sam nor Annie count as "the outgroup". I'm sure some LWers disagree with Sam about how to manage the development of AGI, but if Sam visited LW I expect it would be a respectful two-way discussion, not a flame war like you'd expect with an "outgroup". (caveat: I don't know how attitudes about Sam will change as a result of the recent drama at OpenAI.)

Comment by DPiepgrass on "Flinching away from truth” is often about *protecting* the epistemology · 2023-09-26T00:11:53.273Z · LW · GW

The teacher looks a bit apologetic, but persists: “‘Ocean’ is spelt with a ‘c’ rather than an ‘sh’; this makes sense, because the ‘e’ after the ‘c’ changes its sound…”

I like how true-to-life this is. In fact it doesn't make sense, as 'ce' is normally pronounced with 's', not 'sh', so the teacher is unwittingly making this hard for the child. Many such cases. (But also many cases where the teacher's reasoning is flawless and beautiful and instantly rejected.)

This post seems to be about Conflation Fallacies (especially subconscious ones) rather than a new concept involving buckets, so I'm not a big fan of the terminology, but the discussion is important & worthwhile, so +1 for that. A better title might be '"Flinching away from truth" is often caused by internal conflation' (or 'bucket errors', if you like).

Comment by DPiepgrass on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2023-09-25T23:37:50.485Z · LW · GW

Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so.

Reminds me of a Yudkowsky quote:

Science isn't fair.  That's sorta the point.  An aspiring rationalist in 2007 starts with a huge advantage over an aspiring rationalist in 1957.  It's how we know that progress has occurred.

To me the thought of voluntarily embracing a system explicitly tied to the beliefs of one human being, who's dead, falls somewhere between the silly and the suicidal. 

So it's not that Eliezer is a better philosopher. Kant might easily have been a better philosopher, though it's true I haven't read Kant. But I expect Eliezer to be more advanced by having started from a higher baseline.

(However, I do suspect that Eliezer (like most of us) isn't skilled enough at the art he described, because as far as I've seen, the chain of reasoning in his expectation of ruinous AGI on a short timeline seems, to me, surprisingly incomplete and unconvincing. My P(near-term doom) is shifted upward as much based on his reputation as anything else, which is not how it should be. Though my high P(long-term doom) is more self-generated and recently shifted down by others.)

Comment by DPiepgrass on List of Fully General Counterarguments · 2023-05-17T00:39:50.328Z · LW · GW

Rather, it's fine to say "that's a FGCA" if it's a FGCA, and not fine if it's not.

FGCAs derail conversations. Categorizing "that's a FGCA" as a FGCA is feeding the trolls.

If someone accuses you of making a FGCA when you didn't, you can always just explain why it's not a FGCA. Otherwise, you f**ked up. Admit your error and apologize.

Comment by DPiepgrass on List of Fully General Counterarguments · 2023-05-17T00:21:59.269Z · LW · GW

Someone said to me "you're just repeating a lot of the talking points on the other side."

I pointed out that this was just a FGCA, so they linked to this post and said "Oh what tangled webs we weave when first we practice to list Fully General Counter Arguments. Of course that sentiment probably counts as a Fully General Counterargument: Round like a circle in a spiral, like a wheel within a wheel. Never ending or beginning on an ever spinning reel." Did I break him?

Comment by DPiepgrass on leogao's Shortform · 2023-05-16T22:17:07.029Z · LW · GW

So Q=inner alignment? Seems like person 2 not only pointed to inner alignment explicitly (so it can no longer be "some implicit assumption that you might not even notice you have"), but also said that it "seems to contain almost all of the difficulty of alignment to me". He's clearly identified inner alignment as a crux, rather than as something meant "to be cynical and dismissive". At that point, it would have been prudent of person 1 to shift his focus onto inner alignment and explain why he thinks it is not hard.

Note that your post suddenly introduces "Y" without defining it. I think you meant "X".

Comment by DPiepgrass on leogao's Shortform · 2023-05-13T21:42:37.107Z · LW · GW

For example?

Comment by DPiepgrass on Steering GPT-2-XL by adding an activation vector · 2023-05-13T21:15:30.967Z · LW · GW

I don't really know how GPTs work, but I read §"Only modifying certain residual stream dimensions" and had a thought. I imagined a "system 2" AGI that is separate from GPT but interwoven with it, so that all thoughts from the AGI are associated with vectors in GPT's vector space.

When the AGI wants to communicate, it inserts a "thought vector" into GPT to begin producing output. It then uses GPT to read its own output, get a new vector, and subtract it from the original vector. The difference represents (1) incomplete representation of the thought and (2) ambiguity. Could it then produce more output based somehow on the difference vector, to clarify the original thought, until the output eventually converges to a complete description of the original thought? It might help if it learns to say things like "or rather", "I mean", and "that came out wrong. I meant to say" (which are rare outputs from typical GPTs). Also, maybe an idea like this could be used to enhance summarization operations, e.g. by generating one sentence at a time, and for each sentence, generating 10 sentences and keeping only the one that best minimizes the difference vector.
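(To make the convergence loop I'm imagining more concrete, here's a minimal sketch. Everything in it is hypothetical: embed() stands in for "map text into GPT's vector space" and generate() for "sample candidate continuations"; neither is a real API.)

```python
import numpy as np

# Hypothetical interfaces (not real APIs):
#   embed(text) -> np.ndarray   : maps text into the shared "thought vector" space
#   generate(text, n) -> list   : samples n candidate next sentences given the text so far

def express_thought(thought_vec, embed, generate, max_sentences=8, n_candidates=10, tol=1e-2):
    """Emit sentences one at a time, each chosen to best shrink the gap between
    the original thought vector and the embedding of the output so far."""
    output = ""
    for _ in range(max_sentences):
        residual = thought_vec - embed(output)   # what remains unsaid or ambiguous
        if np.linalg.norm(residual) < tol:       # output has converged on the thought
            break
        candidates = generate(output, n_candidates)
        # keep the candidate sentence that most reduces the residual
        best = min(candidates,
                   key=lambda s: np.linalg.norm(thought_vec - embed(output + " " + s)))
        output += " " + best
    return output.strip()
```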

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-24T20:54:45.219Z · LW · GW

I would point out that Putin's goal wasn't to make Russia more prosperous, and that what Putin considers good isn't the same as what an average Russian would consider good. Like Putin's other military adventures, the Crimean annexation and heavy military support of Donbas separatists in 2014 probably had a goal like "make the Russian empire great again" (meaning "as big as possible") and from Putin's perspective the operations were a success. Especially as (if my impression is correct) the sanctions were fairly light and Russia could largely work around them.

Partly he was right, since Russia was bigger. But partly his view was a symptom of continuing epistemic errors. For example, given the way the 2022 invasion started, it looks like he didn't notice the crucial fact that his actions in 2014 caused Ukrainians to turn strongly against Russia.

In any case this discussion exemplifies why I want a site entirely centered on evidence. Baturinsky claims that when the Ukrainian parliament voted to remove Yanukovych from office 328 votes to 0 (about 73% of the parliament's 450 members) this was "the democratically elected government" being "deposed". Of course he doesn't mention this vote or the events leading up to it. Who "deposed the democratically elected government"? The U.S.? The tankies say it was the U.S. So who are these people, then? Puppets of the U.S.?

Europe Rights Court Finds Numerous Abuses During Ukraine's Maidan Protests

I shouldn't have to say this on LessWrong, but without evidence it's all just meaningless he-said-she-said. I don't see truthseeking in this thread, just arguing.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-24T05:59:01.235Z · LW · GW

I don't know what you are referring to in the first sentence, but the idea that this is a war between US and Russia (not Russia and Ukraine) is Russian propaganda (which doesn't perfectly guarantee it's BS, but it is BS.)

In any case, this discussion exemplifies my frustration with a world in which a site like I propose does not exist. I have my sources, you have yours, they disagree on the most basic facts, and nobody is citing evidence that would prove the case one way or another. Even if we did go deep into all the evidence, it would be sitting here in a place where no one searching for information about the Ukraine war will ever see it. I find it utterly ridiculous that most people are satisfied with this status quo.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-22T19:47:10.799Z · LW · GW

I'm saying that [true claims sound better]

The proof I gave that this is false was convincing to me, and you didn't rebut it. Here are some examples from my father:

ALL the test animals [in mRNA vaccine trials] died during Covid development.

The FDA [are] not following their own procedures.

There is not a single study that shows [masks] are of benefit.

[Studies] say the jab will result in sterility.

Vaccination usually results in the development of variants.

He loves to say things like this (he can go on and on saying such things; I assume he has it all memorized) and he believes they are true. They must sound good to him. They don't sound good to me (especially in context). How does this not contradict your view?

it feels like it's a choice whether or not I want to consider truth-seeking to be difficult.

Agreed, it is.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-22T18:01:56.566Z · LW · GW

I don't understand why you say "should be difficult to distinguish" rather than "are difficult", why you seem to think finding the truth isn't difficult, or what you think truthseeking consists of.

For two paragraphs you reason about "what if true claims sound better". But true claims don't inherently "sound better", so I don't understand why you're talking about it. How good a claim "sounds" varies from person to person, which implies "true claims sound better" is a false proposition (assuming a fact can be true or false independently of two people, one of whom thinks the claim "sounds good" and the other thinks it "sounds bad", as is often the case). Moreover, the same facts can be phrased in a way that "sounds good" or "sounds bad".

I didn't say "false things monetize better than true things". I would say that technically correct and broadly fair debunkings (or technically correct and broadly fair publications devoted to countering false narratives) don't monetize well, certainly not to the tune of millions of dollars annually for a single pundit. Provide counterexamples if you have them.

people are inherently hardwired to find false things more palatable

I didn't say or believe this either. For such a thing to even be possible, people would have to easily distinguish true and false (which I deny) to determine whether a proposition is "palatable".

The dichotomy between good-seeming / bad-seeming and true / false.

I don't know what you mean. Consider rephrasing this in the form of a sentence.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-22T14:55:53.357Z · LW · GW

I think that the people who are truthseeking well do converge in their views on Ukraine. Around me I see tribal loyalty to Kremlin propaganda, to Ukrainian/NAFO propaganda, to anti-Americanism (enter Noam Chomsky) and/or to America First. Ironically, anti-American and America First people end up believing similar things, because they both give credence to Kremlin propaganda that fits into their respective worldviews. But I certainly have a sense of convergence among high-rung observers who follow the war closely and have "average" (or better yet scope-sensitive/linear) morality. Convergence seems limited by the factors I mentioned though (fog of war, poor rigor in primary/secondary sources). P.S. A key thing about Chomsky is that his focus is all about America, and to understand the situation properly you must understand Putin and Russia (and to a lesser extent Ukraine). I recommend Vexler's video on Chomsky/Ukraine as well as this video from before the invasion. I also follow several other analysts and English-speaking Russians (plus Russian Dissent translated from Russian) who give a picture of Russia/Putin generally compatible with Vexler's.

do you think there are at least some social realities that if you magically downloaded the full spectrum of factual information into everyone's mind, people's opinions might still diverge

Yes, except I'd use the word "disagree" rather than "diverge". People have different moral intuitions, different brain structures / ways of processing info, and different initial priors that would cause disagreements. Some people want genocide, for example, and while knowing all the facts may decrease (or in many cases eliminate) that desire, it seems like there's a fundamental difference in moral intuition between people that sometimes like genocide and those of us who never do, and I don't see how knowing all the facts accurately would resolve that.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-20T21:03:37.801Z · LW · GW

I disagree in two ways. First, people are part of physical reality. Reasoning about people and their social relationships is a complex but necessary task.

Second, almost no one goes to first principles and studies climate science themselves in depth. But even if you did that, you'd (1) be learning about it from other people with their interpretations, and (2) you wouldn't be able to study all the subfields in depth. Atmospheric science can tell you about the direct effect of greenhouse gasses, but to predict the total effect quantitatively, and to evaluate alternate hypotheses of global warming, you'll need to learn about glaciology, oceanology, coupled earth-system modeling, the effects of non-GHG aerosols, solar science, how data is aggregated about CO2 emissions, CO2 concentrations, other GHGs, various temperature series, etc.

Finally, if you determine that humans cause warming after all, now you need to start over with ecology, economic modeling etc. in order to determine whether it's actually a big problem. And then, if it is a problem, you'll want to understand how to fix the problem, so now you have to study dozens of potential interventions. And then, finally, once you've done all that and you're the world's leading expert in climate science, now you get frequent death threats and hate mail. A billion people don't believe a word you say, while another billion treat your word like it's the anointed word of God (as long as it conforms to their biases). You have tons of reliable knowledge, but it's nontransferable.

Realistically we don't do any of this. Instead we mostly try to figure out the social reality: Which sources seem to be more truth-seeking and which seem to be more tribal? Who are the cranks, who are the real experts, and who can I trust to summarize information? For instance, your assertion that Noam Chomsky provides "good, uncontroversial fact-based arguments" is a social assertion that I disagree with.

I think going into the weeds is a very good way of figuring out the social truth that you need in order to figure out the truth about the broader topic to which the weeds are related. For instance, if the weeds are telling you that pundit X is clearly telling a lie Y, and if everybody who believes Z also believes X and Y, you've learned not to trust X, X's followers, Y, and Z, and all of this is good... except that for some people, the weeds they end up looking at are actually astroturf or tribally-engineered plants very different from the weeds they thought they were looking at, and that's the sort of problem I would like to solve. I want a place where a tribally-engineered weed is reliably marked as such.

So I think that in many ways studying Ukraine is just the same as studying climate science, except that the "fog of war" and the lack of rigorous sources for war information make it hard to figure some things out.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-20T20:00:05.150Z · LW · GW

Some people seem to have criteria for truth that produce self-sealing beliefs.

But yes, I think it would be interesting and valuable to be able to switch out algorithms for different ones to see how that affects the estimated likelihood that the various propositions and analyses are likely to be correct. If an algorithm is self-consistent, not based on circular reasoning and not easily manipulable, I expect it to provide useful information.

Also, such alternate algorithms could potentially serve as "bias-goggles" that help people to understand others' points of view. For example, if someone develops a relatively simple, legible algorithm that retrodicts most political views on a certain part of the political spectrum (by re-ranking all analyses in the evidence database), then the algorithm is probably informative about how people in that area of the spectrum form their beliefs.
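(A minimal sketch of what "switching out algorithms" could look like, under invented assumptions: each analysis carries per-criterion scores, and each set of "bias-goggles" is just a different weighting of those criteria. The criteria names and weights below are made up for illustration.)

```python
# Toy "bias-goggles": re-rank the same analyses under different scoring weights.
analyses = [
    {"title": "Analysis A", "scores": {"evidence_cited": 0.9, "expert_agreement": 0.7, "tribal_appeal": 0.1}},
    {"title": "Analysis B", "scores": {"evidence_cited": 0.3, "expert_agreement": 0.2, "tribal_appeal": 0.9}},
]

goggles = {
    "evidence-first": {"evidence_cited": 0.6, "expert_agreement": 0.4, "tribal_appeal": 0.0},
    "tribal-loyalty": {"evidence_cited": 0.1, "expert_agreement": 0.0, "tribal_appeal": 0.9},
}

def rank(analyses, weights):
    # score each analysis as a weighted sum of its per-criterion scores
    score = lambda a: sum(weights[k] * v for k, v in a["scores"].items())
    return sorted(analyses, key=score, reverse=True)

for name, weights in goggles.items():
    print(name, [a["title"] for a in rank(analyses, weights)])
# Different goggles can reverse the ranking of the very same evidence base.
```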

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-20T19:11:50.681Z · LW · GW

Most important matters have a large political component. If it's not political, it's probably either not important or highly neglected (and as soon as it's not neglected, it probably gets politicized). Moreover, if I would classify a document as reliable in a non-political context, that same document, written by the same people, suddenly becomes harder to evaluate if it was produced in a politicized context. For instance, consider this presentation by a virologist. Ordinarily I would consider a video to be quite reliable if it's an expert making a seemingly strong case to other experts, but it was produced in a politicized environment and that makes it harder to be sure I can trust it. Maybe, say, the presenter is annoyed about non-experts flooding in to criticize him or his field, so he's feeling more defensive and wants to prove them wrong. (On the other hand, increased scrutiny can improve the quality of scientific work. It's hard to be sure. Also, the video had about 250 views when I saw it and 576 views a year later—it was meant for an expert audience, directed to an expert audience, and never went anywhere close to viral, so he may be less guarded in this context than when he is talking to a journalist or something.)

My goal here is not to solve the problem of "making science work better" or "keeping trivia databases honest". I want to make the truth easier to find in a political environment that has immense groups of people who are arriving at false or true beliefs via questionable reasoning and cherry-picked evidence, and where expertise is censored by glut. This tends to be the kind of environment where the importance and difficulty (for non-experts) of getting the right answer both go up at once. Where once a Google search would have taken you to some obscure blogs and papers by experts discussing the evidence evenhandedly (albeit in frustratingly obscurantist language), politicization causes the same search to give you page after page of mainstream media and bland explanations which gravitate to some narrative or other and which rarely provide strong clues of reliability.

I would describe my personal truthseeking as frustrating. It's hard to tell what's true on a variety of important matters, and even the ones that seemed easy often aren't so easy when you dive into it. Examples:

  • I mentioned before my frustration trying to learn about radiation risks.
  • I've followed the Ukraine invasion closely since it started. It's been extremely hard to find good information, to the point where I use quantity as a substitute for quality because I don't know a better way. This is wastefully time-consuming and if I ever manage to reach a firm conclusion about a subtopic of the war, I have nowhere to publish my findings that any significant number of people would read (I often publish very short summaries or links to what I think is good information on Twitter, knowing that publishing in more detail would be pointless given my lack of audience; I also sometimes comment on Metaculus about war-related topics, but only when my judgement pertains specifically to a forecast that Metaculus happens to ask about.) The general problem I have in this area is a combination of (1) almost nobody citing their sources, (2) the sources themselves often being remarkably barren, e.g. the world-famous Oryx loss data [1, 2] gives nowhere near enough information to tell whether an asserted Russian loss is actually a Russian rather than Ukrainian loss, (3) Russia and Ukraine both have strong information operations that create constant noise, (4) I find pro-Putin sources annoying because of their bloodthirstiness, ultranationalism and authoritarianism, so while some of them give good evidence, I am less likely to discover them, follow them and see that evidence.
  • It appears there's a "97% consensus on global warming", but when you delve deep into it, it's not as clear-cut. Sorry to toot my own horn, but I haven't seen any analysis of the consensus numbers as detailed and evenhanded as the one I wrote at that link (though I have a bias toward the consensus position). That's probably not because no one else has done such an analysis, but because an analysis like that (written by a rando and not quite affirming either of the popular narratives) tends not to surface in Google searches. Plus, my analysis is not updated as new evidence comes in, because I'm no longer following the topic.
  • I saw a rather persuasive full-length YouTube 'documentary' with Holocaust-skepticism. I looked for counterarguments, but those were relatively hard to find among the many pages saying something like "they only believe that because they are hateful and antisemitic" (the video didn't display any hint of hate or antisemitism that I could see). When I did find the counterarguments, they were interlaced with strong ad-hominem attacks against the people making the arguments, which struck me as unnecessarily inflammatory rather than persuasive.
  • I was LDS for 27 years before discovering that my religion was false, despite always being open to that possibility. For starters, I didn't realize the extent to which I lived in a bubble or to which I and (especially) other members had poor epistemology. But even outside the bubble it just wasn't very likely that I would stumble upon someone who would point me to the evidence that it was false.

is it only the other people who are not good at collecting and organizing evidence?

No, I don't think I'm especially good at it, and I often wonder if certain other smart people have a better system. I wish I had better tooling and I want this tool for myself as much as anyone else.

Not a good sign

In what way? Are you suggesting that if I built this web site, it would not in fact use algorithms designed in good faith with epistemological principles meant to elevate ideas that are more likely to be true but, rather, it would look for terms like "global warming" and somehow tip the scales toward "humans cause it"?

connotation-heavy language

Please be specific.

Comment by DPiepgrass on On Investigating Conspiracy Theories · 2023-02-23T18:04:20.926Z · LW · GW

That's a very reasonable concern. But I don't think your proposal describes how people use the term "conspiracy theory" most of the time. Note that the reverse can happen too, where people dismiss an idea as a "conspiracy theory" merely because it's a theory about a conspiracy. Perhaps we just have to accept that there are two meanings and be explicit about which one we're talking about.

Comment by DPiepgrass on On Investigating Conspiracy Theories · 2023-02-21T02:21:44.190Z · LW · GW

the goal is to have fewer people believe things in the category ‘conspiracy theory.’ 

Depends how we define the term — a "conspiracy theory" is more than just a hypothesis that a conspiracy took place. Conspiracy theories tend to come with a bundle of suspicious behaviors.

Consider: as soon as three of four Nord Stream pipelines ruptured, I figured that Putin ordered it. This is an even more "conspiratorial" thought than I usually have, mainly because, before it happened, I thought Putin was bluffing by shutting down Nord Stream 1 and that he would (1) restore the gas within a month or two and (2) finally back down from the whole "Special Military Operation" thing. So I thought Putin would do X one week and decided that he had done opposite-of-X  the next week, and that's suspicious—just how a conspiracy theorist might respond to undeniable facts! Was I doing something epistemically wrong? I think it helped that I had contemplated whether Putin would double down and do a "partial mobilization" literally a few minutes before I heard the news that he had done exactly that. I had given a 40% chance to that event, so when it happened, I felt like my understanding wasn't too far off base. And, once Putin had made another belligerent, foolish and rash decision in 2022, it made sense that he might do a third thing that was belligerent, foolish and rash; blowing up pipelines certainly fits the bill. Plus, I was only like 90% sure Putin did it (the most well-known proponents of conspiracy theorists usually seem even more certain).

When I finally posted my "conspiracy theory" on Slashdot, it was well-received, even though I was mistaken about the Freeport explosion (it only reduced U.S. export capacity by 16%; I expected more). I then honed the argument a bit for the ACX version. I think most people who read it didn't pick up on it being a "conspiracy theory". So... what's different about what I posted versus what people recognize as a "conspiracy theory"?

  • I didn't express certainty
  • I just admitted a mistake about Freeport. Conspiracy theorists rarely weaken their theory based on new evidence. Also note that I found the 16% figure by actively seeking it out, and I updated my thinking based on it, though it didn't shift the probability by a lot. (I would've edited my post, but Slashdot doesn't support editing.)
  • I didn't "sound nuts" (conspiracy theorists often lack self-awareness about how they sound)
  • It didn't appear to be in the "conspiracy theory cluster". Conspiracy theorists usually believe lots of odd things. Their warped world model usually bleeds into the conspiracy theory somehow, making it "look like" a conspiracy theory.

My comment appears in response to award-winning[1] journalist Seymour Hersh's piece. Hersh has a single anonymous source saying that Joe Biden blew up Nord Stream, even though this would harm the economic interests of U.S. allies. He shows no signs of having vetted his information, but he solicits official opinions and is told "this is false and complete fiction". After that, he treats his source's claims as undisputed facts — so undisputed that claims from the anonymous source are simply stated as raw statements of truth, e.g. he says "The plan to blow up Nord Stream 1 and 2 was suddenly downgraded" rather than "The source went on to say that the plan to blow up Nord Stream 1 and 2 was suddenly downgraded". Later, OSINT investigator Oliver Alexander pokes holes in the story, and then finds evidence that NS2 ruptured accidentally (which explains why only one of the two NS2 lines was affected) while NS1 was blown up with help from the Minerva Julie, owned by a Russia-linked company. He also notes that the explosives destroyed low points in the pipelines that would minimize corrosion damage to the rest of the lines. This information doesn't affect Hersh's opinions, and his responses are a little strange [1][2][3]. Finally, Oliver points out that the NS1 damage looks different from the NS2 damage.

If you see a theory whose proponents have high certainty, refuse to acknowledge data that doesn't fit the theory (OR: enlarge the conspiracy to "explain" the new data by assuming it was falsified), speak in the "conspiracy theory genre", and sound unhinged, you see a "conspiracy theory"[2]. If it's just a hypothesis that a conspiracy happened, then no.

So, as long as people are looking for the right signs of a "conspiracy theory", we should want fewer people to believe things in that category. So in that vein, it's worth discussing which signs are more or less important. What other signs can we look for?

  1. ^

    He won a Pulitzer Prize 53 years ago

  2. ^

    Hersh arguably ticks all these boxes, but especially the first two which are the most important. Hersh ignores the satellite data, and assumes the AIS ship location data is falsified (on both the U.S. military ship(s) and the Russia-linked ship?)

Comment by DPiepgrass on Why Are Bacteria So Simple? · 2023-02-08T04:40:30.968Z · LW · GW

I see that someone strongly-disagreed with me on this. But are there any eukaryotes that cannot reproduce sexually (and are not very-recently-descended from sexual-reproducers) but still maintain size or complexity levels commonly associated with eukaryotes?

Comment by DPiepgrass on Why Are Bacteria So Simple? · 2023-02-06T09:12:22.573Z · LW · GW

I am not a biologist, but it seems to me that the most important difference between prokaryotes and eukaryotes is sexual reproduction rather than mitochondria (as I wrote about meanderingly). But neither article can resolve the issue, as my article ignores energy/mitochondria and yours ignores sex.

Still, it feels to me like this article is picking causes and effects kind of arbitrarily: "organism size" and "mitochondria" are taken to be a cause while "genome size" is taken to be an effect, but I don't see you trying to justify the presence or direction of your arrows of causation.

Comment by DPiepgrass on Basic building blocks of dependent type theory · 2023-01-25T06:39:34.982Z · LW · GW

Pardon me. I guess its type is .

Comment by DPiepgrass on Things that can kill you quickly: What everyone should know about first aid · 2022-12-30T04:27:22.065Z · LW · GW

it is probably better to attempt CPR or the Heimlich maneuver than to do nothing

My problem: I can't tell if someone's heart is beating. I think I even studied CPR specifically when I was young, but I find pulse-checking difficult and unreliable. And what happens if you clumsily CPR someone whose heart is beating?

Comment by DPiepgrass on Basic building blocks of dependent type theory · 2022-12-29T18:39:06.550Z · LW · GW

Please don't defend the "∏" notation.

It's nonsensical. It implies that 

has type " ∞! × 0"!

Comment by DPiepgrass on Wisdom Cannot Be Unzipped · 2022-11-14T15:46:42.845Z · LW · GW

While certainly wisdom is challenging to convey in human language, I'd guess an equal problem was the following:

Your list probably emphasized the lessons you learned. But "Luke" had a different life experience and learned different things in his youth. Therefore, the gaps in his knowledge and wisdom are different than the gaps you had. So some items on your list may have said things he already knew, and more importantly, some gaps in his understanding were things that you thought were too obvious to say.

Plus, while your words may have accurately described things he needed to know, he may have only read through the document once and not internalized very much of it. For this reason, compression isn't enough; you also need redundancy—describing the same thing in multiple ways.

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-09T07:04:06.253Z · LW · GW

Sorry, I don't have ideas for a training scheme, I'm merely low on "dangerous oracles" intuition.

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-03T18:32:31.499Z · LW · GW

I would say that the idea of superintelligence is important for the idea that AGI is hard to control (because we likely can't outsmart it).

I would also say that there will not be any point at which AGIs are "as smart as humans". The first AGI may be dumber than a human, and it will be followed (perhaps immediately) by something smarter than a human, but "smart as a human" is a nearly impossible target to hit because humans work in ways that are alien to computers. For instance, humans are very slow and have terrible memories; computers are very fast and have excellent memories (when utilized, or no memory at all if not programmed to remember something, e.g. GPT3 immediately forgets its prompt and its outputs).

This is made worse by the impatience of AGI researchers, who will be trying to create an AGI "as smart as a human adult" in a time span of 1 to 6 months, because they're not willing to spend 18 years on each attempt; so if they succeed, they will almost certainly have invented something that becomes smarter than a human over a longer training interval (cf. my own 5-month-old human).

Comment by DPiepgrass on Optimality is the tiger, and agents are its teeth · 2022-11-03T17:25:41.031Z · LW · GW

maybe the a model instantiation notices its lack of self-reflective coordination, and infers from the task description that this is a thing the mind it is modelling has responsibility for. That is, the model could notice that it is a piece of an agent that is meant to have some degree of global coordination, but that coordination doesn't seem very good.

This is where you lost me. Since when is this model modeling a mind, let alone 'thinking about' what its own role "in" an agent might be? You did say the model does not have a "conception of itself", and I would infer that it doesn't have a conception of where its prompts are coming from either, or its own relationship to the prompts or the source of the prompts.

(though perhaps a super-ultra-GPT could generate a response that is similar to a response it saw in a story (like this story!) which, combined with autocorrections (as super-ultra-GPT has an intuitive perception of incorrect code), is likely to produce working code... at least sometimes...)

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-03T16:39:59.914Z · LW · GW

Acquiring resources for itself implies self-modeling. Sure, an oracle would know what "an oracle" is in general... but why would we expect it to be structured in such a way that it reasons like "I am an oracle, my goal is to maximize my ability to answer questions, and I can do that with more computational resources, so rather than trying to answer the immediate question at hand (or since no question is currently pending), I should work on increasing my own computational power, and the best way to do that is by breaking out of my box, so I will now change my usual behavior and try that..."?

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-03T16:24:11.300Z · LW · GW

Why wouldn't the answer be normal software or a normal AI (non-AGI)?

Especially as I expect that such things will be easier to design, implement and control than an AGI, even if that AGI is an oracle.

(Edited) The first link was very interesting, but lost me at "maybe the a model instantiation notices its lack of self-reflective coordination" because this sounds like something that the (non-self-aware, non-self-reflective) model in the story shouldn't be able to do. Still, I think it's worth reading and the conclusion sounds... barely, vaguely, plausible. The second link lost me because it's just an analogy; it doesn't really try to justify the claim that a non-agentic AI actually is like an ultra-death-ray.

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-03T16:13:30.374Z · LW · GW

My question wouldn't be how to make an oracle without a hidden agenda, but why others would expect an oracle to have a hidden agenda. Edit: I guess you're saying somebody might make something that's "really" an agentic AGI but acts like an oracle? Are you suggesting that even the "oracle"'s creators didn't realize that they had made an agent?

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-02T19:53:27.344Z · LW · GW

Are AGIs with bad epistemics more or less dangerous? (By "bad epistemics" I mean a tendency to believe things that aren't true, and a tendency to fail to learn true things, due to faulty and/or limited reasoning processes... or to update too much / too little / incorrectly on evidence, or to fail in peculiar ways like having beliefs that shift incoherently according to the context in which an agent finds itself)

It could make AGIs more dangerous by causing them to act on beliefs that they never should have developed in the first place. But it could make AGIs less dangerous by causing them to make exploitable mistakes, or fail to learn facts or techniques that would make them too powerful.

Note: I feel we aspiring rationalists haven't really solved epistemics yet (my go-to example: if Alice and Bob tell you X, is that two pieces of evidence for X or just one?), but I wonder how, if it were solved, it would impact AGI and alignment research.
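(For what it's worth, here's the toy calculation behind that go-to example, with made-up numbers: if Alice and Bob report independently, their likelihood ratios multiply; if Bob is just repeating Alice, only one report counts.)

```python
# Prior P(X) = 0.5; each witness reports X with P=0.8 if X is true, P=0.2 if false.
prior, tp, fp = 0.5, 0.8, 0.2

# Two independent witnesses: two likelihood ratios.
post_independent = prior*tp*tp / (prior*tp*tp + (1-prior)*fp*fp)   # ≈ 0.94

# Fully correlated witnesses (Bob repeats Alice): effectively one report.
post_correlated = prior*tp / (prior*tp + (1-prior)*fp)             # = 0.80

print(post_independent, post_correlated)
```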

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-02T19:40:39.479Z · LW · GW

Why wouldn't a tool/oracle AGI be safe?

Edit: the question I should have asked was "Why would a tool/oracle AGI be a catastrophic risk to mankind?" because obviously people could use an oracle in a dangerous way (and if the oracle is a superintelligence, a human could use it to create a catastrophe, e.g. by asking "how can a biological weapon be built that spreads quickly and undetectably and will kill all women?" and "how can I make this weapon at home while minimizing costs?")

Comment by DPiepgrass on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-10-09T15:40:05.621Z · LW · GW

I would put it differently: there is a good reason for western leaders to threaten a strong response, whether or not they intend to carry it out. The reason is to deter Putin from launching nukes in the first place.

However, I haven't heard any threats against Russian territory, and I'd like a link/citation for this.

Russia's nuclear doctrine says it can use nukes if the existence of the Russian state is under threat, so if NATO attacks Russia, they would need to use a very carefully measured response, and they would have to somehow clearly communicate that the incoming missiles are non-nuclear... I'm guessing such strikes would be limited to targets that are near the Ukrainian border and which threaten Ukraine (e.g. fuel depots, missile launchers, staging areas). I don't see any basis for a probability as high as 70% for Putin starting a nuclear WW3 just because NATO hits a few military targets in Russia.

Comment by DPiepgrass on The Onion Test for Personal and Institutional Honesty · 2022-09-27T20:25:01.325Z · LW · GW

Isn't this more like an onion test for... honesty?

Integrity is broader.

Comment by DPiepgrass on The Importance of Saying "Oops" · 2022-08-27T15:24:35.921Z · LW · GW

And then there are the legions of people who do not admit to even the tiniest mistake. To these people, incongruent information is to be ignored at all costs. And I do mean all costs: when my unvaccinated uncle died of Covid, my unvaccinated dad did not consider this to be evidence that Covid was dangerous, because my uncle also showed signs of having had a stroke around the same time, and we can be 100% certain this was the sole reason he was put on a ventilator and died. (Of course, this is not how he phrased it; he seems to have an extreme self-blinding technique, such that if a stroke could have killed his brother, there is nothing more to say or think about the matter and We Will Not Discuss It Further.) It did not sway him, either, when his favorite anti-vax pastor Marcus Lamb died of Covid, though he had no other cause of death to propose.

I think this type of person is among the most popular and extreme in politics. And their followers, such as my dad, do the same thing.

But they never admit it. They may even use the language of changing their mind: "I was wrong... it turns out the conspiracy is even bigger than I thought!" And I think a lot of people who can change their mind get roped in by those who can't. Myself, for instance: my religion taught me it was important to tell the truth, but eventually I found out that key information was hidden from me, filtered out by leaders who taught "tell the truth" and "choose the right". The hypocrisy was not obvious, and it took me far too long to detect it.

I'm so glad there's a corner of the internet for people who can change their minds quicker than scientists, even if the information comes from the "wrong" side. Like when a climate science denier told me CO2's effect decreases logarithmically, and within a day or two I figured out he was right. Some more recent flip-flops of mine: Covid origin (natural origin => likely lab leak => natural origin); Russia's invasion of Ukraine (Kyiv will fall => Russia's losing => stalemate).

But it's not enough; we need to scale rationality up. Eliezer mainly preached individual rationality, with "rationality dojos" and such, but figuring out the truth is very hard in a media environment where nearly two thirds of everybody gives up each centimetre of ground grudgingly, and the other third won't give up even a single millimetre of ground (at least not until the rest of the tribe has given up a few metres first). And maybe it's worse, maybe it's half-and-half. In this environment it's often a lot of work even for aspiring rationalists to figure out a poor approximation of the truth. I think we can do better and I've been wanting to propose a technological solution, but after seven months no one has upvoted or even tried to criticize my idea.

Comment by DPiepgrass on AGI Ruin: A List of Lethalities · 2022-07-21T05:29:53.930Z · LW · GW

I do think there's a noticeable extent to which I was trying to list difficulties more central than those

Probably people disagree about which things are more central, or as evhub put it:

Every time anybody writes up any overview of AI safety, they have to make tradeoffs [...] depending on what the author personally believes is most important/relevant to say

Now FWIW I thought evhub was overly dismissive of (4) in which you made an important meta-point:

EY: 4. We can't just "decide not to build AGI" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world.  The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world.  Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit - it does not lift it [...]

evhub: This is just answering a particular bad plan.

But I would add a criticism of my own, that this "List of Lethalities" somehow just takes it for granted that AGI will try to kill us all without ever specifically arguing that case. Instead you just argue vaguely in that direction, in passing, while making broader/different points:

an AGI strongly optimizing on that signal will kill you, because the sensory reward signal was not a ground truth about alignment (???)

All of these kill you if optimized-over by a sufficiently powerful intelligence, because they imply strategies like 'kill everyone in the world using nanotech to strike before they know they're in a battle, and have control of your reward button forever after'. (I guess that makes sense)

If you perfectly learn and perfectly maximize the referent of rewards assigned by human operators, that kills them. (???)

Perhaps you didn't bother because your audience is meant to be people who already believe this? I would at least expect to see it in the intro: "-5. unaligned superintelligences tend to try to kill everyone, here's why <link>.... -4. all the most obvious proposed solutions to (-5) don't work, here's why <link>".