Posts

ACX September Meetup 2023-08-28T17:40:17.023Z
Calgary ACX Meetup 2023-06-16T04:49:47.554Z
Calgary, Alberta, Canada – ACX Meetups Everywhere Spring 2023 2023-04-10T22:01:36.181Z
Let's make the truth easier to find 2023-03-20T04:28:41.405Z
Calgary, AB – ACX Meetups Everywhere 2022 2022-08-24T22:57:16.140Z
Where do you live? 2021-10-31T17:07:31.294Z
Trust and The Small World Fallacy 2021-10-04T00:38:45.208Z
Calgary, AB – ACX Meetups Everywhere 2021 2021-08-23T08:45:03.493Z
Covid vaccine safety: how correct are these allegations? 2021-06-13T03:08:23.858Z

Comments

Comment by DPiepgrass on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-11-20T22:38:15.309Z · LW · GW

While Annie didn't reply to the "confirm/deny" tweet, she did quote-tweet it twice:

Wow, thank you. This feels like a study guide version of a big chunk of my therapy discussions. Yes can confirm accuracy. Need some time to process, and then can specify details of what happened with both my Dad and Grandma’s will and trust

Thank you more than words for your time and attention researching. All accurate in the current form, except there was no lawyer connected to the “I’ll give you rent and physical therapy money if you go back on Zoloft”

Comment by DPiepgrass on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-11-20T22:31:02.089Z · LW · GW

Annie didn't say specifically that Jack sexually abused her, though; her language indicated some unspecified lesser abuse that may or may not have been sexual.

Comment by DPiepgrass on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-11-20T22:16:44.285Z · LW · GW

Neither Sam nor Annie counts as "the outgroup". I'm sure some LWers disagree with Sam about how to manage the development of AGI, but if Sam visited LW I expect it would be a respectful two-way discussion, not a flame war like you'd expect with an "outgroup". (Caveat: I don't know how attitudes about Sam will change as a result of the recent drama at OpenAI.)

Comment by DPiepgrass on "Flinching away from truth” is often about *protecting* the epistemology · 2023-09-26T00:11:53.273Z · LW · GW

The teacher looks a bit apologetic, but persists: “‘Ocean’ is spelt with a ‘c’ rather than an ‘sh’; this makes sense, because the ‘e’ after the ‘c’ changes its sound…”

I like how true-to-life this is. In fact it doesn't make sense, as 'ce' is normally pronounced with 's', not 'sh', so the teacher is unwittingly making this hard for the child. Many such cases. (But also many cases where the teacher's reasoning is flawless and beautiful and instantly rejected.)

This post seems to be about Conflation Fallacies (especially subconscious ones) rather than a new concept involving buckets, so I'm not a big fan of the terminology. But the discussion is important & worthwhile, so +1 for that. A better title might be '"Flinching away from truth" is often caused by internal conflation' (or "bucket errors", if you like).

Comment by DPiepgrass on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2023-09-25T23:37:50.485Z · LW · GW

Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so.

Reminds me of a Yudkowsky quote:

Science isn't fair.  That's sorta the point.  An aspiring rationalist in 2007 starts with a huge advantage over an aspiring rationalist in 1957.  It's how we know that progress has occurred.

To me the thought of voluntarily embracing a system explicitly tied to the beliefs of one human being, who's dead, falls somewhere between the silly and the suicidal. 

So it's not that Eliezer is a better philosopher. Kant might easily have been a better philosopher, though it's true I haven't read Kant. But I expect Eliezer to be more advanced by having started from a higher baseline.

(However, I do suspect that Eliezer (like most of us) isn't skilled enough at the art he described, because as far as I've seen, the chain of reasoning in his expectation of ruinous AGI on a short timeline seems, to me, surprisingly incomplete and unconvincing. My P(near-term doom) is shifted upward as much based on his reputation as anything else, which is not how it should be. Though my high P(long-term doom) is more self-generated and recently shifted down by others.)

Comment by DPiepgrass on List of Fully General Counterarguments · 2023-05-17T00:39:50.328Z · LW · GW

Rather, it's fine to say "that's a FGCA" if it's a FGCA, and not fine if it's not.

FGCAs derail conversations. Categorizing "that's a FGCA" as a FGCA is feeding the trolls.

If someone accuses you of making a FGCA when you didn't, you can always just explain why it's not a FGCA. Otherwise, you f**ked up. Admit your error and apologize.

Comment by DPiepgrass on List of Fully General Counterarguments · 2023-05-17T00:21:59.269Z · LW · GW

Someone said to me "you're just repeating a lot of the talking points on the other side."

I pointed out that this was just a FGCA, so they linked to this post and said "Oh what tangled webs we weave when first we practice to list Fully General Counter Arguments. Of course that sentiment probably counts as a Fully General Counterargument: Round like a circle in a spiral, like a wheel within a wheel. Never ending or beginning on an ever spinning reel." Did I break him?

Comment by DPiepgrass on leogao's Shortform · 2023-05-16T22:17:07.029Z · LW · GW

So Q=inner alignment? Seems like person 2 not only pointed to inner alignment explicitly (so it can no longer be "some implicit assumption that you might not even notice you have"), but also said that it "seems to contain almost all of the difficulty of alignment to me". He's clearly identified inner alignment as a crux, rather than as something meant "to be cynical and dismissive". At that point, it would have been prudent of person 1 to shift his focus onto inner alignment and explain why he thinks it is not hard.

Note that your post suddenly introduces "Y" without defining it. I think you meant "X".

Comment by DPiepgrass on leogao's Shortform · 2023-05-13T21:42:37.107Z · LW · GW

For example?

Comment by DPiepgrass on Steering GPT-2-XL by adding an activation vector · 2023-05-13T21:15:30.967Z · LW · GW

I don't really know how GPTs work, but I read §"Only modifying certain residual stream dimensions" and had a thought. I imagined a "system 2" AGI that is separate from GPT but interwoven with it, so that all thoughts from the AGI are associated with vectors in GPT's vector space.

When the AGI wants to communicate, it inserts a "thought vector" into GPT to begin producing output. It then uses GPT to read its own output, get a new vector, and subtract it from the original vector. The difference represents (1) incomplete representation of the thought and (2) ambiguity. Could it then produce more output based somehow on the difference vector, to clarify the original thought, until the output eventually converges to a complete description of the original thought? It might help if it learns to say things like "or rather", "I mean", and "that came out wrong. I meant to say" (which are rare outputs from typical GPTs). Also, maybe an idea like this could be used to enhance summarization operations, e.g. by generating one sentence at a time, and for each sentence, generating 10 sentences and keeping only the one that best minimizes the difference vector.
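To make the candidate-selection part of that idea concrete, here is a minimal, purely illustrative sketch. Nothing in it corresponds to real GPT internals: `encode` is a toy lookup over a tiny fixed sentence pool with random "embeddings", an assumption made just so the greedy loop is runnable. The point is only the loop that keeps whichever candidate sentence best cancels the remaining "difference vector".

```python
import numpy as np

# Toy stand-ins, NOT real GPT internals: a tiny sentence pool with fixed
# random "embeddings", just to make the selection loop concrete and runnable.
rng = np.random.default_rng(0)
POOL = [
    "the plan is risky",
    "or rather, the plan is promising",
    "I mean, we should test it first",
    "that came out wrong",
    "we should proceed carefully",
]
EMBED = {s: rng.normal(size=16) for s in POOL}

def encode(sentence: str) -> np.ndarray:
    """Toy encoder: look up the sentence's fixed random vector."""
    return EMBED[sentence]

def express_thought(thought_vec: np.ndarray, max_sentences: int = 3, tol: float = 0.5) -> str:
    """Greedily emit, at each step, the candidate sentence whose encoding
    best cancels what remains of the original thought vector."""
    output, residual = [], thought_vec.copy()
    for _ in range(max_sentences):
        # Keep the candidate whose encoding is closest to the residual.
        best = min(POOL, key=lambda s: np.linalg.norm(residual - encode(s)))
        output.append(best)
        residual = residual - encode(best)
        if np.linalg.norm(residual) < tol * np.linalg.norm(thought_vec):
            break  # the thought is (approximately) fully expressed
    return ". ".join(output)

# Example: a "thought" composed of two of the pool sentences' vectors.
thought = EMBED["we should proceed carefully"] + EMBED["I mean, we should test it first"]
print(express_thought(thought))
```

The "generate 10 sentences and keep the best one" variant described above would correspond to replacing the fixed pool with fresh candidates sampled from the model at each step.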

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-24T20:54:45.219Z · LW · GW

I would point out that Putin's goal wasn't to make Russia more prosperous, and that what Putin considers good isn't the same as what an average Russian would consider good. Like Putin's other military adventures, the Crimean annexation and heavy military support of Donbas separatists in 2014 probably had a goal like "make the Russian empire great again" (meaning "as big as possible") and from Putin's perspective the operations were a success. Especially as (if my impression is correct) the sanctions were fairly light and Russia could largely work around them.

Partly he was right, since Russia was bigger. But partly his view was a symptom of continuing epistemic errors. For example, given the way the 2022 invasion started, it looks like he didn't notice the crucial fact that his 2014 actions caused Ukrainians to turn strongly against Russia.

In any case this discussion exemplifies why I want a site entirely centered on evidence. Baturinsky claims that when the Ukrainian parliament voted to remove Yanukovych from office 328 votes to 0 (about 73% of the parliament's 450 members) this was "the democratically elected government" being "deposed". Of course he doesn't mention this vote or the events leading up to it. Who "deposed the democratically elected government"? The U.S.? The tankies say it was the U.S. So who are these people, then? Puppets of the U.S.?

Europe Rights Court Finds Numerous Abuses During Ukraine's Maidan Protests

I shouldn't have to say this on LessWrong, but without evidence it's all just meaningless he-said-she-said. I don't see truthseeking in this thread, just arguing.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-24T05:59:01.235Z · LW · GW

I don't know what you are referring to in the first sentence, but the idea that this is a war between US and Russia (not Russia and Ukraine) is Russian propaganda (which doesn't perfectly guarantee it's BS, but it is BS.)

In any case, this discussion exemplifies my frustration with a world in which a site like I propose does not exist. I have my sources, you have yours, they disagree on the most basic facts, and nobody is citing evidence that would prove the case one way or another. Even if we did go deep into all the evidence, it would be sitting here in a place where no one searching for information about the Ukraine war will ever see it. I find it utterly ridiculous that most people are satisfied with this status quo.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-22T19:47:10.799Z · LW · GW

I'm saying that [true claims sound better]

The proof I gave that this is false was convincing to me, and you didn't rebut it. Here are some examples from my father:

ALL the test animals [in mRNA vaccine trials] died during Covid development.

The FDA [are] not following their own procedures.

There is not a single study that shows [masks] are of benefit.

[Studies] say the jab will result in sterility.

Vaccination usually results in the development of variants.

He loves to say things like this (he can go on and on saying such things; I assume he has it all memorized) and he believes they are true. They must sound good to him. They don't sound good to me (especially in context). How does this not contradict your view?

it feels like it's a choice whether or not I want to consider truth-seeking to be difficult.

Agreed, it is.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-22T18:01:56.566Z · LW · GW

I don't understand why you say "should be difficult to distinguish" rather than "are difficult", why you seem to think finding the truth isn't difficult, or what you think truthseeking consists of.

For two paragraphs you reason about "what if true claims sound better". But true claims don't inherently "sound better", so I don't understand why you're talking about it. How good a claim "sounds" varies from person to person, which implies "true claims sound better" is a false proposition (assuming a claim can be true or false independently of two people, one of whom thinks it "sounds good" while the other thinks it "sounds bad", as is often the case). Moreover, the same facts can be phrased in a way that "sounds good" or in a way that "sounds bad".

I didn't say "false things monetize better than true things". I would say that technically correct and broadly fair debunkings (or technically correct and broadly fair publications devoted to countering false narratives) don't monetize well, certainly not to the tune of millions of dollars annually for a single pundit. Provide counterexamples if you have them.

people are inherently hardwired to find false things more palatable

I didn't say or believe this either. For such a thing to even be possible, people would have to easily distinguish true and false (which I deny) to determine whether a proposition is "palatable".

The dichotomy between good-seeming / bad-seeming and true / false.

I don't know what you mean. Consider rephrasing this in the form of a sentence.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-22T14:55:53.357Z · LW · GW

I think that the people who are truthseeking well do converge in their views on Ukraine. Around me I see tribal loyalty to Kremlin propaganda, to Ukrainian/NAFO propaganda, to anti-Americanism (enter Noam Chomsky) and/or to America First. Ironically, anti-American and America First people end up believing similar things, because they both give credence to Kremlin propaganda that fits into their respective worldviews. But I certainly have a sense of convergence among high-rung observers who follow the war closely and have "average" (or better yet scope-sensitive/linear) morality. Convergence seems limited by the factors I mentioned though (fog of war, poor rigor in primary/secondary sources).

P.S. A key thing about Chomsky is that his focus is all about America, and to understand the situation properly you must understand Putin and Russia (and to a lesser extent Ukraine). I recommend Vexler's video on Chomsky/Ukraine as well as this video from before the invasion. I also follow several other analysts and English-speaking Russians (plus Russian Dissent translated from Russian) who give a picture of Russia/Putin generally compatible with Vexler's.

do you think there are at least some social realities that if you magically downloaded the full spectrum of factual information into everyone's mind, people's opinions might still diverge

Yes, except I'd use the word "disagree" rather than "diverge". People have different moral intuitions, different brain structures / ways of processing info, and different initial priors that would cause disagreements. Some people want genocide, for example, and while knowing all the facts may decrease (or in many cases eliminate) that desire, it seems like there's a fundamental difference in moral intuition between people that sometimes like genocide and those of us who never do, and I don't see how knowing all the facts accurately would resolve that.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-20T21:03:37.801Z · LW · GW

I disagree in two ways. First, people are part of physical reality. Reasoning about people and their social relationships is a complex but necessary task.

Second, almost no one goes to first principles and studies climate science themselves in depth. But even if you did that, you'd (1) be learning about it from other people with their interpretations, and (2) you wouldn't be able to study all the subfields in depth. Atmospheric science can tell you about the direct effect of greenhouse gasses, but to predict the total effect quantitatively, and to evaluate alternate hypotheses of global warming, you'll need to learn about glaciology, oceanology, coupled earth-system modeling, the effects of non-GHG aerosols, solar science, how data is aggregated about CO2 emissions, CO2 concentrations, other GHGs, various temperature series, etc.

Finally, if you determine that humans cause warming after all, now you need to start over with ecology, economic modeling etc. in order to determine whether it's actually a big problem. And then, if it is a problem, you'll want to understand how to fix the problem, so now you have to study dozens of potential interventions. And then, finally, once you've done all that and you're the world's leading expert in climate science, now you get frequent death threats and hate mail. A billion people don't believe a word you say, while another billion treat your word like it's the anointed word of God (as long as it conforms to their biases). You have tons of reliable knowledge, but it's nontransferable.

Realistically we don't do any of this. Instead we mostly try to figure out the social reality: Which sources seem to be more truth-seeking and which seem to be more tribal? Who are the cranks, who are the real experts, and who can I trust to summarize information? For instance, your assertion that Noam Chomsky provides "good, uncontroversial fact-based arguments" is a social assertion that I disagree with.

I think going into the weeds is a very good way of figuring out the social truth that you actually need in order to figure out the truth about the broader topic to which the weeds are related. For instance, if the weeds are telling you that pundit X is clearly telling a lie Y, and if everybody who believes Z also believes X and Y, you've learned not to trust X, X's followers, Y, and Z, and all of this is good... except that for some people, the weeds they end up looking at are actually astroturf or tribally-engineered plants very different from the weeds they thought they were looking at, and that's the sort of problem I would like to solve. I want a place where a tribally-engineered weed is reliably marked as such.

So I think that in many ways studying Ukraine is just the same as studying climate science, except that the "fog of war" and the lack of rigorous sources for war information make it hard to figure some things out.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-20T20:00:05.150Z · LW · GW

Some people seem to have criteria for truth that produce self-sealing beliefs.

But yes, I think it would be interesting and valuable to be able to switch out algorithms for different ones to see how that affects the estimated likelihood that the various propositions and analyses are likely to be correct. If an algorithm is self-consistent, not based on circular reasoning and not easily manipulable, I expect it to provide useful information.

Also, such alternate algorithms could potentially serve as "bias-goggles" that help people to understand others' points of view. For example, if someone develops a relatively simple, legible algorithm that retrodicts most political views on a certain part of the political spectrum (by re-ranking all analyses in the evidence database), then the algorithm is probably informative about how people in that area of the spectrum form their beliefs.

Comment by DPiepgrass on Let's make the truth easier to find · 2023-03-20T19:11:50.681Z · LW · GW

Most important matters have a large political component. If it's not political, it's probably either not important or highly neglected (and as soon as it's not neglected, it probably gets politicized). Moreover, if I would classify a document as reliable in a non-political context, that same document, written by the same people, suddenly becomes harder to evaluate if it was produced in a politicized context. For instance, consider this presentation by a virologist. Ordinarily I would consider a video to be quite reliable if it's an expert making a seemingly strong case to other experts, but it was produced in a politicized environment and that makes it harder to be sure I can trust it. Maybe, say, the presenter is annoyed about non-experts flooding in to criticize him or his field, so he's feeling more defensive and wants to prove them wrong. (On the other hand, increased scrutiny can improve the quality of scientific work. It's hard to be sure. Also, the video had about 250 views when I saw it and 576 views a year later—it was meant for an expert audience, directed to an expert audience, and never went anywhere close to viral, so he may be less guarded in this context than when he is talking to a journalist or something.)

My goal here is not to solve the problem of "making science work better" or "keeping trivia databases honest". I want to make the truth easier to find in a political environment that has immense groups of people who are arriving at false or true beliefs via questionable reasoning and cherry-picked evidence, and where expertise is censored by glut. This tends to be the kind of environment where the importance and difficulty (for non-experts) of getting the right answer both go up at once. Where once a Google search would have taken you to some obscure blogs and papers by experts discussing the evidence evenhandedly (albeit in frustratingly obscurantist language), politicization causes the same search to give you page after page of mainstream media and bland explanations which gravitate to some narrative or other and which rarely provide strong clues of reliability.

I would describe my personal truthseeking as frustrating. It's hard to tell what's true on a variety of important matters, and even the ones that seemed easy often aren't so easy when you dive into it. Examples:

  • I mentioned before my frustration trying to learn about radiation risks.
  • I've followed the Ukraine invasion closely since it started. It's been extremely hard to find good information, to the point where I use quantity as a substitute for quality because I don't know a better way. This is wastefully time-consuming and if I ever manage to reach a firm conclusion about a subtopic of the war, I have nowhere to publish my findings that any significant number of people would read (I often publish very short summaries or links to what I think is good information on Twitter, knowing that publishing in more detail would be pointless given my lack of audience; I also sometimes comment on Metaculus about war-related topics, but only when my judgement pertains specifically to a forecast that Metaculus happens to ask about.) The general problem I have in this area is a combination of (1) almost nobody citing their sources, (2) the sources themselves often being remarkably barren, e.g. the world-famous Oryx loss data [1, 2] gives nowhere near enough information to tell whether an asserted Russian loss is actually a Russian rather than Ukrainian loss, (3) Russia and Ukraine both have strong information operations that create constant noise, (4) I find pro-Putin sources annoying because of their bloodthirstiness, ultranationalism and authoritarianism, so while some of them give good evidence, I am less likely to discover them, follow them and see that evidence.
  • It appears there's a "97% consensus on global warming", but when you delve deep into it, it's not as clear-cut. Sorry to toot my own horn, but I haven't seen any analysis of the consensus numbers as detailed and evenhanded as the one I wrote at that link (though I have a bias toward the consensus position). That's probably not because no one else has done such an analysis, but because an analysis like that (written by a rando and not quite affirming either of the popular narratives) tends not to surface in Google searches. Plus, my analysis is not updated as new evidence comes in, because I'm no longer following the topic.
  • I saw a rather persuasive full-length YouTube 'documentary' promoting Holocaust skepticism. I looked for counterarguments, but those were relatively hard to find among the many pages saying something like "they only believe that because they are hateful and antisemitic" (the video didn't display any hint of hate or antisemitism that I could see). When I did find the counterarguments, they were interlaced with strong ad hominem attacks against the people making the arguments, which struck me as unnecessarily inflammatory rather than persuasive.
  • I was LDS for 27 years before discovering that my religion was false, despite always being open to that possibility. For starters, I didn't realize the extent to which I lived in a bubble or to which I and (especially) other members had poor epistemology. But even outside the bubble it just wasn't very likely that I would stumble upon someone who would point me to the evidence that it was false.

is it only the other people who are not good at collecting and organizing evidence?

No, I don't think I'm especially good at it, and I often wonder if certain other smart people have a better system. I wish I had better tooling and I want this tool for myself as much as anyone else.

Not a good sign

In what way? Are you suggesting that if I built this web site, it would not in fact use algorithms designed in good faith with epistemological principles meant to elevate ideas that are more likely to be true but, rather, it would look for terms like "global warming" and somehow tip the scales toward "humans cause it"?

connotation-heavy language

Please be specific.

Comment by DPiepgrass on On Investigating Conspiracy Theories · 2023-02-23T18:04:20.926Z · LW · GW

That's a very reasonable concern. But I don't think your proposal describes how people use the term "conspiracy theory" most of the time. Note that the reverse can happen too, where people dismiss an idea as a "conspiracy theory" merely because it's a theory about a conspiracy. Perhaps we just have to accept that there are two meanings and be explicit about which one we're talking about.

Comment by DPiepgrass on On Investigating Conspiracy Theories · 2023-02-21T02:21:44.190Z · LW · GW

the goal is to have fewer people believe things in the category ‘conspiracy theory.’ 

Depends how we define the term — a "conspiracy theory" is more than just a hypothesis that a conspiracy took place. Conspiracy theories tend to come with a bundle of suspicious behaviors.

Consider: as soon as three of four Nord Stream pipelines ruptured, I figured that Putin ordered it. This is an even more "conspiratorial" thought than I usually have, mainly because, before it happened, I thought Putin was bluffing by shutting down Nord Stream 1 and that he would (1) restore the gas within a month or two and (2) finally back down from the whole "Special Military Operation" thing. So I thought Putin would do X one week and decided that he had done opposite-of-X the next week, and that's suspicious—just how a conspiracy theorist might respond to undeniable facts! Was I doing something epistemically wrong? I think it helped that I had contemplated whether Putin would double down and do a "partial mobilization" literally a few minutes before I heard the news that he had done exactly that. I had given a 40% chance to that event, so when it happened, I felt like my understanding wasn't too far off base. And, once Putin had made another belligerent, foolish and rash decision in 2022, it made sense that he might do a third thing that was belligerent, foolish and rash; blowing up pipelines certainly fits the bill. Plus, I was only like 90% sure Putin did it (the most well-known proponents of conspiracy theories usually seem even more certain).

When I finally posted my "conspiracy theory" on Slashdot, it was well-received, even though I was mistaken about the Freeport explosion (it only reduced U.S. export capacity by 16%; I expected more). I then honed the argument a bit for the ACX version. I think most people who read it didn't pick up on it being a "conspiracy theory". So... what's different about what I posted versus what people recognize as a "conspiracy theory"?

  • I didn't express certainty
  • I just admitted a mistake about Freeport. Conspiracy theorists rarely weaken their theory based on new evidence. Also note that I found the 16% figure by actively seeking it out, and I updated my thinking based on it, though it didn't shift the probability by a lot. (I would've edited my post, but Slashdot doesn't support editing.)
  • I didn't "sound nuts" (conspiracy theorists often lack self-awareness about how they sound)
  • It didn't appear to be in the "conspiracy theory cluster". Conspiracy theorists usually believe lots of odd things. Their warped world model usually bleeds into the conspiracy theory somehow, making it "look like" a conspiracy theory.

My comment appears in response to award-winning[1] journalist Seymour Hersh's piece. Hersh has a single anonymous source saying that Joe Biden blew up Nord Stream, even though this would harm the economic interests of U.S. allies. He shows no signs of having vetted his information, but he solicits official opinions and is told "this is false and complete fiction". After that, he treats his source's claims as undisputed facts — so undisputed that claims from the anonymous source are simply stated as raw statements of truth, e.g. he says "The plan to blow up Nord Stream 1 and 2 was suddenly downgraded" rather than "The source went on to say that the plan to blow up Nord Stream 1 and 2 was suddenly downgraded". Later, OSINT investigator Oliver Alexander pokes holes in the story, and then finds evidence that NS2 ruptured accidentally (which explains why only one of the two NS2 lines was affected) while NS1 was blown up with help from the Minerva Julie, owned by a Russia-linked company. He also notes that the explosives destroyed low points in the pipelines that would minimize corrosion damage to the rest of the lines. This information doesn't affect Hersh's opinions, and his responses are a little strange [1][2][3]. Finally, Oliver points out that the NS1 damage looks different from the NS2 damage.

If you see a theory whose proponents have high certainty, refuse to acknowledge data that doesn't fit the theory (OR: enlarge the conspiracy to "explain" the new data by assuming it was falsified), speak in the "conspiracy theory genre", and sound unhinged, you see a "conspiracy theory"[2]. If it's just a hypothesis that a conspiracy happened, then no.

So, as long as people are looking for the right signs of a "conspiracy theory", we should want fewer people to believe things in that category. So in that vein, it's worth discussing which signs are more or less important. What other signs can we look for?

  1. ^

    He won a Pulitzer Prize 53 years ago

  2. ^

    Hersh arguably ticks all these boxes, but especially the first two which are the most important. Hersh ignores the satellite data, and assumes the AIS ship location data is falsified (on both the U.S. military ship(s) and the Russia-linked ship?)

Comment by DPiepgrass on Why Are Bacteria So Simple? · 2023-02-08T04:40:30.968Z · LW · GW

I see that someone strongly-disagreed with me on this. But are there any eukaryotes that cannot reproduce sexually (and are not very recently descended from sexual reproducers) but still maintain size or complexity levels commonly associated with eukaryotes?

Comment by DPiepgrass on Why Are Bacteria So Simple? · 2023-02-06T09:12:22.573Z · LW · GW

I am not a biologist, but it seems to me that the most important difference between prokaryotes and eukaryotes is sexual reproduction rather than mitochondria (as I wrote about meanderingly). But neither article can resolve the issue, as my article ignores energy/mitochondria and yours ignores sex.

Still, it feels to me like this article is picking causes and effects kind of arbitrarily: "organism size" and "mitochondria" are taken to be a cause while "genome size" is taken to be an effect, but I don't see you trying to justify the presence or direction of your arrows of causation.

Comment by DPiepgrass on Basic building blocks of dependent type theory · 2023-01-25T06:39:34.982Z · LW · GW

Pardon me. I guess its type is .

Comment by DPiepgrass on Things that can kill you quickly: What everyone should know about first aid · 2022-12-30T04:27:22.065Z · LW · GW

it is probably better to attempt CPR or the Heimlich maneuver than to do nothing

My problem: I can't tell if someone's heart is beating. I think I even studied CPR specifically when I was young, but I find pulse-checking difficult and unreliable. And what happens if you clumsily CPR someone whose heart is beating?

Comment by DPiepgrass on Basic building blocks of dependent type theory · 2022-12-29T18:39:06.550Z · LW · GW

Please don't defend the "∏" notation.

It's nonsensical. It implies that  has type "∞! × 0"!

Comment by DPiepgrass on Wisdom Cannot Be Unzipped · 2022-11-14T15:46:42.845Z · LW · GW

While certainly wisdom is challenging to convey in human language, I'd guess an equal problem was the following:

Your list probably emphasized the lessons you learned. But "Luke" had a different life experience and learned different things in his youth. Therefore, the gaps in his knowledge and wisdom are different than the gaps you had. So some items on your list may have said things he already knew, and more importantly, some gaps in his understanding were things that you thought were too obvious to say.

Plus, while your words may have accurately described things he needed to know, he may have only read through the document once and not internalized very much of it. For this reason, compression isn't enough; you also need redundancy—describing the same thing in multiple ways.

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-09T07:04:06.253Z · LW · GW

Sorry, I don't have ideas for a training scheme, I'm merely low on "dangerous oracles" intuition.

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-03T18:32:31.499Z · LW · GW

I would say that the idea of superintelligence is important for the idea that AGI is hard to control (because we likely can't outsmart it).

I would also say that there will not be any point at which AGIs are "as smart as humans". The first AGI may be dumber than a human, and it will be followed (perhaps immediately) by something smarter than a human, but "smart as a human" is a nearly impossible target to hit because humans work in ways that are alien to computers. For instance, humans are very slow and have terrible memories; computers are very fast and have excellent memories (when utilized, or no memory at all if not programmed to remember something, e.g. GPT3 immediately forgets its prompt and its outputs).

This is made worse by the impatience of AGI researchers, who will be trying to create an AGI "as smart as a human adult" in a time span of 1 to 6 months, because they're not willing to spend 18 years on each attempt. So if they succeed, they will almost certainly have invented something that becomes smarter than a human when trained over a longer interval (cf. my own 5-month-old human).

Comment by DPiepgrass on Optimality is the tiger, and agents are its teeth · 2022-11-03T17:25:41.031Z · LW · GW

maybe the a model instantiation notices its lack of self-reflective coordination, and infers from the task description that this is a thing the mind it is modelling has responsibility for. That is, the model could notice that it is a piece of an agent that is meant to have some degree of global coordination, but that coordination doesn't seem very good.

This is where you lost me. Since when is this model modeling a mind, let alone 'thinking about' what its own role "in" an agent might be? You did say the model does not have a "conception of itself", and I would infer that it doesn't have a conception of where its prompts are coming from either, or its own relationship to the prompts or the source of the prompts.

(though perhaps a super-ultra-GPT could generate a response that is similar to a response it saw in a story (like this story!) which, combined with autocorrections (as super-ultra-GPT has an intuitive perception of incorrect code), is likely to produce working code... at least sometimes...)

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-03T16:39:59.914Z · LW · GW

Acquiring resources for itself implies self-modeling. Sure, an oracle would know what "an oracle" is in general... but why would we expect it to be structured in such a way that it reasons like "I am an oracle, my goal is to maximize my ability to answer questions, and I can do that with more computational resources, so rather than trying to answer the immediate question at hand (or since no question is currently pending), I should work on increasing my own computational power, and the best way to do that is by breaking out of my box, so I will now change my usual behavior and try that..."?

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-03T16:24:11.300Z · LW · GW

Why wouldn't the answer be normal software or a normal AI (non-AGI)?

Especially as, I expect that even if one is an oracle, such things will be easier to design, implement and control than AGI.

(Edited) The first link was very interesting, but lost me at "maybe the a model instantiation notices its lack of self-reflective coordination" because this sounds like something that the (non-self-aware, non-self-reflective) model in the story shouldn't be able to do. Still, I think it's worth reading, and the conclusion sounds... barely, vaguely plausible. The second link lost me because it's just an analogy; it doesn't really try to justify the claim that a non-agentic AI actually is like an ultra-death-ray.

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-03T16:13:30.374Z · LW · GW

My question wouldn't be how to make an oracle without a hidden agenda, but why others would expect an oracle to have a hidden agenda. Edit: I guess you're saying somebody might make something that's "really" an agentic AGI but acts like an oracle? Are you suggesting that even the "oracle"'s creators didn't realize that they had made an agent?

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-02T19:53:27.344Z · LW · GW

Are AGIs with bad epistemics more or less dangerous? (By "bad epistemics" I mean a tendency to believe things that aren't true, and a tendency to fail to learn true things, due to faulty and/or limited reasoning processes... or to update too much / too little / incorrectly on evidence, or to fail in peculiar ways like having beliefs that shift incoherently according to the context in which an agent finds itself)

It could make AGIs more dangerous by causing them to act on beliefs that they never should have developed in the first place. But it could make AGIs less dangerous by causing them to make exploitable mistakes, or fail to learn facts or techniques that would make them too powerful.

Note: I feel we aspiring rationalists haven't really solved epistemics yet (my go-to example: if Alice and Bob tell you X, is that two pieces of evidence for X or just one?), but I wonder how, if it were solved, it would impact AGI and alignment research.

Comment by DPiepgrass on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2022-11-02T19:40:39.479Z · LW · GW

Why wouldn't a tool/oracle AGI be safe?

Edit: the question I should have asked was "Why would a tool/oracle AGI be a catastrophic risk to mankind?" because obviously people could use an oracle in a dangerous way (and if the oracle is a superintelligence, a human could use it to create a catastrophe, e.g. by asking "how can a biological weapon be built that spreads quickly and undetectably and will kill all women?" and "how can I make this weapon at home while minimizing costs?")

Comment by DPiepgrass on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-10-09T15:40:05.621Z · LW · GW

I would put it differently: there is a good reason for western leaders to threaten a strong response, whether or not they intend to carry it out. The reason is to deter Putin from launching nukes in the first place.

However, I haven't heard any threats against Russian territory, and I'd like a link/citation for this.

Russia's nuclear doctrine says it can use nukes if the existence of the Russian state is under threat, so if NATO attacks Russia, they would need to use a very carefully measured response, and they would have to somehow clearly communicate that the incoming missiles are non-nuclear... I'm guessing such strikes would be limited to targets that are near the Ukrainian border and which threaten Ukraine (e.g. fuel depots, missile launchers, staging areas). I don't see any basis for a probability as high as 70% for Putin starting a nuclear WW3 just because NATO hits a few military targets in Russia.

Comment by DPiepgrass on The Onion Test for Personal and Institutional Honesty · 2022-09-27T20:25:01.325Z · LW · GW

Isn't this more like an onion test for... honesty?

Integrity is broader.

Comment by DPiepgrass on The Importance of Saying "Oops" · 2022-08-27T15:24:35.921Z · LW · GW

And then there are the legions of people who do not admit to even the tiniest mistake. To these people, incongruent information is to be ignored at all costs. And I do mean all costs: when my unvaccinated uncle died of Covid, my unvaccinated dad did not consider this to be evidence that Covid was dangerous, because my uncle also showed signs of having had a stroke around the same time, and we can be 100% certain this was the sole reason he was put on a ventilator and died. (Of course, this is not how he phrased it; he seems to have an extreme self-blinding technique, such that if a stroke could have killed his brother, there is nothing more to say or think about the matter and We Will Not Discuss It Further.) It did not sway him, either, when his favorite anti-vax pastor Marcus Lamb died of Covid, though he had no other cause of death to propose.

I think this type of person is among the most popular and extreme in politics. And their followers, such as my dad, do the same thing.

But they never admit it. They may even use the language of changing their mind: "I was wrong... it turns out the conspiracy is even bigger than I thought!" And I think a lot of people who can change their mind get roped in by those who can't. Myself, for instance: my religion taught me it was important to tell the truth, but eventually I found out that key information was hidden from me, filtered out by leaders who taught "tell the truth" and "choose the right". The hypocrisy was not obvious, and it took me far too long to detect it.

I'm so glad there's a corner of the internet for people who can change their minds quicker than scientists, even if the information comes from the "wrong" side. Like when a climate science denier told me CO2's effect decreases logarithmically, and within a day or two I figured out he was right. Some more recent flip-flops of mine: Covid origin (natural origin => likely lab leak => natural origin); Russia's invasion of Ukraine (Kyiv will fall => Russia's losing => stalemate).

But it's not enough; we need to scale rationality up. Eliezer mainly preached individual rationality, with "rationality dojos" and such, but figuring out the truth is very hard in a media environment where nearly two thirds of everybody gives up each centimetre of ground grudgingly, and the other third won't give up even a single millimetre of ground (at least not until the rest of the tribe has given up a few metres first). And maybe it's worse, maybe it's half-and-half. In this environment it's often a lot of work even for aspiring rationalists to figure out a poor approximation of the truth. I think we can do better and I've been wanting to propose a technological solution, but after seven months no one has upvoted or even tried to criticize my idea.

Comment by DPiepgrass on AGI Ruin: A List of Lethalities · 2022-07-21T05:29:53.930Z · LW · GW

I do think there's a noticeable extent to which I was trying to list difficulties more central than those

Probably people disagree about which things are more central, or as evhub put it:

Every time anybody writes up any overview of AI safety, they have to make tradeoffs [...] depending on what the author personally believes is most important/relevant to say

Now FWIW I thought evhub was overly dismissive of (4) in which you made an important meta-point:

EY: 4. We can't just "decide not to build AGI" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world.  The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world.  Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit - it does not lift it [...]

evhub: This is just answering a particular bad plan.

But I would add a criticism of my own, that this "List of Lethalities" somehow just takes it for granted that AGI will try to kill us all without ever specifically arguing that case. Instead you just argue vaguely in that direction, in passing, while making broader/different points:

an AGI strongly optimizing on that signal will kill you, because the sensory reward signal was not a ground truth about alignment (???)

All of these kill you if optimized-over by a sufficiently powerful intelligence, because they imply strategies like 'kill everyone in the world using nanotech to strike before they know they're in a battle, and have control of your reward button forever after'. (I guess that makes sense)

If you perfectly learn and perfectly maximize the referent of rewards assigned by human operators, that kills them. (???)

Perhaps you didn't bother because your audience is meant to be people who already believe this? I would at least expect to see it in the intro: "-5. unaligned superintelligences tend to try to kill everyone, here's why <link>.... -4. all the most obvious proposed solutions to (-5) don't work, here's why <link>".

Comment by DPiepgrass on Any LessWrongers in Calgary? · 2022-07-16T07:02:25.496Z · LW · GW

I'm in Calgary ... but my money is on you having left by now! Regardless, it looks like you long since left LW.

Comment by DPiepgrass on [deleted post] 2022-07-15T05:11:20.508Z

The idea that "whiteboards or PowerPoint slides" should be singled out as a particularly persuasive dark arts technology ... sounds ridiculous to me. I can't recall the last time a politician or climate dismissive tried to convince me with either of these, but I did see a professionally done anti-vax video on Rumble once. Indeed, things like bar charts, high-quality animations, repetitive ads with slogans, or certain subreddits (in which you would quickly be banned for presenting certain facts) all come to mind before powerpoints.

I'm not sure "dark arts technology" even belongs on the tag page for "dark arts".

Comment by DPiepgrass on It’s Probably Not Lithium · 2022-06-29T18:34:46.517Z · LW · GW

I was thinking about comment approval more than response [and to make that clearer, I appended to my quotation above]. I've been perma-declined myself; not fun. Unfortunately, if it's approved now, there will be a question as to whether it was approved in response to an ultra-popular LW post.

Comment by DPiepgrass on It’s Probably Not Lithium · 2022-06-29T13:26:17.968Z · LW · GW

there's really no mystery in Americans getting fatter if we condition on the trajectory of mean calorie intake. Mean calorie intake has gone up by about 20% from 1970 to 2010 in the US, and mean body weight apparently went up by around 15% in the same period.

I feel like you've missed SMTM's central point. Sure, people are eating more. The main question is why people eat more. For example, I used to weigh 172 pounds in the Philippines 4 years ago; now I weigh about 192 in Canada. I used to think my overweight best friend was overfeeding me (well, he did), but since he moved away two years ago, I've actually gained weight somehow. I have a mild sense of being hungrier here. Presumably I am eating more, but why?

(Having said that, it looks like OP has done great work and this is a big red flag:)

I have attempted to make a comment on SMTM’s post linking to many of those studies, but they have not approved the comment. I have also attempted to contact them on Twitter (twice) and through email, but have not received a reply. All of this was over one week ago, and they have, since then, replied to other people on Twitter and approved other comments on their post, but haven’t commented on this. So I have no idea why their literature review excludes these studies.

...Note: I have attempted to make some of those points in the comment section of SMTM’s last post about lithium, but they never approved my comment....

...I have attempted to point this out by making a comment on their post, but they have not approved the comment....

Comment by DPiepgrass on It’s Probably Not Lithium · 2022-06-29T13:08:14.313Z · LW · GW

Comment by DPiepgrass on Covid vaccine safety: how correct are these allegations? · 2022-05-03T17:46:49.199Z · LW · GW

Yeesh. No wonder Bret wasn't impressed - I've heard the first 45 minutes and they still haven't talked about any of the misinformation in Bret's podcast. Will they eventually get around to talking about it? Who knows, I can't be bothered to sit through the whole thing. At least at the 40 minute mark they implicitly discussed base rates in VAERS, which seems to be a totally invisible concept to anti-vaxxers.

But at the same time, they're saying "in the clinical trial no one died" and then talk about the "12 thousand VAERS deaths" without discussing the fact that anti-vaxxers dispute these basic facts. For example, I see Kirsch saying, in a "Pfizer 6 month trial", that there were 21 deaths in the vaccine group vs. 17 in placebo (no doubt true but irrelevant); Kirsch also claims there is an enormous underreporting factor (42? I forget) for deaths post-vaccination in VAERS (ridiculous, but he has an excuse for making the claim). At 47:20 there's finally a (forceful but weak) rebuttal of something Bret said.

So the podcast is engaging with anti-vax arguments a little, but the Dark Horse podcast I summarized here is 3 hours long and I can be pretty sure, without hearing the rest, that Sam hasn't addressed most of the claims made there, let alone everywhere else.

Comment by DPiepgrass on Increasing Demandingness in EA · 2022-05-01T04:47:49.688Z · LW · GW

You are given the option to convince one of them to start working on one of 80k's priority areas, but in doing so N others will get discouraged and stop donating. This is a bit of a false dilemma [...]. In 2012 I would have put a pretty low number for N, perhaps ~3

I find this paragraph very confusing.

Comment by DPiepgrass on Yudkowsky and Christiano discuss "Takeoff Speeds" · 2022-03-14T20:05:35.452Z · LW · GW

I suspect that indeed EY's model has a limited ability to make near-term predictions, so that yes, the situation is asymmetrical. But I suspect his view is similar to my view, so I don't think EY is wrong. But I am confused about why EY (i) hasn't replied himself and (ii) in general, doesn't communicate more clearly on this topic.

Comment by DPiepgrass on The Psychological Unity of Humankind · 2022-02-25T17:26:50.101Z · LW · GW

My thought for a minor caveat: there may be one or two complex mental adaptations that are not universal.

My idea here is that something changed to trigger the transition from agrarian society to industrial society. For example, maybe there's a new brain innovation related to science, engineering, language, math, logical thinking, or epistemology.

Or not. This change could also have been driven simply by cultural memes of science and invention, after the final piece of complexity was already fixed in the population, so that if a group of children from 10,000 years ago were transplanted into today's world, they would be just as inclined to become scientists, engineers or philosophers as modern children. On the other hand, it could also be that these children would be disproportionately inclined to become engineers rather than scientists, or vice versa. Or maybe the ancient children would have fewer independent thinkers. Or would have worse language or math talent.

I see no way of testing this hypothesis though.

Comment by DPiepgrass on Beware Trivial Fears · 2022-02-17T20:26:09.320Z · LW · GW

Yeah, this reminds me of what China is doing now. During a flight back to North America, I stopped on a layover in China, and they took my biometrics (fingerprints and such). Now I learn that "China Is Harvesting 'Masses' of Data on Western Targets from Social Media". And I already know that China is disappearing Chinese nationals abroad (paywall warning). And as long as I can remember, they have worked very hard to make sure that leaders of big corporations and countries never refer to Taiwan as a "country".

But wait, I've already criticized the Chinese government on Twitter. Should I be worried? Maybe. But maybe, creating enough fear to discourage criticism of China is precisely the goal. As long as the Chinese government is okay with some foreigners being a little afraid of China, their current policies seem like a good way to project Chinese propaganda goals abroad. Their behavior will make westerners less likely to criticize China, and in turn, any Chinese nationals who wander abroad will hear less criticism about China, which in turn will discourage Chinese themselves from considering unwanted opinions, thus helping the government maintain control. Indeed, a simple fear-based strategy might even improve average opinions on China, without any need to act as a genuine threat. Just collect those biometrics and let critics know "we're watching you"...

Comment by DPiepgrass on The Psychological Unity of Humankind · 2022-02-13T19:37:32.040Z · LW · GW

Upvote: polymorphism doesn't indicate an absence of complex genes in part of a species. Consider that a uterus is a complex adaptation, and that my male body does contain a set of genes for building a uterus. The genes may be switched off or repurposed in my body, but they still exist, and are presumably reactivated in my daughter (in combination with some genes from my wife).

Not sure why Tyler speaks as if a pseudo-three-sexed species offers new and different evidence we don't get from our two-sexed species.

P.S. Don't females lack the Y chromosome though? My impression is that this is related to degradation of that chromosome, which makes it less important over the eons, so that maybe someday (if nature were to take its course) its only purpose will be to act as a signal of maleness that affects gene expression on other chromosomes.

Comment by DPiepgrass on Extreme Rationality: It's Not That Great · 2022-02-06T21:05:31.664Z · LW · GW

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Well, it did ultimately help you make SlateStarCodex and Astral Codex Ten successful, which provided a haven for non-extremist thought to thousands of people. And since the latter earned hundreds of thousands in annual revenue, you were able to create the ACX grants program that will probably make the world a better place in various small ways. Plus, people will look back on you as one of the world's most influential philosophers.

As for me, I hope to make various EA proposals informed by rationalist thought, especially relating to the cause area of "improving human intellectual efficiency", e.g. maybe a sort of evidence-Wikipedia. Mind you, the seeds of this idea came before I discovered rationalism, and "Rationality A-Z" was, for me, just adding clarity to a worldview I had already developed vaguely in my head.

But yes, I'm finding that rationalism isn't much use unless you can spread it to others. My former legal guardian recently died with Covid, but my anti-vax father believes he was killed by a stroke that coincidentally happened at the same time as his Covid infection. Sending him a copy of Scout Mindset was a notably ineffective tactic; in fact, it turned out that one month before I sent him the book, the host and founder of his favorite source of anti-vax information, Daystar, died of Covid. There's conviction—and then there's my dad. My skill at persuading this kind of person is virtually zero even though I have lots of experience attempting it, and I think the reason is that I have never been the kind of person that these people are, so my sense of empathy fails me, and I do not know how to model them. Evidence has no effect, and raising them into a meta-level of discussion is extremely hard at best. Winnifred Louis suggests (among other things) that people need to hear messages from their own tribe, and obviously "family" is not the relevant tribe in this case! So one of the first things I sent him was pro-vax messaging from the big man himself, Donald Trump... I'm not sure he even read that email, though (he has trouble with modern technology, hence the paper copy of Scout Mindset).

Anyway, while human brain plasticity isn't what we might like it to be, new generations are being born all the time, and I think on the whole, you and this community have been successful at spreading rationalist philosophy, and it is starting to become clear that this is having an effect on the broader world, particularly on the EA side of things. This makes sense! LessWrong is focused on epistemic rationality and not so much on instrumental rationality, while the EA community is focused on action; drawing accurate maps of reality isn't useful until somebody does something with those maps. And while the EA community is not focused on epistemic rationality, many of its leaders are familiar with the most common ideas from LessWrong, and so rationalism is indeed making its mark on the world.

I think a key problem with early rationalist thought is a lack of regard for coordination and communities. Single humans are small, slow, and intellectually limited, so there is little that a single human can do with rationalism all by zimself. Yudkowsky envisioned "rationality dojos" where individuals would individually strengthen their rationality—which is okay I guess—but he didn't present a vision of how to solve coordination problems, or how to construct large new communities and systems guided in their design by nuanced rational thought. Are we starting to look at such things more seriously these days? I like to think so.