Dragon Agnosticism

post by jefftk (jkaufman) · 2024-08-01T17:00:06.434Z · LW · GW · 75 comments

I'm agnostic on the existence of dragons. I don't usually talk about this, because people might misinterpret me as actually being a covert dragon-believer, but I wanted to give some background for why I disagree with calls for people to publicly assert the non-existence of dragons.

Before I do that, though, let me be clear: horrible acts have been committed in the name of dragons. Many dragon-believers publicly or privately endorse this reprehensible history. Regardless of whether dragons do in fact exist, the repercussions of that history continue to have serious and unfair effects on our society.

Given that history, the easy thing to do would be to loudly and publicly assert that dragons don't exist. But while a world in which dragons don't exist would be preferable, that a claim has inconvenient or harmful consequences isn't evidence of its truth or falsehood.

Another option would be to look into whether dragons exist and make up my mind; people on both sides are happy to show me evidence. If after weighing the evidence I were convinced they didn't exist, that would be excellent news about the world. It would also be something I could proudly write about: I checked, you don't need to keep worrying about dragons.

But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.

One option would be to look into it, and only let people know what I found if I were convinced dragons didn't exist. Unfortunately, this combines very poorly with collaborative truth-seeking. Imagine a hundred well-intentioned people look into whether there are dragons. They look in different places and make different errors. There are a lot of things that could be confused for dragons, or things dragons could be confused for, so this is a noisy process. Unless the evidence is overwhelming in one direction or another, some will come to believe that there are dragons, while others will believe that there are not.

While humanity is not perfect at uncovering the truth in confusing situations, our best strategy for approaching it is for people to report back what they've found and discuss the evidence openly. Perhaps some evidence Pat finds is very convincing to them, but then Sam shows how they've been misinterpreting it. But this all falls apart when the thoughtful people who find one outcome generally stay quiet. I really don't want to contribute to this pattern that makes it hard to learn what's actually true, so in general I don't want whether I share what I've learned to be downstream from what I learn.

Overall, then, I've decided to remain agnostic on the existence of dragons. I would reconsider if it seemed to be a sufficiently important question, in which case I might be willing to run the risk of turning into a dragon-believer and letting the dragon question take over my life: I'm still open to arguments that whether dragons exist is actually highly consequential. But with my current understanding of the costs and benefits on this question I will continue not engaging, publicly or privately, with evidence or arguments on whether there are dragons.

Note: This post is not actually about dragons, but instead about how I think about a wide range of taboo topics.

Comment via: facebook, mastodon

75 comments

Comments sorted by top scores.

comment by Richard_Ngo (ricraz) · 2024-08-01T19:53:52.761Z · LW(p) · GW(p)

I think this used to be a tenable position a decade or two ago. But I think it's no longer tenable, due to the dynamic described in this tweet:

Suppose an ideology says you're not allowed to question idea X. At first X might not be very important. But now when people want to argue for Y, "X->Y" and "~Y->~X" are both publicly irrefutable. So over time X will become more and more load-bearing for censorious ideologies.

We can also think of this as a variant of Goodhart's law, which I'll call ideological Goodhart (and have just tweeted about here): any false belief that cannot be questioned by adherents of an ideology will become increasingly central to that ideology. As this process plays out, advocates of that ideology will adopt increasingly extreme positions, and support increasingly crazy policies.

Replies from: habryka4, Lanrian, Jiro, MichaelDickens, flowerfeatherfocus
comment by habryka (habryka4) · 2024-08-02T01:57:19.497Z · LW(p) · GW(p)

(Disagree that it was a tenable position a decade or two ago, agree that it is an untenable position now)

Replies from: ricraz, flowerfeatherfocus
comment by Richard_Ngo (ricraz) · 2024-08-02T07:24:04.609Z · LW(p) · GW(p)

Fair, this isn't a confident claim from me. I do have a sense that the last decade has been particularly bad in terms of blatant preference falsification, but it's hard to distinguish "the world was different before then" from "I was younger and didn't have a great sense of what was going on".

Replies from: carl-feynman
comment by Carl Feynman (carl-feynman) · 2024-08-02T14:39:32.299Z · LW(p) · GW(p)

I’m 62, so I was an adult in the ‘80s and ‘90s. My sense is that the world was different. The consequences of expressing divergent opinion seem much more serious now.

comment by flowerfeatherfocus · 2024-08-31T00:12:22.376Z · LW(p) · GW(p)

Can you say what position you recommend instead? Is it just opining publicly about everything, with no regard to how taboo it is?

comment by Lukas Finnveden (Lanrian) · 2024-08-02T02:35:44.313Z · LW(p) · GW(p)

Tbc: It should be fine to argue against those implications, right? It’s just that, if you grant the implication, then you can’t publicly refute Y.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2024-08-02T07:34:47.303Z · LW(p) · GW(p)

Unfortunately the way that taboos work is by surrounding the whole topic in an aversive miasma. If you could carefully debate the implications of X, then that would provide an avenue for disproving X, which would be unacceptable. So instead this process tends to look more like "if you don't believe Y then you're probably the sort of terrible person who believes ~X", and now you're tarred with the connotation even if you try to carefully explain why you actually have different reasons for not believing Y (which is what you'd likely say either way).

Replies from: flowerfeatherfocus
comment by flowerfeatherfocus · 2024-08-22T22:50:10.146Z · LW(p) · GW(p)

I expect this effect to be weaker than you're suggesting, especially if Y is something you in fact independently care about, and not an otherwise unimportant proximal detail that could reasonably be interpreted as a "just asking questions" means of arguing for ~X. I'm struggling to think of a particularly illustrative X and Y, but consider X="COVID was not a lab leak", which seemed lightly taboo to disagree with in 2020.  Here's a pair of tweets you could have sent in 2020:
1. "I think COVID was probably a lab leak."
2. "I don't know whether COVID was a lab leak. (In fact for now I'm intentionally not looking into it, because it doesn't seem important enough to outweigh the risk of arriving at taboo beliefs [LW · GW].) But gain-of-function research in general is unacceptably risky, in a way that makes global pandemic lab leaks a very real possibility, and we should have much stronger regulations to prevent that."

I expect the second one would receive notably less pushback, even though it defends Y="gain-of-function research is unacceptably risky", and suggests that Y provides evidence for ~X.

comment by Jiro · 2024-08-03T19:46:31.420Z · LW(p) · GW(p)

I don't see why it should be limited to false beliefs.

Note that even if X is true, X->Y need not be true, and it can still be harmful to not be able to question X->Y.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2024-08-04T04:41:31.638Z · LW(p) · GW(p)

This is a good point. Though the thing about true beliefs is that there is a specific version of them that's true, which you're allowed to defend (if you can find it). And so you can more easily figure out what the implications are.

Whereas for false beliefs you can't get into the specifics, because looking hard enough at the specifics will tend to disprove the belief.

comment by MichaelDickens · 2024-08-02T17:16:11.846Z · LW(p) · GW(p)

Suppose an ideology says you're not allowed to question idea X.

I think there are two different kinds of "not questioning": there's unquestioningly accepting an idea as true, and there's refusing to question and remaining agnostic. The latter position is reasonable in the sense that if you refuse to investigate an issue, you shouldn't have any strong beliefs about it. And I think the load-bearingness is only a major issue if you refuse to question X while also accepting that X is true.

comment by flowerfeatherfocus · 2024-08-22T23:17:45.191Z · LW(p) · GW(p)

Is there another strategy you prefer? Afaict the options are 

1) Have public taboo beliefs.

2) Have private beliefs that you lie about. 

3) Remain deliberately agnostic about taboo but insufficiently important topics.

4) Get forever lucky, such that every taboo topic you investigate results in you honestly arriving at an allowed belief.

Whether 1) is at all compatible with having other career goals is a fact of the territory, and I expect in the US in 2024, there are topics where having taboo beliefs could totally end your career, for many values of career. (Leaving open whether there are such beliefs that are true, but, per the topic of this post, that's not something you can learn without taking risks.)

2) seems even more prone to the effect you describe than 3).

My guess is you're making a bid for 1), but I feel like a case for that should take into account the costs of believing X weighed against the costs of agnosticism about X, rather than a sweeping heuristic argument. (Where maybe the cost of agnosticism about X includes adjacent topics Y you'll either have to include in your agnosticism or otherwise eat the cost of ~X connotations, though I'm skeptical about how often this will come up, and per my comment here [LW(p) · GW(p)] I expect the ~Y->~X taboos will often be much smaller than the ~X taboo.)

comment by AnnaSalamon · 2024-11-17T22:23:48.073Z · LW(p) · GW(p)

I don't see an advantage to remaining agnostic, compared to:

1) Acquire all the private truth one can.

Plus:

2) Tell all the public truth one is willing to incur the costs of, with priority for telling public truths about what one would and wouldn't share (e.g. prioritizing to not pose as more truth-telling than one is).

--

The reason I prefer this policy to the OP's "don't seek truth on low-import highly-politicized matters" is that I fear not-seeking-truth begets bad habits.  Also I fear I may misunderstand how important things are if I allow politics to influence which topics-that-interest-my-brain I do/don't pursue, compared to my current policy of having some attentional budget for "anything that interests me, whether or not it seems useful/virtuous."

Replies from: andrei-alexandru-parfeni, Benito
comment by sunwillrise (andrei-alexandru-parfeni) · 2024-11-17T22:53:54.820Z · LW(p) · GW(p)

One of the advantages to remaining agnostic comes from the same argument that users put forth in the comment sections on this very site way back in the age of the Sequences (I can look up the specific links if people really want me to; they were in response to the Doublethink Sequence) for why it's not necessarily instrumentally rational for limited beings like humans to actually believe in the Litany of Tarski [? · GW]: if you are in a precarious social situation, in which retaining status/support/friends/resources is contingent on you successfully signaling to your in-group that you maintain faith in their core teachings, it simply doesn't suffice to say "acquire all the private truth through regular means and don't talk/signal publicly the stuff that would be most dangerous to you," because you don't get complete control over what you signal.

If you learn that the in-group is wrong about some critical matter, and you understand that in-group members realizing you no longer agree with them will result in harm to you (directly, or through your resources being cut off), your only option is to act (to some extent) deceptively. To take on the role, QuirrellMort-style, of somebody who does not have access to the information you have actually stumbled upon, and to pretend to be just another happy & clueless member of the community.

This is capital-H Hard. Lying (or even something smaller-scale like lesser deceptions), when done consistently and routinely, to people that you consider(ed) your family/friends/acquaintances, is very hard for (the vast majority of) people. For straightforward evolutionary reasons, we have evolved to be really good at detecting when one of our own is not being fully forthcoming. You can bypass this obstacle if the number of interactions you have is small, or if, as is usually the case in modern life when people get away with lies, nobody actually cares about the lie and it's all just a game of make-believe where you just have to "utter the magic words." [LW · GW] But when it's not a game, when people do care about honestly signaling your continued adherence to the group's beliefs and epistemology, you're in big trouble.

Indeed, by far the most efficient way of convincing others of your bullshit on a regular basis is to convince yourself first, and by putting yourself in a position where you must do the former, you are increasing the likelihood of the latter with every passing day. Quite the opposite of what you'd like to see happen, if you care about truth-seeking to any large extent.

(addendum: admittedly, this doesn't answer the question fully, since it doesn't deal with the critical distinction between agnosticism and explicit advocacy, but I think it does get at something reasonably important in the vicinity of it anyway)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2024-11-18T00:47:13.245Z · LW(p) · GW(p)

Fair point; I was assuming you had the capacity to lie/omit/deceive, and you're right that we often don't, at least not fully.

I still prefer my policy to the OP's, but I accept your argument that mine isn't a simple Pareto improvement.

Still:

  • I really don't like letting social forces put "don't think about X" flinches into my or my friends' heads; and the OP's policy seems to me like an instance of that;
  • Much less importantly: as an intelligent/self-reflective adult, you may be better at hiding info if you know what you're hiding, compared to if you have guesses you're not letting yourself see, that your friends might still notice.  (The "don't look into dragons" path often still involves hiding info, since often your brain takes a guess anyhow, and that's part of how you know not to look into this one.  If you acknowledge the whole situation, you can manage your relationships consciously, including taking conscious steps to buy openness-offsets, stay freely and transparently friends where you can scheme out how.)
Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-11-18T01:54:43.338Z · LW(p) · GW(p)

The "don't look into dragons" path often still involves hiding info, since often your brain takes a guess anyhow

In many cases I have guesses, but because I just have vague impressions they're all very speculative. This is consistent with being able to say "I haven't looked into it" and "I really don't know", and because these are all areas where the truth is not decision relevant it's been easy to leave it at that. Perhaps people notice I have doubts, but at least in my social circles that's acceptable if not made explicit.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2024-11-18T02:06:03.540Z · LW(p) · GW(p)

Does it feel to you as though your epistemic habits / self-trust / intellectual freedom and autonomy / self-honesty take a hit here?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-11-18T02:11:49.314Z · LW(p) · GW(p)

I think it's a pretty weak hit, though not zero. There are so many things I want to look into that I don't have time for that having this as another factor in my prioritization doesn't feel very limiting to my intellectual freedom.

I do think it is good to have a range of people in society who are taking a range of approaches, though!

comment by Ben Pace (Benito) · 2024-11-17T22:46:19.965Z · LW(p) · GW(p)

Also, a norm of "allowing people to keep their beliefs private on subjects they feel a lot of pressure on" gives space for people to gather information personally without needing to worry about the pressures on them from their society.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-11-18T01:47:41.496Z · LW(p) · GW(p)

I think that's an important norm and I support it, but until it is well established it's not something I (or others) can rely on.

Replies from: Benito
comment by Ben Pace (Benito) · 2024-11-18T02:03:51.992Z · LW(p) · GW(p)

It’s going pretty well for me! Most people I work with or am friends with know that there are multiple topics on which my thoughts are private, and there have been ~no significant social costs to me that I’m aware of.

I would like to be informed of opportunities to support others in this on LessWrong or in the social circles I participate in, to back you up if people are applying pressure on you to express your thoughts on a topic that you don’t want to talk about.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-11-18T02:09:07.103Z · LW(p) · GW(p)

Nice of you to offer! I expect, however, that pressure in this direction will come from non-LW non-EA directions.

comment by TsviBT · 2024-08-01T18:21:48.104Z · LW(p) · GW(p)

One option would be to look into it

Another option you didn't list is to look into why horrible acts have been committed in the name of dragons, and share the results of that.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2024-08-02T03:11:27.713Z · LW(p) · GW(p)

Mm, perhaps it will turn out that there are other, more benign ways of dealing with dragons. Narrative-brained people won't find those.

comment by quila · 2024-08-02T13:04:14.537Z · LW(p) · GW(p)

I really don't want to contribute to this pattern that makes it hard to learn what's actually true, so in general I don't want whether I share what I've learned to be downstream from what I learn.

Another policy which achieves this is to research the question, and not (publicly) share your conclusion either way. This also benefits you in the case you become a dragon believer, because glomarizing (when you're known to follow this policy) provides no evidence you are one in that case. (Things this reminds me of: meta-honesty [? · GW], and updatelessness when compared to your position)

Another policy which achieves this is to share your conclusion only if you end up disbelieving in dragons, but also hedge it that you wouldn't be writing if you believed the other position. If you're known to follow this policy and glomarize about whether you believe in dragons, it is evidence that either you do or you haven't researched the question.
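
A rough way to see the difference, as a sketch (writing B for "believes dragons exist" and G for "glomarizes"; the labels are just illustrative):

Under the first policy, P(G | B) = P(G | ~B) = 1, so observing G has a likelihood ratio of 1 and carries no evidence about B.
Under the second policy, P(G | ~B, researched) = 0, so observing G implies either B or "hasn't researched the question".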

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-08-02T15:04:06.296Z · LW(p) · GW(p)

The trouble with this is that it's a socially awkward move to even imply you might research taboo topics. Better to leave your public persona and tricky glomarization out of it, I think. Just publish your research and results anonymously. That seems to me to lead to a better epistemic state for society, since a standard of anonymous publication doesn't leave a misleading bias in publicly available research.

Replies from: AspiringRationalist
comment by NoSignalNoNoise (AspiringRationalist) · 2024-08-02T23:04:36.274Z · LW(p) · GW(p)

To pick an uncontroversial example, imagine someone glomarizing about whether the Earth is flat or (approximately) spherical. That would signal that you're the sort of person who considered a spherical Earth to be a plausible hypothesis, which is almost as bad as actually believing it. All reasonable, right-thinking people, on the other hand, know that it's obviously flat and wouldn't even consider such nonsense.

Replies from: quila
comment by quila · 2024-08-03T13:00:04.161Z · LW(p) · GW(p)

This is why it's important for the policy to be known, so that the glomarization is evaluated as evidence under that specific policy, which might include something to the effect of "I follow this even in obvious cases so I'm free to also follow it in cases which are mistakenly framed as obvious".

That said, I'm not thinking about the 'mundane' world as Eliezer calls it, where doing this at all would be weird. I guess I'm thinking about the lesswrong blogosphere.

(There's a hypothetical spectrum from [having a glomarization policy at all is considered weird and socially-bad] to [it is not seen negatively, but you're not disincentivized from sharing non-exfohazardous beliefs, to begin with])

comment by ShardPhoenix · 2024-08-05T00:27:35.284Z · LW(p) · GW(p)

This seems mostly fine for anyone who doesn't engage in political advocacy or activism, but a mild-moderate form of defection against society if you do - because if dragons are real, society should probably do something about that, even if you personally can't.

edit: I guess dragon-agnosticism is tolerable if you avoid advocating for (and ideally voting for) policies that would be disastrous if dragons do in fact exist.

comment by Bucky · 2024-08-03T09:55:42.756Z · LW(p) · GW(p)

Do dragon unbelievers accept this stance? My impression is that dragon agnosticism would often be considered almost as bad as dragon belief.

Replies from: xpym, jkaufman
comment by xpym · 2024-08-03T10:13:23.356Z · LW(p) · GW(p)

They don't, of course, but if you're lucky enough not to be located among the more zealous of them and be subjected to mandatory struggle sessions, their wrath will generally be pointed at more conspicuous targets. For now, at least.

comment by jefftk (jkaufman) · 2024-11-18T01:38:26.143Z · LW(p) · GW(p)

No one has given me a hard time about it. I say things like "I haven't looked into it" and we move on. The next time it happens I will additionally be able to link to this post.

comment by faul_sname · 2024-08-01T21:47:45.353Z · LW(p) · GW(p)

It's fun to put various of my foundational non-hot-button-political beliefs in place of "dragons" and see which ones make my mind try to flinch away from thinking that thought for the reasons outlined in this post (i.e. "if I checked and it's not the way I thought, that would necessitate a lot of expensive and time-consuming updates to what I'm doing").

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-08-01T22:12:59.459Z · LW(p) · GW(p)

the reasons outlined in this post (i.e. "if I checked and it's not the way I thought, that would necessitate a lot of expensive and time-consuming updates to what I'm doing").

Wait, that's not what I'm trying to communicate in the post. If learning that dragons existed would precipitate major updates, it would very often be worth investigating their existence. Instead, it is on highly-political low-payoff topics that I am intentionally agnostic.

Replies from: faul_sname
comment by faul_sname · 2024-08-02T00:07:01.730Z · LW(p) · GW(p)

Concrete example: one of my core beliefs is " :::systems which get their productive capacity mainly from voluntary trade are both more productive and better for the people living under them than systems which get their productive capacity mainly from threats and coercion::: ". In this example, a "dragon" would be :::a coercion-based system which is more productive than a voluntary-trade-based one::: .

So working through the analogy:

  • Historically, people who believed in this kind of "dragon", and who acted on that belief, tended not to behave very nicely.
  • There are still people who believe in "dragons". They tend to valorize the actions of past dragon believers to a worrying extent.
  • I think "dragons" probably don't exist, but I haven't actually proven it.
  • I would prefer to live in a world where "dragons" don't exist.
  • If "dragons" did exist, I would feel obligated to spend a lot of time and effort reevaluating my world model and plans.
  • If I discovered the existence of a "dragon", and shared that, that would become a defining thing I am known for
  • Most likely, there wouldn't be much that I, personally, could do with the information that a "dragon" exists

I guess arguably this is a hot-button political issue in some contexts. The other example I was thinking of was less political but had a similar shape: "thing which, if true, implies that we can't expect to maintain certain nice things about the world we live in, and where people believing that the thing is true, if it is in fact true, would hasten the end of the nice things, to the benefit of nobody in particular".

ETA: On reflection I do agree that this is a different flavor of "nothing good can come of this line of thought" than the one you outlined in your post.

comment by lc · 2024-11-20T10:21:01.224Z · LW(p) · GW(p)

I think the entire point of rationalism is that you don't do things like this.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-11-21T03:22:25.166Z · LW(p) · GW(p)

Say more?

Replies from: lc
comment by lc · 2024-11-21T03:53:42.724Z · LW(p) · GW(p)

So one of the themes of the sequences is that deliberate self-deception or thought censorship - deciding to prevent yourself from "knowing" or learning things you would otherwise learn - is almost always irrational. Reality is what it is, regardless of your state of mind, and at the end of the day whatever action you're deciding to take - for example, not talking about dragons - you could also be doing if you knew the truth. So when you say:

But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.

It's not a reason not to investigate. You could continue to avoid these consequences you speak of by not writing about Dragons regardless of the results of your investigation. One possibility is that what you're also avoiding is guilt/discomfort that might come from knowing the truth and remaining silent. But through your decision not to investigate, the world is going to carry the burden of that silence either way.

Another theme of the sequences is that self-deception, deliberate agnosticism, and motivated reasoning are a source of surprising amounts of human suffering. Richard explains one way it goes horribly wrong here [LW · GW]. Whatever subject you're talking about, I'm sure there are a lot of other people in your position who have chosen not to look into it for the same reasons. But if all of those people had looked into it, and squarely faced whatever conclusion resulted, you yourself might not be in the position of having to face a harmful taboo in the first place. So the form of information hiding you endorse in the post is self-perpetuating, and is part of what helps keep the taboo strong.

comment by Orborde · 2024-08-02T18:29:44.967Z · LW(p) · GW(p)

It already seems like we can infer that dragon-existence has, to you, nontrivial subjective likelihood because you don't loudly proclaim "dragons don't exist" and because you regard investigation as uncomfortably likely to turn you into a believer of something socially unacceptable.

If you think it's in fact, like, 20% likely (a reasonable "nontrivial likelihood" guess for people to make), it seems like the angry dragons-don't-exist people should be 20% angry at you.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-08-02T18:44:58.113Z · LW(p) · GW(p)

I don't think inferring a probability anywhere near as high as 20% is justified. If, conditional on finding dragons, the value of the knowledge is lower than the harms of being known as a dragon believer, then you shouldn't go check no matter how low your prior.

comment by mike_hawke · 2024-08-04T20:20:56.813Z · LW(p) · GW(p)

I am agnostic about various dragons. Sometimes I find myself wondering how I would express my dragon agnosticism in a world where belief in dragons was prevalent and high status. I am often disturbed by the result of this exercise. It turns out that what feels like agnosticism is often sneakily biased in favor of what will make me sound better or let me avoid arguments.

This effect is strong enough and frequent enough that I don't think the agnosticism described by this post is a safe epistemic fallback for me. However, it might still be my best option in situations where I want to look good or avoid arguments.


Possibly related: 

Selective Reporting and the Tragedy of the Green Rationalists [LW · GW] by Zack M Davis

Kolmogorov Complicity and the Parable of Lightning by Scott Alexander

comment by cousin_it · 2024-08-02T16:14:31.039Z · LW(p) · GW(p)

I think even one dragon would have a noticeable effect on the population of large animals in the area. The huge flying thing just has to eat so much every day, it's not even fun to imagine being one. If we invent shapeshifting, my preferred shape would be some medium-sized bird that can both fly and dive in the water, so that it can travel and live off the land with minimal impact. Though if we do get such technology, we'd probably have to invent territorial expansion as well, something like creating many alternate Earths where the new creatures could live.

comment by kromem · 2024-08-02T06:27:58.980Z · LW(p) · GW(p)

I think you'll find that no matter what you find out in your personal investigation of the existence of dragons, that you need not be overly concerned with what others might think about the details of your results.

Because what you'll invariably discover is that the people that think there are dragons will certainly disagree with the specifics about dragons you found out that disagrees with what they think dragons should be, and the people that think there aren't dragons will generally refuse to even seriously entertain whatever your findings are relating to dragons, and the vast majority of people who aren't sure about the existence of dragons will dismiss the very idea of spending time thinking about the existence of dragons, reasoning that the existence or non-existence bears little influence on their lives (otherwise they likely would have investigated the issue and landed in a respective camp).

So investigate dragons all you like, and shout it from the rooftops if you please. The void will hear you and appreciate it as much as the void can, while everyone else is much more concerned with their own feelings about dragons than whatever your thinking or reasoning on the subject might offer.

The only real tragedy is that if you come away thinking there might be dragons, but the dragons you find are very different from the dragons people expect dragon-believing people to believe in - well that's somehow the specific niche where both the dragon believers and non-believers find rare common ground to roll their eyes and think you're nuts.

So maybe do your rooftop shouting to the sole listening void anonymously?

Replies from: jiao-bu
comment by Jiao Bu (jiao-bu) · 2024-08-04T16:16:42.256Z · LW(p) · GW(p)

It's also possible that an opposing effect happens where your shouting into the void about dragons connects in some vague way with my belief in the Illithids, and I then end up coopting your dragon evidence into my own agenda.  Especially if you find anything close to material evidence.  Heck, your material evidence for dragons now gives all kinds of credence to Illithids, beholders, gnomes, and all sorts.  So the gnome people and everyone else are now coming out of the woodwork to amplify your work on dragons.  And I think this would happen regardless of the specific nuances you attribute to dragons.  I would expect those nuances to get smooshed in the fray as people cite your once-and-for-all-proving-dragons work.

I mean, if Pons and Fleischmann's result were true, for example, I bet it would get trotted out with all kinds of half-baked theories on free energy, along with Tesla's name.  And the reason I'm making this bet is because these already do get trotted out into such discussions.

(Not that I would ever read those reports or have done any such research into repressed Pons and Fleischmann evidence or Illithid conspiracies)

Replies from: kromem
comment by kromem · 2024-08-05T05:33:51.182Z · LW(p) · GW(p)

Honestly that sounds a bit like a good thing to me?

I've spent a lot of time looking into the Epicureans being right about so much, thousands of years before those ideas resurfaced, despite not having the scientific method, and their success really boiled down to the analytical approach of being very conservative in dismissing false negatives or embracing false positives - a technique that I think is very relevant to any topics where experimental certainty is elusive.

If there is a compelling case for dragons, maybe we should also be applying it to gnomes and unicorns and everything else we can to see where it might actually end up sticking.

The belief that we already have the answers is one of the most damaging to actually uncovering them when we in fact do not.

comment by Dagon · 2024-08-02T00:01:08.669Z · LW(p) · GW(p)

What does "agnostic" mean, operationally?  I have trouble thinking you mean it in the direct sense (unknowable and not subject to testing), but maybe I'm wrong.  For myself, I'm not agnostic, I'm an unbeliever - I have a reasonably confident low estimate of the probability that dragons exist, in the common conceptions of dragons and existence.

That said, I don't spend a lot of time thinking or discussing the topic, and I am perfectly happy to nod and ignore people who think it's important (in either direction).  My private beliefs are somewhat decoupled from my public advocacy.  For many of the more rabid pro-dragon proselytizers, it's easier to get them out of my way if I say I'm agnostic, but that doesn't make it so, and I generally don't have to do that on LessWrong.

Replies from: jkaufman, frank-bellamy, Avnix
comment by jefftk (jkaufman) · 2024-08-02T01:22:44.259Z · LW(p) · GW(p)

Operationally it means that I'm not trying to find out the truth one way or the other. If I come across arguments I ignore them, if someone asks if they can explain it to me I say no, I try not to think about it, etc.

comment by River (frank-bellamy) · 2024-08-02T04:56:42.238Z · LW(p) · GW(p)

"Agnostic" doesn't necessarily mean "unknowable and not subject to testing". Much more often it has the weaker meaning "not currently known". There is a house being built across the street. Is there a work van parked in front of it right now? I don't know. This is certainly knowable and subject to testing - I could get up, walk over to a window in the front of the house, and look. But I don't care enough to do that, so I continue to now know if there is a work van parked in front of the house across the street. I am agnostic about the existence of such a work van.

comment by Sweetgum (Avnix) · 2024-08-02T04:43:05.561Z · LW(p) · GW(p)

Hmm, it seems like you might be treating this post as an allegory for religion because of the word "agnostic", but I'm almost certain that it's not. I think it's about "race science"/"human biodiversity"/etc., i.e. the claim "[ethnicity] are genetically predisposed to [negative psychological trait]".

Before I do that, though, it's clear that horrible acts have been committed in the name of dragons. Many dragon-believers publicly or privately endorse this reprehensible history. Regardless of whether dragons do in fact exist, repercussions continue to have serious and unfair downstream effects on our society.

While this could work as a statement about religious people, it seems a lot more true for modern racists than modern religious people.

Given that history, the easy thing to do would be to loudly and publicly assert that dragons don't exist. But while a world in which dragons don't exist would be preferable, that a claim has inconvenient or harmful consequences isn't evidence of its truth or falsehood.

This is the type of thing I often see LessWrongers say about race science.

But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.

Religious belief is not nearly as taboo as what this paragraph describes, but the claim "[ethnicity] are genetically predisposed to [negative psychological trait]" is.

Replies from: followthesilence
comment by followthesilence · 2024-08-02T06:32:11.176Z · LW(p) · GW(p)

May be a Rorschach... For me, of the dozen or so things I thought about replacing dragons with, "race science" wasn't one of them.

Replies from: Avnix
comment by Sweetgum (Avnix) · 2024-08-02T09:11:43.396Z · LW(p) · GW(p)

What did you think of?

Even if it wasn't meant to be an allegory for race science, I'm pretty sure it was meant to be an allegory for similarly-taboo topics rather than religion. Religious belief just isn't that taboo.

comment by Shankar Sivarajan (shankar-sivarajan) · 2024-08-01T21:33:14.470Z · LW(p) · GW(p)

As long as you don't also claim to be "truth-seeking" in any way, this form of intellectual cravenness is probably better than what most people do, which is just to adopt whatever belief is most convenient given their social circles.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-08-01T22:11:43.859Z · LW(p) · GW(p)

A general commitment to seeking truth doesn't obligate one to investigate every possible question! I can be quite committed to seeking truth in some areas, while intentionally avoiding quite unrelated ones.

One certainly shouldn't claim to be truth-seeking in areas where one is intentionally agnostic, but that's part of why I'm writing this post: so I can later link it to explain why I have chosen not to engage with some question in some area.

Replies from: adele-lopez-1
comment by Adele Lopez (adele-lopez-1) · 2024-08-02T06:18:22.452Z · LW(p) · GW(p)

Still, it feels like there's an important difference between "happening to not look" and "averting your eyes".

Replies from: jkaufman, jiao-bu
comment by jefftk (jkaufman) · 2024-08-02T14:08:20.838Z · LW(p) · GW(p)

I agree, but I strongly disagree with @Shankar Sivarajan [LW · GW] that if a person does this in some areas then they shouldn't "claim to be 'truth-seeking' in any way".

comment by Jiao Bu (jiao-bu) · 2024-08-04T16:35:36.801Z · LW(p) · GW(p)

This is probably true in an internal sense, where one needs to be self-honest.  It might be very difficult to understand when any conscious person other than you was doing this, and it might be dicey to judge even in yourself.  Especially given the finiteness of human attention.  

In my personal life, I have spent recent months studying.  Did I emotionally turn away from some things in the middle of this, so that to an outside observer I might have looked like I was burying my head or averting my eyes?  Sure.  Was I doing that or was I setting boundaries?  I guess even if you lived in my head at that time, it could be hard to know.  Maybe my obsessive studying itself is an avoidance.  In the end, I know what I intended, but that's about it.  That's often all we get, even from the inside.

So while I agree with you, I'm not sure exactly when we should cease to be agnostic about parsing that difference.  Maybe it's something we can only hold as an ideal, complementary to striving for Truth, basically?

comment by Orborde · 2024-08-02T07:24:44.899Z · LW(p) · GW(p)

You could look into whether dragons exist with the plan that you will never reveal any findings no matter what they are. I get that you probably wouldn't bother because most paths by which that information could be valuable require you to leak it, but it's an option.

comment by Michael Roe (michael-roe) · 2024-08-02T15:20:21.209Z · LW(p) · GW(p)

Some, at least, of these highly politically partisan hot-button issues have the property that most people don't have a reason for caring whether they're true or not. In which case, a shrug might be the reasonable response.

Possibly the idea of this thread is that we're not supposed to mention any real examples, to avoid getting caught up in culture wars.

I can think of examples where even if I am going to do (or not do) something based on whether the claim is true or not... (a) the risk of doing X if the claim is true seems small, (b) the cost of doing X if the claim is false is small, (c) the evidence for or against the claim looks really weak. So, shrug.

e.g. (true story) I am in the emergency room with tachycardia caused by Graves' disease. One of the ER docs and the endocrinologist have a really deeply technical argument over whether administration of Hartmann's solution via IV is a plus or minus in my situation. Listening to this, I gather that, really, there is not much in it. The nurse would like to know if she can go ahead and stick the IV drip in my arm. Shrug. Whatever. You have my patient consent for that procedure, yes.

Replies from: michael-roe
comment by Michael Roe (michael-roe) · 2024-08-02T15:34:24.353Z · LW(p) · GW(p)

Example chosen because (a) I have really, absolutely no idea what the optimal action is here; (b) I have reason to believe that the risk is kind of low, anyway; (c) most commenters here won't have a strong opinion, so we won't have a flame war over what the right answer might be. Let it stand in for other cases where it is very, very unclear what the optimal action is.

Replies from: michael-roe
comment by Michael Roe (michael-roe) · 2024-08-02T16:00:50.680Z · LW(p) · GW(p)

This analogy might not work for all the things "dragons" is standing in for in this thread ... but if I have a good statistical bound on the risk posed by dragons being low (but cannot, strictly speaking, rule out their existence entirely) I may conclude that a residual 1E-5 chance of running into one is an acceptable risk.

Replies from: michael-roe
comment by Michael Roe (michael-roe) · 2024-08-02T16:09:39.983Z · LW(p) · GW(p)

So if I see verified reports of AI causing a mass casualty incident with more than $500 million in damage (or whatever the threshold in the California bill is), I shall consider that evidence on a par with seeing Lake-Town get toasted by Smaug, and update accordingly.

comment by mako yass (MakoYass) · 2024-08-02T01:12:38.927Z · LW(p) · GW(p)

The solution to dragons, if they exist, is to shut up about it and solve the alignment problem.

This can be said about most of the things people like to argue about.

Replies from: jkaufman, Liriodendron
comment by jefftk (jkaufman) · 2024-08-02T01:26:05.689Z · LW(p) · GW(p)

I don't think refocusing my main efforts on the alignment problem would make humanity safer.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-08-02T11:19:23.706Z · LW(p) · GW(p)

For people disagree-voting [edit: at the time the parent was disagree-voted to -7], I'd be happy to see arguments that I should switch from trying to detect bioengineered pandemics [LW · GW] to alignment research.

Replies from: Raemon, lahwran
comment by Raemon · 2024-08-02T16:04:17.030Z · LW(p) · GW(p)

I don’t know if you should switch offhand, but I notice this post of yours is pretty old and a lot has changed. 

What’s your rough assessment of AI risk now?

If you haven’t thought about it explicitly, I think it’s probably worth spending a week thinking about. 

Also, how many people work on your current project? If you left, would that tank the project or be pretty replaceable? (Or: if you left are there other people who might then be more likely to leave?). 

Replies from: jkaufman
comment by jefftk (jkaufman) · 2024-08-02T18:58:18.964Z · LW(p) · GW(p)

What’s your rough assessment of AI risk now?

I think it's pretty important and I'm glad a bunch of people are working on it. I seriously considered switching into it in spring 2022 before deciding to go into biorisk.

Also, how many people work on your current project? If you left, would that tank the project or be pretty replaceable?

We're pretty small (~7), and I've recently started leading our near-term first team (four people counting me, trying to hire two more). I think I'm not very replaceable: my strengths are very different from others on the team in a highly complementary way, especially from a "let's get a monitoring system up and running now" perspective.

(I must admit to some snark in my short response to Mako above. I'm mildly grumpy about people going around as if alignment is literally the only thing that matters. But that's also not really what he was saying, since he was pushing back against my worrying about dragons and not my day job.)

Replies from: Raemon
comment by Raemon · 2024-08-02T19:02:04.842Z · LW(p) · GW(p)

Nod. I had initially remembered the Superintelligence Risk Project being more recent than 2017; was there a 2022 writeup of your decisionmaking?

comment by the gears to ascension (lahwran) · 2024-08-02T21:16:46.175Z · LW(p) · GW(p)

I think you should think about how your work generalizes between the topics, and try to make it possible for alignment researchers to take as much as they can from it; this is because I expect software pandemics are going to become increasingly similar to wetware pandemics, and so significant conceptual parts of defenses for either will generalize somewhat. That said, I also think that the stronger form of the alignment problem is likely to be useful to you directly on your work anyway; if detecting pandemics in any way involves ML, you're going to run into adversarial examples, and will quickly be facing the same collapsed set of problems (what objective do I train for? how well did it work? can an adversarial optimization process, e.g. evolution or malicious bioengineers, break this? what side effects will my system have if deployed?) as anyone who tries to deploy ML. If you're instead not using ML, I just think your system won't work very well and you're being unambitious at your primary goal, because serious bioengineered dangers are likely to involve present-day ML bio tools by the time they're a major issue.

But I think you in particular are doing something sufficiently important that it's quite plausible to me that you're correct. This is very unusual and I wouldn't say it to many people. (normally I'd just not bother directly saying they should switch to working on alignment because of not wanting to waste their time when I'm confident they won't be worth my time to try to spin up, and I instead just make noise about the problem vaguely in people's vicinity and let them decide to jump on it if desired.)

comment by Liriodendron · 2024-08-11T14:05:30.298Z · LW(p) · GW(p)

Congrats, we've just solved alignment! The AI now wants to know whether we want our utopia optimized in a way that accounts for dragons or not.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2024-08-11T20:56:14.777Z · LW(p) · GW(p)

It's a lot easier to work through that after you have superintelligent AI. (Everything is.)

comment by Nathan Young · 2024-08-02T20:22:13.392Z · LW(p) · GW(p)

I would perhaps prefer we had a list of three things we don't discuss (say Politics, Race science and Infohazards), and if we want to not discuss a new thing we have to allow discussion of one of those others. Seems better to be clear about what isn't being discussed.

Replies from: Raemon, thomas-kwa, ryan-freeman
comment by Raemon · 2024-08-02T20:33:06.750Z · LW(p) · GW(p)

See also Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists [LW · GW]. Which goes into how people who don't know that there's some implicit consensus not to talk about some things can come away confused and with wrong beliefs.

comment by Thomas Kwa (thomas-kwa) · 2024-08-03T03:55:35.605Z · LW(p) · GW(p)

I'm pro being clear about what we don't discuss, but it's unreasonable to limit the list to three. The number of topics that is net negative to discuss is just a fact about the world and is probably over three, and I would rather not have people talk about the 4th worst controversial topic just because we uncover three even more pointless and controversial ones.

Politics also seems inadvisable to ban because it's too broad.

comment by Purple bus (ryan-freeman) · 2024-08-08T14:03:18.136Z · LW(p) · GW(p)

I think implementing this policy would require frequent publicization of what issues are on the current list and frequent debates and votes on whether to replace one topic on the list with another topic not yet on the list and would therefore have the opposite of the intended effect. Maybe there's a clever way to get around those issues, but I doubt it.