Self-censorship is probably bad for epistemology. Maybe we should figure out a way to avoid it?

post by DaemonicSigil · 2023-03-19T09:04:42.360Z · LW · GW · 1 comments


Imagine a place where speaking certain thoughts out loud is widely considered to be harmful. Not because they are lies, spam, threats, or insults; most people in most places can agree those are harmful. No, the thing that makes this place unusual is that those who dwell there believe the following: that even if an idea is true, and unlikely to annoy, coerce, or offend the listener, it should still be suppressed, lest it spread across the world and cause great damage. Ideas of this type are deemed "Risky", and people carefully avoid communicating them.

The strange thing is that there is no authoritarian government banning the speaking of Risky ideas. Rather, the people there just seem to have decided that this is what they want for themselves. Most people reason about the world in such a way that it's simply obvious that some ideas are Risky, and that if we want to have nice things, we ought to avoid saying anything that could be Risky. Not everyone agrees with this; there are a few outliers who don't care much about Riskiness. But it's common to see such people rebuked by their peers for saying Risky things, or even for suggesting that they might say Risky things in the future. Occasionally some oblivious researcher will propose a project, only to be warned that such a project is ill-advised because it could turn up Risky results. When it comes to Risky ideas, self-censorship is the rule, even if it's more of a social norm than a legal rule.

Of course, because of this self-censorship, people in this place find it much harder to reason as a group. You can still think freely inside your own head, but if you need to know something that's inside someone else's head, you're likely to have a difficult time. Whenever they speak, they have to carefully avoid coming too close to Risky ideas. Sometimes, by making an intellectual detour, they manage to convey a roughly similar notion. More often, they opt to simply discard the offending branch of thought. Even when reasoning about topics that are on the surface unrelated to Riskiness, self-censorship still gets in the way, slowing down the discussion and introducing errors. How could it not? It's all one causally-connected world. Yet whatever the cost to good epistemology, the people there seem willing to pay it.

You may consider such a place strange, but I assure you that all this seems perfectly logical to its people; morally necessary, even. It doesn't occur to many of them that there's any other way things could be. Perhaps to some of you, this place is starting to sound a little familiar?

I am talking, of course, about LessWrong, and the question of "things that advance AI capabilities".

Now, to be clear, I'm not saying that the concept of Riskiness is bullshit in its entirety. While I may disagree about where we should draw the line, I certainly would not publish the blueprints for a microwave nuke, and neither would I publish the design of a Strong AGI. What I am suggesting is that maybe we should consider our inability to talk openly about "capabilities stuff" to be a major handicap, and apply a lot more effort to removing it. We hope to solve AI alignment. If we can't even talk about large chunks of the very problem we're trying to solve, that sounds like we're in trouble.

What can we do?

I encourage everyone to try and think of their own ideas, but here are some that I came up with:

1 comment


comment by Thomas Sepulchre · 2023-03-22T14:57:12.359Z · LW(p) · GW(p)

The title of this post is misleading. This post is not about self-censorship; it is specifically about whether or not "things that advance AI capabilities" should be discussed on LessWrong.