Communicating effectively under Knightian norms

post by Richard_Ngo (ricraz) · 2023-04-03T22:39:58.350Z · LW · GW · 54 comments


tl;dr: rationalists concerned about AI risk often make claims that others consider not just unjustified, but unjustifiable using their current methodology, because of high-level disagreements about epistemology. If you actually want to productively discuss AI risk, make claims that can be engaged with by others who have a wide range of opinions about the appropriate level of Knightian uncertainty.

I think that many miscommunications about AI risk are caused by a difference between two types of norms for how to talk about the likelihoods of unprecedented events. I'll call these "inside view norms" versus "Knightian norms". Roughly: under inside view norms, the credences you report are the ones your own models and arguments actually output; under Knightian norms, you adjust your reported credences towards uncertainty to account for unknown unknowns and the possibility that your whole framing of the question is mistaken.

I'll give a brief justification of why Knightian norms seem reasonable to me, since I expect they're counterintuitive for most people on LW. On a principled level: when reasoning about complex domains like the future, the hardest part is often "knowing the right questions to ask", or narrowing down on useful categories at all; there are many different ways in which a question can turn out to be the wrong one to ask.

I therefore consider Knightian norms to be appropriate when you're reasoning about a domain in which these considerations seem particularly salient. I give some more clarifications at the end of the post (in particular on why I think Knightian norms are importantly different from modesty norms). However, I'm less interested in debating the value of Knightian norms directly, and more interested in their implications for how to communicate. If one person is following inside view norms and another is following Knightian norms, that can cause serious miscommunications between them, especially if they disagree about which level of Knightian uncertainty is appropriate. So I endorse the following principle:

Communicate about your beliefs in ways that are robust to the level of Knightian uncertainty that your listeners interpret you as incorporating into your claims.

The justification for this principle is simple: we should prefer to focus discussions on the object-level models that we intended to communicate about, rather than on higher-level epistemological questions like "how much Knightian uncertainty is reasonable?"

Here's one hypothetical dialogue which fails to follow this principle:

Person 1: "My credence in AI x-risk is 90%"
Person 2: "But how can you have such high credence in such an unprecedented possibility, which it's very hard to gather direct empirical evidence about?"
Person 1: "I've thought about it a lot, and I see vanishingly few plausible paths to survival."
Person 2: "The idea that you can forecast these paths to the degree required to justify that level of confidence is incredibly implausible. Nobody can do that."
Person 1: [gets frustrated]
Person 2: [gets frustrated]

The core problem in this dialogue is that it gets derailed into the epistemological question of how much Knightian uncertainty is reasonable, rather than engaging with person 1's object-level reasons for expecting catastrophe.

By contrast, suppose that person 1 started by saying: "I've thought about the future of AI a lot, and I see vanishingly few plausible paths by which we survive." Person 2 might follow Knightian norms in which this epistemic state justifies a 10% credence in catastrophe, or a 50% credence, or a 90% credence. Importantly, though, that doesn't matter very much! In any of those cases they're able to directly engage with person 1's reasons for thinking there are few paths to survival, without getting sidetracked into the question of whether or why they follow Knightian norms to a different extent. (At some point the Knightian disagreement does become action-relevant - e.g. someone who believes in very strong Knightian uncertainty would want to focus much more on "preparing for the unexpected". But in practice I think it often has little or no action-relevance, because most of the time the best strategy is whichever one is best justified by your actual models.)

Am I just recommending "don't use credences directly in cases where others are likely following Knightian norms"? I do think that's a good first step, but I also think there's more to it than that. In particular, I advise describing the epistemic state behind your claims (how much you've thought about the question, what your models actually say, and where they might break) rather than just the bottom-line credences those models output.

I think relatively simple changes in accordance with these principles could significantly improve communication on these topics (with the main downside being a small loss of rhetorical force). So I strongly urge that people communicating outside the LW bubble (about AI x-risk in particular) follow these principles in order to actually communicate about the things they want to communicate about, rather than communicating implications like "I have a greater ability to predict the future than you thought humanly possible", which is what's currently happening.

 

Endnotes: Some further clarifications on why Knightian norms make sense:

  1. Isn't it self-contradictory to try to estimate "unquantifiable" Knightian uncertainty? It is if you try to use the content of your model to make the estimate; but not if you use the type of evidence in favor of the model - how long you've thought about the topic, how many successful predictions you've made so far, etc. The stronger the types of evidence available, the less Knightian uncertainty you should estimate.
  2. Don't Knightian norms lead people to report credences which don't sum to 1? Well, it depends. The best way to give explicit credences under Knightian norms probably looks something like "X% true, Y% false, Z% Knightian uncertainty". But in practice people (including myself) often round this off to "X% true, Y+Z% false", where the "true" side is the one that they consider to have the burden of proof. However, this is a pretty unprincipled approach which presumably gives rise to many inconsistencies. That's part of why I advocate for not reporting credences in these domains. (A toy numerical sketch of this rounding-off appears just after these endnotes.)
  3. Knightian norms are closely related to modesty norms, but also importantly different. Knightian norms are just about your inability to dodge all the unknown unknowns; whether others agree or disagree is immaterial. In practice they're often invoked together, but for the sake of this post I'll treat them as fully separate.
  4. An important aspect of Knightian uncertainty is that it's often not very decision-relevant, because it's hard for possibilities you can't conceptualize to inform your actual actions in the world. So you might wonder: why include it in your reports at all? My answer here is that it's often hard to tell where regular uncertainty ends and Knightian uncertainty begins, and so reporting your full uncertainty, including the Knightian component, is a good Schelling point.
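
A toy numerical sketch of the rounding-off described in endnote 2 (the function name and the specific figures below are mine, purely for illustration):

```python
# Toy illustration of endnote 2: an explicit three-way report under Knightian
# norms, and the lossy "round off" that assigns the Knightian share against
# whichever side is considered to bear the burden of proof.

def round_off(p_true, p_false, p_knightian, burden_on="true"):
    """If the "true" side bears the burden of proof, fold the Knightian
    share into "false" (and vice versa), as described in endnote 2."""
    assert abs(p_true + p_false + p_knightian - 1.0) < 1e-9
    if burden_on == "true":
        return {"true": p_true, "false": p_false + p_knightian}
    return {"true": p_true + p_knightian, "false": p_false}

# Full report: 50% true, 20% false, 30% Knightian uncertainty.
print(round_off(0.5, 0.2, 0.3))  # {'true': 0.5, 'false': 0.5}
```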

54 comments

Comments sorted by top scores.

comment by dxu · 2023-04-04T00:43:44.160Z · LW(p) · GW(p)

The sequence starting with this post [LW · GW] seemed to me at the time I read it to be a good summary of reasons to reject "Knightian" uncertainty as somehow special, and it continues to seem that way as of today.

Replies from: Jsevillamol
comment by Jsevillamol · 2023-04-04T06:38:19.228Z · LW(p) · GW(p)

Note that Richard is not treating Knightian uncertainty as special and unquantifiable; instead he is giving examples of how to treat it like any other uncertainty, which he explicitly quantifies and incorporates into his predictions.

I'd prefer calling Richard's version "model error" to separate the two, but I'm also okay with appropriating the term as Richard did to point to something coherent.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-04-04T14:29:55.224Z · LW(p) · GW(p)

Yeah, so I have mixed feelings about this. One problem with the Knightian uncertainty label is that it implies some level of irreducibility; as Nate points out in the sequence (and James points out below), there are in fact a bunch of ways of reducing it, or dividing it into different subcategories.

On the other hand: this post is mainly not about epistemology, it's mainly about communication. And from a communication perspective, Knightian uncertainty points at a big cluster of things that make up a large part of what blocks rationalists and non-rationalists from communicating effectively about AI. E.g. as Nate points out:

That said, many of the objections made by advocates of Knightian uncertainty against ideal Bayesian reasoning are sound objections: the future will often defy expectation. In many complicated scenarios, you should expect that the correct hypothesis is inaccessible to you. Humans lack introspective access to their credences, and even if they didn't, such credences often lack precision.

Most of the advice from the Knightian uncertainty camp is good. It is good to realize that your credences are imprecise. You should often expect to be surprised. In many domains, you should widen your error bars. But I already know how to do these things [? · GW].

So if you have the opinion that Nate and many other rationalists don't know how to do these things enough, then you could either debate them about epistemology, or you could say "we have different views about how much you should do this cluster of things that Knightian uncertainty points to, let's set those aside for now and actually just talk about AI". I wish I'd had that mental move available to me in my conversations with Eliezer so that we didn't get derailed into philosophy of science; and so that I spent more time curious and less time annoyed at his overconfidence. (And all of that applies orders of magnitude more to mainstream scientists/ML researchers hearing these arguments.)

comment by Vaniver · 2023-04-04T01:18:09.881Z · LW(p) · GW(p)

I think that it's likely sometimes the case that this is just a miscommunication, but I think often the disagreement is just straightforwardly about epistemology / how much Knightian uncertainty is justified. Like, considering Tyler Cowen's recent post, I see the primary content as being (paraphrased) "our current civilization is predicated on taking an optimistic view to Knightian uncertainty / letting people act on their own judgment, and we should keep doing what worked in the past." The issue isn't that Eliezer said ">90% doom" instead of "at least 10% doom", it's that Eliezer thinks this isn't the right way to reason about AI as compared to other technologies. 

[To the extent you're just saying "look, this is an avoidable miscommunication that's worth paying the price to avoid", I think I agree; I'm just suspecting the disagreement isn't just surface-level.]

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-04-04T14:41:59.849Z · LW(p) · GW(p)

I guess this depends on what you mean by "surface-level". I do think there are deep philosophical disagreements here about Knightian uncertainty. I just don't know whether they matter much, conditional on actually getting people to sit down and talk details about AI. (I don't know if the miscommunication with Tyler specifically is avoidable or not, but it seems like we agree that it is in many other cases.)

Maybe one case you might make is: look, MIRI thinks the people working on ML safety at big labs aren't doing anything useful, and they know all the arguments, and maybe getting them to converge on the Knightian uncertainty stuff is important in bridging the remaining gap.

But even from that perspective, that sure seems like very much "step 2", where step 1 is convincing people "hey, this AI risk stuff isn't crazy" - which is the step I claim this post helps with.

(And I strongly disagree with that perspective regardless, I think that the key step here is just making the arguments more extensively, and the fact that all the unified cases for AI risk have been written by more ML-safety-sympathetic people like me, Ajeya, and Joe (with the single exception of "AGI ruin" EDIT: and of course Superintelligence) is indicative that that strategy mostly hasn't been tried.)

Replies from: So8res
comment by So8res · 2023-04-04T23:44:46.510Z · LW(p) · GW(p)

the fact that all the unified cases for AI risk have been written by more ML-safety-sympathetic people like me, Ajeya, and Joe (with the single exception of "AGI ruin") is indicative that that strategy mostly hasn't been tried.

I'm not sure what you mean by this, but here's half-a-dozen "unified cases for AI risk" made by people like Eliezer Yudkowsky, Nick Bostrom, Stuart Armstrong, and myself:

2001 - https://intelligence.org/files/CFAI.pdf
2014 - https://smarterthan.us/
2014 - Superintelligence
2015 - https://intelligence.org/2015/07/24/four-background-claims/
2016 - https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/
2017 - https://intelligence.org/2017/04/12/ensuring/

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-04-05T04:27:55.817Z · LW(p) · GW(p)

There's a type signature that I'm trying to get at with the "unified case" description (which I acknowledge I didn't describe very well in my previous comment), which I'd describe as "trying to make a complete argument (or something close to it)". I think all the things I was referring to meet this criterion; whereas, of the things you listed, only Superintelligence seems to, with the rest having a type signature more like "trying to convey a handful of core intuitions". (CFAI may also be in the former category, I haven't read it, but it was long ago enough that it seems much less relevant to questions related to persuasion today.)

It seems to me that this is a similar complaint as Eliezer's when he says in List of Lethalities:

"The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that."

except that I'm including a few other pieces of (ML-safety-sympathetic) work in the same category.

comment by Lauro Langosco · 2023-04-04T14:08:58.850Z · LW(p) · GW(p)

I agree with what I read as the main direct claim of this post, which is that it is often worth avoiding making very confident-sounding claims, because it makes it likely for people to misinterpret you or derail the conversation towards meta-level discussions about justified confidence.

However, I disagree with the implicit claim that people who confidently predict AI X-risk necessarily have low model uncertainty. For example, I find it hard to predict when and how AGI is developed, and I expect that many of my ideas and predictions about that will be mistaken. This makes me more pessimistic, rather than less, since it seems pretty hard to get AI alignment right if we can't even predict basic things like "when will this system have situational awareness", etc.

Replies from: dxu
comment by dxu · 2023-04-04T23:51:43.945Z · LW(p) · GW(p)

For example, I find it hard to predict when and how AGI is developed, and I expect that many of my ideas and predictions about that will be mistaken. This makes me more pessimistic, rather than less, since it seems pretty hard to get AI alignment right if we can't even predict basic things like "when will this system have situational awareness", etc.

Yes, and this can be framed as a consequence of a more general principle, which is that model uncertainty doesn't save you from pessimistic outcomes unless your prior (which after all is what you fall back to in the subset of possible worlds where your primary inside-view models are significantly flawed) offers its own reasons to be reassured. And if your prior doesn't say that (and for the record: mine doesn't), then having model uncertainty doesn't actually reduce P(doom) by very much!
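
A toy numerical version of this point (the function and numbers are my own, purely illustrative):

```python
# Illustrative only: total P(doom) as a mixture of an inside-view model and a
# fallback prior. Model uncertainty only lowers P(doom) if the prior itself
# offers reasons to be reassured.

def p_doom(p_model_correct, p_doom_if_correct, p_doom_under_prior):
    return (p_model_correct * p_doom_if_correct
            + (1 - p_model_correct) * p_doom_under_prior)

# Inside view says 90% doom, but we give that model only 50% credence.
print(p_doom(0.5, 0.9, p_doom_under_prior=0.1))  # reassuring prior:  ~0.50
print(p_doom(0.5, 0.9, p_doom_under_prior=0.7))  # pessimistic prior: ~0.80
```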

comment by James Payor (JamesPayor) · 2023-04-04T00:28:28.781Z · LW(p) · GW(p)

Isn't there a third way out? Name the circumstances under which your models break down.

e.g. "I'm 90% confident that if OpenAI built AGI that could coordinate AI research with 1/10th the efficiency of humans, we would then all die. My assessment is contingent on a number of points, like the organization displaying similar behaviour wrt scaling and risks, cheap inference costs allowing research to be scaled in parallel, and my model of how far artificial intelligence can bootstrap. You can ask me questions about how I think it would look if I were wrong about those."

I think it's good practice to name ways your models can break down that you think are plausible, and also ways that your conversational partners may think are plausible.

e.g. even if I didn't think it would be hard for AGI to bootstrap, if I'm talking to someone for whom that's a crux, it's worth laying out that I'm treating that as a reliable step. It's better yet if I clarify whether it's a crux for my model that bootstrapping is easy. (I can in fact imagine ways that everything takes off even if bootstrapping is hard for the kind of AGI we make, but these will rely more on the human operators continuing to make dangerous choices.)

Replies from: LosPolloFowler, JamesPayor
comment by Stephen Fowler (LosPolloFowler) · 2023-04-04T03:37:45.408Z · LW(p) · GW(p)

I may be misunderstanding, but does this miss the category of unknown unknowns that are captured by Knightian uncertainty? For extremely out-of-distribution scenarios you may not be able to accurately anticipate even the general category of ways things could go wrong.

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2023-04-04T05:15:34.817Z · LW(p) · GW(p)

Idk if it's actually missing that?

I can talk about what is in-distribution in terms of a bunch of finite components, and thereby name the cases that are out of distribution: those in which my components break.

(This seems like an advantage inside views have, they come with limits attached, because they build a distribution out of pieces that you can tell are broken when they don't match reality.)

My example doesn't talk about the probability I assign on "crazy thing I can't model", but such a thing would break something like my model of "who is doing what with the AI code by default".

Maybe it would have been better of me to include a case for "and reality might invalidate my attempt at meta-reasoning too"?

Replies from: LosPolloFowler
comment by Stephen Fowler (LosPolloFowler) · 2023-04-04T07:57:49.450Z · LW(p) · GW(p)

Your comment has made me think I've misunderstood something fundamental. 

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2023-04-04T12:11:00.142Z · LW(p) · GW(p)

Hm, sorry! I don't think a good reply on my part should do that :P

I think I'm rejecting a certain mental stance toward unknown-unknowns, and I don't think I'm clearly pointing at it yet.

comment by James Payor (JamesPayor) · 2023-04-04T00:31:04.877Z · LW(p) · GW(p)

tl;dr I think you can improve on "my models might break for an unknown reason" if you can name the main categories of model-breaking unknowns

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-04-04T04:10:03.373Z · LW(p) · GW(p)

If you know what it would look like if you were wrong about any of these, then that's uncertainty that's captured by your model.

Obviously there's a spectrum here, but it sounds like you're describing something significantly more specific than the central examples of Knightian uncertainty in my mind. E.g. to me Knightian uncertainty looks less like uncertainty about how far AI can bootstrap, and more like uncertainty about whether our current conception of "bootstrapping" will even turn out to be a meaningful and relevant concept.

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2023-04-04T12:03:04.830Z · LW(p) · GW(p)

My nearby reply [LW(p) · GW(p)] is most of my answer here. I know how to tell when reality is off-the-rails wrt my model, because my model is made of falsifiable parts. I can even tell you about what those parts are, and about the rails I'm expecting reality to stay on.

When I try to cash out your example, "maybe the whole way I'm thinking about bootstrapping isn't meaningful/useful", it doesn't seem like it's outside my meta-model? I don't think I have to do anything differently to handle it?

Specifically, my "bootstrapping" concept comes with some concrete pictures of how things go. I currently find the concept "meaningful/useful" because I expect these concrete pictures to be instantiated in reality. (Mostly because I expect reality to admit the "bootstrapping" I'm picturing, and I expect advanced AI to be able to find it). If reality goes off-my-rails about my concept mattering, it will be because things don't apply in the way I'm thinking, and there were some other pathways I should have been attending to instead.

comment by Richard_Ngo (ricraz) · 2023-04-12T15:03:24.425Z · LW(p) · GW(p)

Just stumbled upon this post by Nate [LW · GW] where he describes how he... hacked his System 1 to ignore any Knightian uncertainty and unknown unknowns? Which is, like... the textbook way to make sure that you're wildly uncalibrated a few years down the line, and in fact precisely what has happened. Man.

I have invoked Willful Inconsistency on only two occasions, and they were similar in nature. Only one instance of Willful Inconsistency is currently active, and it works like this:

I have completely and totally convinced my intuitions that unfriendly AI is a problem. A big problem. System 1 operates under the assumption that UFAI will come to pass in the next twenty years with very high probability.

You can imagine how this is somewhat motivating.

On the conscious level, within System 2, I'm much less certain. I solidly believe that UFAI is a big problem, and that it's the problem that I should be focusing my efforts on. However, my error bars are far wider, my timespan is quite broad. I acknowledge a decent probability of soft takeoff. I assign moderate probabilities to a number of other existential threats. I think there are a large number of unknown unknowns, and there's a non-zero chance that the status quo continues until I die (and that I can't later be brought back). All this I know.

But, right now, as I type this, my intuition is screaming at me that the above is all wrong, that my error bars are narrow, and that I don't actually expect the status quo to continue for even thirty years.

This is just how I like things.

Replies from: Quadratic Reciprocity
comment by Quadratic Reciprocity · 2023-04-13T18:21:16.869Z · LW(p) · GW(p)

Wow, the quoted text feels scary to read. 

I have met people within effective altruism who seem to be trying to do scary, dark things to their beliefs/motivations, which feels in the same category, like trying to convince themselves they don't care about anything besides maximising impact or reducing x-risk. The latter, in at least one case, involved thinking a lot about dying due to AI in order to start caring about it more, which can't be good for thinking clearly, at least in the way they described it.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T01:45:07.905Z · LW(p) · GW(p)

I find Knightian uncertainty compelling only if the Knightian is making a principled argument. If they can explain why we must be Knightians on this specific topic, and if that rationale seems consistent with their other views, then I am fine with it.

If you are a Knightian because 'well, this just feels intuitively unpredictable to me,' fine, but recognize that support for this argument ends with our trust in your intuition, and recognize that if you give hard number estimates and confident predictions in fielding questions about other similarly unpredictable-seeming situations, we'll have reason to doubt if you have any kind of principled epistemic stance other than trusting your gut.

This is particularly true when we can observe the system completely and intervene in it continuously over time. We can't predict a game of pinball past n bounces, but if you are playing a game of pinball, you can, with sufficient skill, both control the game to avoid uncertainty and predict a sufficient number of bounces ahead to keep the game going for an arbitrary amount of time.

Gwern has a great comment here [LW · GW] emphasizing that, while chaotic systems do exist in which our ability to make predictions faces quantifiably hard physical limits within certain constraints, those constraints are cruxy and need to be the focus of debate.

The nice thing about non-Knightian arguments is that they expose themselves completely to debate. "Here is my twelve-step argument for AI risk" offers you twelve explicit steps that you can individually attack. Knightian uncertainty ought to offer you a similar twelve-step argument for why we just can't know. The pinball post [LW · GW] gives such an argument for why even a superintelligence can't predict a game of pinball (within the specified constraints). That is virtuous Knightianism - a limited argument for a defined quantity of absolute uncertainty, subject to specific constraints.

So while I do agree that it's good to make arguments in terms that are compatible with the views of a person with greater Knightian uncertainty, I think it is also important for people with high levels of Knightian uncertainty to articulate the specific reasons why they hold that view in a way that opens them up for debate, in the same way that non-Knightian arguments can be debated.

Replies from: D0TheMath, martin-randall
comment by Garrett Baker (D0TheMath) · 2023-04-04T05:29:11.981Z · LW(p) · GW(p)

I find Knightian uncertainty compelling only if the Knightian is making a principled argument. If they can explain why we must be Knightians on this specific topic, and if that rationale seems consistent with their other views, then I am fine with it.

I have yet to see someone take a real world scenario and make a principled argument for why Knightian uncertainty is necessary. Do you know of any examples?

Replies from: AllAmericanBreakfast, TAG
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T06:09:32.205Z · LW(p) · GW(p)

No, although I assume that says more about me than Knightian thought.

Edit: see my other comment responding to Martin above.

comment by TAG · 2023-04-04T17:13:08.277Z · LW(p) · GW(p)

Why its existence is necessary, or why taking it into account is necessary?

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-04-04T21:13:54.845Z · LW(p) · GW(p)

Why taking it into account is useful. Bonus points for a situation where the method described doesn’t just approximate Bayesian inference.

Replies from: TAG
comment by TAG · 2023-04-05T00:13:38.194Z · LW(p) · GW(p)

If there are unknown unknowns, probability cannot even be quantified in a principled way.

The one useful thing about unknown unknowns is that they apply equally to everything... so they cannot change the ranking, the order, of your hypotheses, even if they stop you being able to quantify them.

And they do stop you being able to quantify them. If some unknown unknown could defeat your hypotheses, then you have no idea how true they are in absolute terms, how close they are to 1.000. Having a rule that 1.000 is not a probability does not help, because it doesn't tell you how close you can get to 1.000.

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-04-05T12:41:32.867Z · LW(p) · GW(p)

This sounds either useless or like the kind of thing I can indeed quantify. To the extent this consideration changes my actions, for instance, making me advocate against AI regulation because how sure could I possibly be in AI risk, it implicitly is arguing in favor of the set of worlds where my interests are better served by not having AI regulation. It is making the prediction that for reasons I know not AI will not kill everyone. Now I want to know whether this is actually true. Are there in fact more unknown unknowns which produce this outcome? I think I use counting arguments or Bayesianism for this, and then quantify the results.

If this were Knightian uncertainty, I think I would start finding Vanessa Kosoy’s argument that I should be using worst case guarantees more intuitively appealing. But I don’t here, so I think you may be misrepresenting Knightians.

Replies from: TAG
comment by TAG · 2023-04-05T12:54:00.913Z · LW(p) · GW(p)

This sounds either useless

It's not supposed to be useful: it's supposed to be true. Rationalists should not fool themselves about how accurate they are being.

or like the kind of thing I can indeed quantify.

Again, "can quantify" can only mean "can make a blind guess about". They're unknown unknowns.

To the extent this consideration changes my actions, for instance, making me advocate against AI regulation because how sure could I possibly be in AI risk, it implicitly is arguing in favor of the set of worlds where my interests are better served by not having AI regulation.

I wasn't implying anything specifically about AI regulation. I did state that the ordering of your preferences remains unaffected: you are repeating that back to me as though it is an objection.

Replies from: D0TheMath, TAG
comment by Garrett Baker (D0TheMath) · 2023-04-05T13:08:56.269Z · LW(p) · GW(p)

It's not supposed to be useful: it's supposed to be true.

I asked for something useful in the first place. I don't care about self-proclaimed useless versions of the statement "you may be wrong". It has no use! That is not what most mean when they talk about Knightian uncertainty; I know that because they usually say that Knightian uncertainty means you should do something different from what you would do otherwise.

Replies from: TAG
comment by TAG · 2023-04-05T13:12:38.911Z · LW(p) · GW(p)

I asked for something useful in the first place.

You did, but you should not ignore inconvenient truths, whether or not you want to. Rationality is normative.

That is not what most mean when they talk about Knightian uncertainty, I know that, because they usually say the Knightian uncertainty means you should so something different from what you would do otherwise.

Who's "they"... Cowen, or every Knightian ever?

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-04-05T13:14:21.556Z · LW(p) · GW(p)

I would rather not continue this conversation, have a good day.

comment by TAG · 2023-04-06T13:44:46.962Z · LW(p) · GW(p)

To refine that a bit, KU affects the absolute probability of your hypotheses, but not the relative probability. So it shouldn't affect what you believe, but it should affect how strongly you believe it. How strongly you believe something can easily have an impact on what you do: for instance, you should not start WWIII unless you are very certain it is the best course of action.

You can incorporate KU into Bayes by reserving probability mass for stuff you don't know or haven't thought of, but you are not forced to. (As a corollary, Bayesians who offer very high probabilities aren't doing so.) If the arguments for KU are correct, you should do that as well. Probability estimates are less useful than claims of certainty, but Bayesians are using them because their epistemology tells them that certainty isn't available: similarly, absolute probability should be abandoned if it is not available.
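
A toy sketch of the reserved-mass idea (my own numbers and function name, purely illustrative); note that the named hypotheses' absolute probabilities shrink while their ranking is unchanged:

```python
# Toy illustration: hold back some probability mass for hypotheses you haven't
# thought of, rescaling the named hypotheses proportionally. This changes their
# absolute probabilities but preserves their relative ordering.

def with_reserved_mass(named_probs, unknown_mass):
    scale = 1.0 - unknown_mass
    return {h: p * scale for h, p in named_probs.items()}

inside_view = {"doom": 0.9, "fine": 0.1}
print(with_reserved_mass(inside_view, unknown_mass=0.3))
# roughly {'doom': 0.63, 'fine': 0.07}; the remaining 0.3 is reserved, and
# "doom" still ranks above "fine".
```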

comment by Martin Randall (martin-randall) · 2023-04-04T02:01:29.339Z · LW(p) · GW(p)

Is the pinball problem really Knightian Uncertainty? I think we can form a model of the problem that tells us what we can know about the path of the ball, and what we can't. We can calculate how our uncertainty grows with each bounce. I thought Knightian Uncertainty was more related to questions like "what if there is a multiball bonus if you hit the bumpers in the right order?"

Replies from: AllAmericanBreakfast, AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T16:53:01.060Z · LW(p) · GW(p)

Let me say a little more about the "is this Knightian uncertainty" question.

Here are some statements about Knightian uncertainty from the Wikipedia page:

In economics, Knightian uncertainty is a lack of any quantifiable knowledge about some possible occurrence, as opposed to the presence of quantifiable risk (e.g., that in statistical noise or a parameter's confidence interval). The concept acknowledges some fundamental degree of ignorance, a limit to knowledge, and an essential unpredictability of future events...

However, the concept is largely informal and there is no single best formal system of probability and belief to represent Knightian uncertainty...

Taleb asserts that Knightian risk does not exist in the real world, and instead finds gradations of computable risk.

Qualitatively, we can say that there is no widely accepted formal definition of Knightian uncertainty, and it's disputed whether it is actually a meaningful concept at all.

The Ellsberg paradox is taken to illustrate Knightian uncertainty - a barrel either holds 2/3 yellow and 1/3 black balls, or 2/3 black and 1/3 yellow balls, but you don't know which.

Personally, I just don't see a paradox here. You start with probability uniformly distributed, and in this case, you have no other evidence to update with, so you assign an equal 50% chance to the possibility that the barrel is majority-black or majority-yellow. If I had some psychological insight into what the barrel-filler would do, then I can update based on that information.
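
As a toy check of that reasoning, using the two-composition barrel described above and the uniform prior I mentioned (the code is just my own illustration):

```python
# Toy check of the uniform-prior reasoning above, for the two-composition
# barrel described in this comment.

compositions = {
    "majority_yellow": {"yellow": 2 / 3, "black": 1 / 3},
    "majority_black":  {"yellow": 1 / 3, "black": 2 / 3},
}
prior = {name: 0.5 for name in compositions}  # no evidence favoring either

p_yellow = sum(prior[name] * comp["yellow"] for name, comp in compositions.items())
print(p_yellow)  # ~0.5 -- an ordinary probability, nothing incalculable
```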

In another MIT description of Knightian uncertainty, they offer another example:

An airline might forecast that the risk of an accident involving one of its planes is exactly one per 20 million takeoffs. But the economic outlook for airlines 30 years from now involves so many unknown factors as to be incalculable.

First of all, this doesn't seem entirely incalculable (assuming we can come up with a definition of 'economic outlook'). If we want to know, say, the airline miles per year, we can pick a range from 0 to an arbitrarily high number X and say "I'm at least 99% sure it's between 0 and X." And maybe we are 70% confident that the economy will grow between now and then, and so we can say we're even more confident that it's between [current airline miles per year, X]. And so once again, while our error bars are wide, there's nothing literally incalculable here.

The same article also acknowledges the controversy with a reverse spin in which almost everything is Knightian, and non-Knightian risk is only when risks are precisely calculable:

Some economists have argued that this distinction is overblown. In the real business world, this objection goes, all events are so complex that forecasting is always a matter of grappling with “true uncertainty,” not risk; past data used to forecast risk may not reflect current conditions, anyway. In this view, “risk” would be best applied to a highly controlled environment, like a pure game of chance in a casino, and “uncertainty” would apply to nearly everything else.

And if we go back to Knight himself,

Knight distinguished between three different types of probability, which he termed: “a priori probability;” “statistical probability” and “estimates”. The first type “is on the same logical plane as the propositions of mathematics;” the canonical example is the odds of rolling any number on a die. “Statistical probability” depends upon the “empirical evaluation of the frequency of association between predicates” and on “the empirical classification of instances”. When “there is no valid basis of any kind for classifying instances”, only “estimates” can be made.

So in fact, even under Knightian uncertainty, we can still make estimates! We don't have to throw up our hands and say "I just don't know, we're in a separate magisterium because this uncertainty is Knightian!" We are just saying "I can't deduce the probabilities from mathematical argument, I don't have a precise definition of the probability distribution, and so I must estimate what the outcomes might be and how likely they are."

And that is exactly what people who put hard-number estimates on the likelihood of AI doom are doing. When Scott Alexander says "33% risk of AI doom" or Eliezer puts it at 90%, they are making estimates, and that is clearly a display of Knightian uncertainty as Knight would have defined it himself.

When others say "no, you can't put any sort of hard probability on it, don't even make an estimate!" they are not displaying Knightian uncertainty, they're just rejecting the debate topic entirely.

Overall, as I delve into this, the examples of uncertainty purported to be Knightian just seem to be the sort of thing superforecasters have to estimate. Everything on Metaculus is an exercise in dealing with Knightian uncertainty. Every score on Metaculus results from forecasters establishing base rates, updating based on inside view considerations and the passage of time, and then turning that into a hard number estimate which gets aggregated. Nothing incalculable or mysterious there.

Replies from: TAG, TAG, TAG
comment by TAG · 2023-04-04T17:25:00.901Z · LW(p) · GW(p)

It's possible to get meaningful results by System 2 processes like explicit calculation... Knight's a priori probability... and also by System 1 processes. But System 1 needs feedback to be accurate... that makes the difference between educated guesswork and guesswork... and feedback isn't always available.

comment by TAG · 2023-04-04T17:09:28.765Z · LW(p) · GW(p)

So in fact, even under Knightian uncertainty, we can still make estimates!

Nothing can stop you making subjective estimates: plenty of things can stop them being objectively meaningful.

And that is exactly what people who put hard-number estimates on the likelihood of AI doom are doing

What's hard about their numbers? They are giving an exact figure, without an error bar, but that is a superficial appearance... they haven't actually performed a calculation, and they don't actually know anything to within +/-1%.

https://www.johndcook.com/blog/2018/10/26/excessive-precision/

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T17:24:12.004Z · LW(p) · GW(p)

That’s a reasonable complaint to me! “You can’t use numbers to make estimates because this uncertainty is Knightian” is not.

Replies from: TAG
comment by TAG · 2023-04-04T17:59:15.866Z · LW(p) · GW(p)

Is it unreasonable to require estimates to be meaningful?

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T18:28:30.787Z · LW(p) · GW(p)

Define "meaningful" in a way that's unambiguous and clear to a stranger like me, and I'll be happy to give you my opinion/argument!

Replies from: TAG
comment by TAG · 2023-04-04T19:13:01.001Z · LW(p) · GW(p)

The numbers that go into the final estimate are themselves objective, and not pulled out of the air, or anything else beginning with "a".

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T19:36:22.117Z · LW(p) · GW(p)

I think there are ideas about "objectivity" and "meaningfulness" that I don't agree with implicit in your definition.

For example, let's say I'm a regional manager for Starbucks. I go and inspect all the stores, and then, based on my subjective assessment of how well-organized they seem to be, I give them all a number scoring them on "organization." Those estimates seem to me to be "meaningful," in the sense of being a shorthand way of representing qualitative observational information, and yet I would also not say they are "objective," in the sense that "anybody in their right mind would have come to the same conclusion."

These point estimates seem useful on their own, and if the scorer wanted to go further, they could add error bars. We could even add another scorer, normalize their scores, and then compare them and do all sorts of statistics.

On the other hand, I could have several scorers all rank the same Starbucks, then gather them in a room and have them tell me their subjective impressions. It's the same raw data, but now I'm getting the information in the form of a narrative instead of a number.

In all these cases, I claim that we are getting meaningful estimates out of the process, whether represented in the form of a number or in the form of a narrative, and that these estimates of "how organized the regional Starbucks are" are not "Knightianly uncertain" but are just normal estimates.

Replies from: TAG
comment by TAG · 2023-04-04T20:09:52.884Z · LW(p) · GW(p)

Semantically, you can have "meaningful" information that only means your own subjective impression, and "estimates" that estimate exactly the same thing, and so on.

That's not addressing the actual point. The point is not to exploit the vagueness of the English language. You wouldn't accept monopoly money as payment even though it says "money" in the name.

You are kind of implying that it's unfair of Knightians to reject subjective estimates because they have greater than zero value...but why shouldn't they be entitled to set the threshold somewhere above eta?

Here's a quick argument: there's eight billion people, they've all got opinions, and I have not got the time to listen to them all.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T20:20:10.679Z · LW(p) · GW(p)

I'm not sure what you mean.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T03:18:30.775Z · LW(p) · GW(p)

Knightian uncertainty is not a well-defined concept, which is part of my problem with it. If you give a hard number probability I at least know what you mean.

Replies from: TAG
comment by TAG · 2023-04-04T17:20:48.083Z · LW(p) · GW(p)

If someone gives you a number without telling you how they arrived at it, it doesn't mean anything.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T17:28:06.155Z · LW(p) · GW(p)

My beef here is when people supply you with a number and do tell you how they arrived at it, and the Knightian says “this is Knightian uncertainty we are dealing with here, so we have to just ignore your argument and estimate, say ‘we just don’t know,’ and leave it at that.” Sounds like a straw man, but it isn’t.

Replies from: TAG
comment by TAG · 2023-04-04T18:02:46.848Z · LW(p) · GW(p)

Surely that would depend on how they arrived at it? If it's based on objective data, that's not Knightian uncertainty, and if it's based on subjective guesses, then Knightians can reject that.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T18:27:09.845Z · LW(p) · GW(p)

If it's based on objective data, that's not Knightian uncertainty, and if it's based on subjective guesses, the Knightian can reject that.

What I can say is that there appears to be room to interpret all the examples and definitions of "Knightian uncertainty" in two ways:

  1. It's where we move from a well-defined probabilistic model (i.e. "what's the chance of obtaining a 7 as the sum of two fair die rolls") to having to make judgment calls about how to model the world in order to make forecasts (i.e. "forecasting").
  2. It's where we move from what I'm calling "forecasting" to having to make decisions without any information at all.

Knightian-1 is not exotic, and the examples of Knightian uncertainty I have encountered (things like forecasting the economic state of airline companies 30 years out, or the Ellsberg paradox) seem to be examples of this kind. Knightians can argue with these models, but they can't reject the activity of forecasting as a valid form of endeavor.

Knightian-2 is more exotic, but I have yet to find a clear real-world example of it. It's a theoretical case where it might be proper to reject the applicability of forecasting, but a Knightian would have to make a case that a particular situation is of this kind. I can't even find or invent a hypothetical situation that matches it, and I am unconvinced this is a meaningful real-world concept.

Replies from: TAG
comment by TAG · 2023-04-04T19:16:03.757Z · LW(p) · GW(p)

Do you count guesses and intuitions as data?

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T19:28:02.712Z · LW(p) · GW(p)

No. Guesses and intuitions are ways of interpreting or synthesizing data. Data is a way of measuring the world. However, there are subjective/intuitive types of qualitative data. If I am a regional manager for Starbucks, go into one of the shops I'm managing, and come away with the qualitative judgment that "it looks like a shitshow," there is observational data that that judgment is based on, even if I haven't written it down or quantified it.

Replies from: TAG
comment by TAG · 2023-04-04T20:26:27.490Z · LW(p) · GW(p)

Guesses and intuitions are ways of interpreting or synthesizing data

Not exclusively: they can be pretty random.

there is observational data that that judgment is based on, even if I haven’t written it down or quantified it.

What we were discussing was the opposite...a hard number based on nothing.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-04T20:49:38.645Z · LW(p) · GW(p)

A hard number based on literally nothing is not data and is not an interpretation. But that’s not an interesting or realistic case - it doesn’t even fit the idea of “ass numbers,” a person’s best intuitive guess. At least in that case, we can hope that there’s some unconscious aggregation of memory, models of how the world works, and so on coming together to inform the number. It’s a valid estimate, although not particularly trustworthy in most cases. It’s not fundamentally different from the much more legible predictions of people like superforecasters.

Replies from: TAG
comment by TAG · 2023-04-05T13:03:58.275Z · LW(p) · GW(p)

And having said all that, it is not unreasonable to set the bar somewhere higher.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-05T16:39:46.648Z · LW(p) · GW(p)

I’m encouraging crisp distinctions between having a high standard of evidence, an explicit demonstration of a specific limit to our ability to forecast, and an unsubstantiated declaration that an entire broad topic is entirely beyond forecasting.

In the case of AGI, this would mean distinguishing between:

“I’d need a stronger argument and evidence for predicting AGI doom to update my credence any further.”

“Even an AGI can’t predict more than n pinball bounces out into the future given atomic-resolution data from only one moment in time.”

“Nobody can predict what will happen with AGI, it’s a case of Knightian uncertainty and simply incalculable! There are just too many possibilities!”

The first two cases are fine, the third one I think is an invalid form of argument.

comment by MondSemmel · 2023-05-05T15:24:02.120Z · LW(p) · GW(p)

I understand that different groups and cultures have different communication norms. But after reading this essay, I'm more confused about how to communicate probabilities than before. I'm skeptical about the extent to which you can communicate effectively with another group by changing your communication style, when there are such fundamental differences in your thoughts on epistemology. I suspect that anything short of addressing or confronting those differences head-on won't work.