Posts

Openness Norms in AGI Development 2020-03-30T19:02:41.956Z

Comments

Comment by Sublation on Against strong bayesianism · 2020-04-30T14:33:51.769Z · LW · GW

Maybe the qualitative components of Bayes' theorem are, in some sense, pretty basic. But if I think about how I would teach the basic qualitative concepts encoded by Bayes' theorem (which we both agree are useful), I can't think of a better way than directly teaching Bayes' theorem itself. That is the sense in which I think Bayes' theorem offers a helpful precisification of these more qualitative concepts: it provides a useful pedagogical structure into which we can neatly fit such principles.
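To spell out the structure I have in mind:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Each ingredient carries one of the qualitative lessons: $P(H)$ says to take prior plausibility seriously, $P(E \mid H)$ says to ask how expected the evidence is if the hypothesis is true, and the division by $P(E)$ says that evidence moves you more the more surprising it is. Teaching the formula hands a student all three at once.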

You claim that the increased precision afforded by Bayesianism means that people end up ignoring the bits that don't apply to us, so Bayesianism doesn't really help us out much. I agree that, insofar as we use the formal Bayesian framework, we are ignoring certain bits. But I think that, by highlighting which bits do not apply to us, we gain a better understanding of why certain parts of our reasoning may be good or bad. For example, it forces us to confront why we think making predictions is good (as Bob points out, it allows us to avoid post-hoc rationalisation). This, I think, usefully steers our attention towards more pragmatic questions concerning the role that prediction plays in our epistemic lives, and away from more metaphysical questions about (for example) the real grounds for thinking prediction is an Epistemic Virtue.

So I think we might disagree on the empirical claim of how well we can teach such concepts without reliance on anything like Bayesianism. Perhaps we also have differing answers to the question: 'does engaging with the formal Bayesian framework usefully draw our attention towards parts of our epistemic lives that matter?' Does that sound right to you?

Comment by Sublation on Against strong bayesianism · 2020-04-30T12:05:17.802Z · LW · GW

Could you say a bit more about why you think we should quantify the accuracy of credences with a strictly proper scoring rule, without reference to optimality proofs? I was personally confused about what principled reasons we had for thinking strictly proper scoring rules are the only legitimate measures of accuracy, until I read Levinstein's paper offering a pragmatic vindication of such rules.
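(For concreteness, the standard example is the Brier score: for credence $p$ in a proposition with outcome $o \in \{0, 1\}$, your penalty is $(p - o)^2$. If your true credence is $q$, your expected penalty $q(1-p)^2 + (1-q)p^2$ has derivative $2(p - q)$ in $p$, so it is uniquely minimised by reporting $p = q$; that uniqueness is what 'strictly proper' means. My question is why this honesty-rewarding optimality property should be taken as the criterion of accuracy, rather than as a further result needing its own justification.)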

Comment by Sublation on Against strong bayesianism · 2020-04-30T12:02:08.397Z · LW · GW

I enjoyed this post. I think the dialogue in particular nicely highlights how underdetermined the phrase 'becoming more Bayesian' is, and that we need more research on what optimal reasoning in more computationally realistic environments would look like.

However, I think there are other (not explicitly stated) ways in which Bayesianism is helpful for actual human reasoners. I'll list two:

  • I think the ingredients you get from Bayes' theorem offer a helpful way of making more precise what updating should look like (see the toy example after this list). Almost everyone will agree that we should take into account new evidence, but I think explicitly bearing in mind 'okay, what's the prior?', and 'how likely is the evidence given the hypothesis?', offers a helpful framework which allows us to update on new evidence in a way that's more likely to make us calibrated.
  • Moreover, even thinking in terms of degrees of belief as subjective probabilities at all (and not just how to update them) is a pretty novel conceptual insight. I've spent plenty of time speaking to people with advanced degrees in philosophy, many of whom think by default in terms of disbelief/full belief, and don't have a conception of anything like the framework of subjective probabilities.
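
As a toy illustration of the first point (with numbers invented for concreteness): suppose my prior is $P(H) = 0.1$, the evidence is likely given the hypothesis, $P(E \mid H) = 0.8$, and unlikely otherwise, $P(E \mid \neg H) = 0.2$. Then $P(E) = 0.8 \cdot 0.1 + 0.2 \cdot 0.9 = 0.26$, so $P(H \mid E) = 0.08 / 0.26 \approx 0.31$. Explicitly running through those questions lands me on a moderate credence, where unaided intuition might have jumped straight to something near 0.8.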

Perhaps you agree with what I said above. But I think such points are worth stating explicitly, given that they're pretty unfamiliar to most people, and that they constitute ways in which the Bayesian framework has generated novel insights about good epistemic behaviour.

Comment by Sublation on [deleted post] 2020-04-12T13:11:21.731Z

This was my reconstruction of Caspar's argument, which may be wrong. But I took the argument to be that we should promote consequentialism in the world as we find it now, where Omega (fingers crossed!) isn't going to tell me claims of this sort, and people do not, in general, explicitly optimise for things we greatly disvalue. In this world, if people are more consequentialist, then there is a greater potential for positive-sum trades with other agents in the multiverse. Since agents in this world have some overlap with our values, we should encourage consequentialism: consequentialist agents we can causally interact with will get more of what they want, and so we get more of what we want.

Comment by Sublation on [deleted post] 2020-04-12T12:54:59.498Z

I agree with you that choosing the appropriate set of actions is a non-trivial task, and I've said nothing here about how Kantians would choose an appropriate class of actions.

I am unclear on the point of your gang examples. You point out that the ideal maxim changes depending on features of the world. The Kantian claim, as I understand it, says that we should implement a particular decision-theoretic strategy by focusing on maxims rather than acts. This is a distinctively normative claim. The fact that, as we gain more information, the maxims might become increasingly specific seems true, but unproblematic. Likewise, I think it's true that we can describe any agent's decisions in terms of a lookup table over all conceivable situations. However, this just seems to indicate that we are looking at the wrong level of resolution. It's also true that I can describe all agents' behaviour (in principle) in terms of fundamental physics. But this isn't to say that there are no useful higher-level descriptions of different agents.

When you say that actual human Kantians offload work to invisible black boxes, do you mean that Kantians, when choosing an appropriate set of actions to make into a maxim, are offloading that clustering of acts into a black box? If so, then I think I agree, and would also like a more formal account of what's going on in this case. However, I think a good first step towards such a formal account is looking at more qualitative instances of behaviour from Kantians, so we know what it is we're trying to capture more formally.

Comment by Sublation on Two Alternatives to Logical Counterfactuals · 2020-04-01T11:19:31.441Z · LW · GW

On my current understanding of this post, I think I have a criticism. But I'm not sure I properly understand the post, so tell me if my summary below is wrong. I take the post to be saying something like the following:

'Suppose, in fact, I take the action A. Instead of talking about logical counterfactuals, we should talk about policy-dependent source code. If we do this, then we can see that initial talk about logical counterfactuals encoded an error. The error is not understanding the following claim: when asking what would have happened if I had performed some action A* ≠ A, observing that I do A* is evidence that I had some different source code. Thus, in analysing that counterfactual statement, we do not need to refer to incoherent "impossible worlds".'

If my summary is right, I'm not sure how policy-dependent source code solves the global accounting problem. The agent, when asking what would have happened had it performed A*, still faces that problem: it must assume it was running some different source code B, and the choice of an appropriate B seems underdetermined. That is, there is no unique source code B that yields a determinate answer about what would have happened had the agent performed A*. I can see why thinking in terms of policy-dependent source code would be attractive if you were a nonrealist about specifically logical counterfactuals, and a realist about other kinds of counterfactuals. But that's not what I took you to be saying.
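
If it helps, here's a toy sketch of how I'm reading the proposal (my own construction, so the names and setup are invented and it may miss what you intend): treat 'source code' as a policy, and treat an observed action as evidence about which policy was actually running.

```python
# Toy sketch (my construction, not from the post): "source code" is a policy,
# and an observed action is evidence about which policy is actually running,
# rather than grounds for a logical counterfactual.

policies = {
    "code_1": lambda obs: "A",       # hypothetical source code that outputs A
    "code_2": lambda obs: "A_star",  # hypothetical source code that outputs A*
}

prior = {"code_1": 0.5, "code_2": 0.5}

def posterior_over_source_code(observed_action, obs=None):
    """P(source code | observed action), for deterministic policies."""
    likelihood = {name: float(policy(obs) == observed_action)
                  for name, policy in policies.items()}
    z = sum(prior[name] * likelihood[name] for name in policies)
    return {name: prior[name] * likelihood[name] / z for name in policies}

# Observing A* rules out code_1 entirely: "what if I had done A*?" becomes
# "what if I had been running code_2?", with no impossible worlds needed.
print(posterior_over_source_code("A_star"))  # {'code_1': 0.0, 'code_2': 1.0}
```

On this reading, my worry is that once the space of candidate policies is rich, nothing in the setup singles out which alternative code (here, code_2) is the right one to condition on.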

Comment by Sublation on Openness Norms in AGI Development · 2020-03-31T09:36:39.779Z · LW · GW

Thanks, that's helpful. Edited.