Against strong bayesianism
post by Richard_Ngo (ricraz) · 2020-04-30T10:48:07.678Z · LW · GW · 65 comments
Note that this post has been edited in response to feedback and comments. In particular, I've added the word "strong" into the title, and an explanation of it at the beginning of the post, to be clearer what position I'm critiquing. I've also edited the discussion of Blockhead.
In this post I want to lay out some intuitions about why bayesianism is not very useful as a conceptual framework for thinking about either AGI or human reasoning. This is not a critique of bayesian statistical methods; it’s instead aimed at the philosophical position that bayesianism defines an ideal of rationality which should inform our perspectives on less capable agents, also known as "strong bayesianism". As described here:
The Bayesian machinery is frequently used in statistics and machine learning, and some people in these fields believe it is very frequently the right tool for the job. I’ll call this position “weak Bayesianism.” There is a more extreme and more philosophical position, which I’ll call “strong Bayesianism,” that says that the Bayesian machinery is the single correct way to do not only statistics, but science and inductive inference in general – that it’s the “aspirin in willow bark” that makes science, and perhaps all speculative thought, work insofar as it does work.
Or another way of phrasing the position, from Eliezer [LW · GW]:
You may not be able to compute the optimal [Bayesian] answer. But whatever approximation you use, both its failures and successes will be explainable in terms of Bayesian probability theory.
First, let’s talk about Blockhead: Ned Block’s hypothetical AI that consists solely of a gigantic lookup table. Consider a version of Blockhead that comes pre-loaded with the optimal actions (according to a given utility function) for any sequence of inputs which takes less than a million years to observe. So for the next million years, Blockhead will act just like an ideal superintelligent agent. Suppose I argued that we should therefore study Blockhead in order to understand advanced AI better. Why is this clearly a bad idea? Well, one problem is that Blockhead is absurdly unrealistic; you could never get anywhere near implementing it in real life. More importantly, even though Blockhead gets the right answer on all the inputs we give it, it’s not doing anything remotely like thinking or reasoning.
The general lesson here is that we should watch out for when a purported "idealised version" of some process is actually a different type of thing to the process itself. This is particularly true when the idealisation is unimaginably complex, because it might be hiding things in the parts which we can’t imagine. So let's think about what an ideal bayesian reasoner like a Solomonoff inductor actually does. To solve the grain of truth problem, the set of hypotheses it represents needs to include every possible way that the universe could be. We don't yet have any high-level language which can describe all these possibilities, so the only way to do so is by listing all possible Turing machines. Then, in order to update the probabilities in response to new evidence, it needs to know how each of those hypothesised universes evolves up to the point where the new evidence is acquired.
In other words, an ideal bayesian is not thinking in any reasonable sense of the word - instead, it’s simulating every logically possible universe. By default, we should not expect to learn much about thinking based on analysing a different type of operation that just happens to look the same in the infinite limit. Similarly, the version of Blockhead I described above is basically an optimal tabular policy in reinforcement learning. In reinforcement learning, we’re interested in learning policies which process information about their surroundings - but the optimal tabular policy for any non-trivial environment is too large to ever be learned, and when run does not actually do any information-processing! Yet it's particularly effective as a red herring because we can do proofs about it, and because it can be calculated in some tiny environments.
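To make the contrast concrete, here is a minimal sketch in Python of the two objects being compared. The lookup-table entries and the hypothesis class are invented toy stand-ins (a real Solomonoff inductor ranges over all Turing machines and is uncomputable); the point is just that one system retrieves pre-computed answers while the other brute-forces every hypothesis it can represent, and neither looks anything like resource-bounded thinking.

```python
import itertools

# --- Blockhead: a lookup table from observation history to "optimal" action ---
# (toy entries; in the thought experiment the table covers every possible history)
BLOCKHEAD_TABLE = {
    (): "explore",
    (0,): "explore",
    (1,): "exploit",
    (0, 1): "exploit",
}

def blockhead_act(history):
    """No information-processing at all: just retrieve the pre-computed answer."""
    return BLOCKHEAD_TABLE[tuple(history)]

# --- A crude finite stand-in for a Solomonoff-style inductor ---
# "Programs" are just periodic bit patterns up to length 4, weighted by
# 2^-length, so that simpler patterns get more prior mass.
def hypotheses(max_len=4):
    for length in range(1, max_len + 1):
        for pattern in itertools.product([0, 1], repeat=length):
            yield pattern, 2.0 ** -length

def predict_next(observed):
    """P(next bit = 1), by brute force: keep every hypothesis consistent with
    the data so far and renormalise its prior weight."""
    weight_one = weight_total = 0.0
    for pattern, prior in hypotheses():
        generated = [pattern[i % len(pattern)] for i in range(len(observed) + 1)]
        if generated[:-1] == list(observed):          # consistent with the evidence
            weight_total += prior
            if generated[-1] == 1:
                weight_one += prior
    return weight_one / weight_total

print(blockhead_act((0, 1)))       # 'exploit' -- pure retrieval
print(predict_next([1, 0, 1, 0]))  # 1.0 -- every consistent toy hypothesis predicts 1
```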
You might argue that bayesianism is conceptually useful, and thereby helps real humans reason better. But I think that concepts in bayesianism are primarily useful because they have suggestive names, which make it hard to realise how much work our intuitions are doing to translate from ideal bayesianism to our actual lives. For more on what I mean by this, consider the following (fictional) dialogue:
Alice the (literal-minded) agnostic: I’ve heard about this bayesianism thing, and it makes sense that I should do statistics using bayesian tools, but is there any more to it?
Bob the bayesian: Well, obviously you can’t be exactly bayesian with finite compute. But the intuition that you should try to be more like an ideal bayesian is a useful one which will help you have better beliefs about the world. In fact, most of what we consider to be “good reasoning” is some sort of approximation to bayesianism.
A: So let me try to think more like an ideal bayesian for a while, then. Well, the first thing is - you’re telling me that a lot of the things I’ve already observed to be good reasoning are actually approximations to bayesianism, which means I should take bayesianism more seriously. But ideal bayesians don’t update on old evidence. So if I’m trying to be more like an ideal bayesian, I shouldn’t change my mind about how useful bayesianism is based on those past observations.
B: No, that’s silly. Of course you should. Ignoring old evidence only makes sense when you’ve already fully integrated all its consequences into your understanding of the world.
A: Oh, I definitely haven’t done that. But speaking of all the consequences - what if I’m in a simulation? Or an evil demon is deceiving me? Should I think about as many such skeptical hypotheses as I can, to be more like an ideal bayesian who considers every hypothesis?
B: Well, technically ideal bayesians consider every hypothesis, but only because they have infinite compute! In practice you shouldn’t bother with many far-fetched hypotheses, because that’s a waste of your limited time.*
A: But what if I have some evidence towards that hypothesis? For example, I just randomly thought of the hypothesis that the universe has exactly a googolplex atoms in it. But there's some chance that this thought was planted in my mind by a higher power to allow me to figure out the truth! I should update on that, right?
B: Look, in practice that type of evidence is not worth keeping track of. You need to use common sense to figure out when to actually make the effort of updating.
A: Hmm, alright. But when it comes to the hypotheses I do consider, they should each be an explicit description of the entire universe, right, like an ideal bayesian’s hypotheses?
B: No, that’s way too hard for a human to do.
A: Okay, so I’ll use incomplete hypotheses, and then assign probabilities to each of them. I guess I should calculate as many significant digits of my credences as possible, then, to get them closer to the perfectly precise real-valued credences that an ideal bayesian has?
B: Don’t bother. Imprecise credences are good enough except when you’re solving mathematically precise problems.
A: Speaking of mathematical precision, I know that my credences should never be 0 or 1. But when an ideal bayesian conditions on evidence they’ve received, they’re implicitly being certain about what that evidence is. So should I also be sure that I’ve received the evidence I think I have?
B: No-
A: Then since I’m skipping all these compute-intensive steps, I guess getting closer to an ideal bayesian means I also shouldn’t bother to test my hypotheses by making predictions about future events, right? Because an ideal bayesian gets no benefit from doing so - they can just make updates after they see the evidence.
B: Well, it’s different, because you’re biased. That’s why science works, because making predictions protects you from post-hoc rationalisation.
A: Fine then. So what does it actually mean to be more like an ideal bayesian?
B: Well, you should constantly be updating on new evidence. And it seems like thinking of degrees of belief as probabilities, and starting from base rates, are both helpful. And then sometimes people conditionalise wrong on simple tasks, so you need to remind them how to do so.
A: But these aren’t just bayesian ideas - frequentists are all about base rates! Same with “when the evidence changes, I change my mind” - that one’s obvious. Also, when people try to explicitly calculate probabilities, sometimes they’re way off.** What’s happening there?
B: Well, in complex real-world scenarios, you can’t trust your explicit reasoning. You have to fall back on intuitions like “Even though my inside view feels very solid, and I think my calculations account for all the relevant variables, there’s still a reasonable chance that all my models are wrong.”
A: So why do people advocate for the importance of bayesianism for thinking about complex issues if it only works in examples where all the variables are well-defined and have very simple relationships?
B: I think bayesianism has definitely made a substantial contribution to philosophy. It tells us what it even means to assign a probability to an event, and cuts through a lot of metaphysical bullshit.
Back to the authorial voice. Like Alice, I'm not familiar with any principled or coherent characterisation of what trying to apply bayesianism actually means. It may seem that Alice’s suggestions are deliberately obtuse, but I claim these are the sorts of ideas you’d consider if you seriously tried to consistently “become more bayesian”, rather than just using bayesianism to justify types of reasoning you endorse for other reasons.
I agree with Bob that the bayesian perspective is useful for thinking about the type signature of calculating a subjective probability: it’s a function from your prior beliefs and all your evidence to numerical credences, whose quality should be evaluated using a proper scoring rule. But for this insight, just like Bob’s insights about using base rates and updating frequently, we don’t need to make any reference to optimality proofs, or to brute-force search over hypotheses as the idealised limit of intelligence. In fact, doing so often provides an illusion of objectivity which is ultimately harmful. I do agree that most things people identify as tenets of bayesianism [LW · GW] are useful for thinking about knowledge; but I claim that they would be just as useful, and better-justified, if we forced each one to stand or fall on its own.
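As a minimal sketch of the evaluation half of that type signature, assuming only the standard logarithmic scoring rule (the events and credences below are invented for illustration):

```python
import math

def log_score(credence, outcome):
    """Proper scoring rule: the log-probability assigned to what actually happened.
    Closer to zero is better, and it is maximised in expectation by reporting
    your true credence."""
    return math.log(credence if outcome else 1.0 - credence)

# Two forecasters assign credences to the same three events; the first is
# roughly calibrated, the second always near-certain. (Invented numbers.)
events        = [True, False, False]
calibrated    = [0.7,  0.2,   0.4]
overconfident = [0.99, 0.01,  0.95]

for name, credences in [("calibrated", calibrated), ("overconfident", overconfident)]:
    total = sum(log_score(c, o) for c, o in zip(credences, events))
    print(name, round(total, 3))   # the overconfident forecaster is heavily
                                   # penalised for the near-certain call that missed
```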
* Abram Demski has posted about [LW · GW] moving past bayesianism by accounting for logical uncertainty to a greater extent, but I think that arguments similar to the ones I’ve made above are also applicable to logical inductors (although I’m less confident about this).
** You can probably fill in your own favourite example of this. The one I was thinking about was a post where someone derived that the probability of extinction from AI was less than 1 in 10^200; but I couldn’t find it.
Comments sorted by top scores.
comment by Kaj_Sotala · 2020-05-01T17:17:18.848Z · LW(p) · GW(p)
You might argue that bayesianism is conceptually useful, and thereby helps real humans reason better. But I think that concepts in bayesianism are primarily useful because they have suggestive names, which make it hard to realise how much work our intuitions are doing to translate from ideal bayesianism to our actual lives.
This reminds me of an old critique of LW Bayesianism by David Chapman, and the conclusion that we reached in the comment section of it:
The valuable part of LW, for many people, is a collection of simple, practical insights into reasoning, rather than the complex technical framework. [...] The small practical insights [...] are all excellent. [...] I’d suggest that the Bayesian framework is not necessary to understand any of them, and perhaps not helpful (except maybe for “Update Yourself Incrementally”). Maybe this depends on one’s cognitive style. For some people, understanding that all those insights loosely relate to a mathematical framework would be satisfying and helpful; for others, the framework would be difficult to understand and an unnecessary distraction.
↑ comment by Richard_Ngo (ricraz) · 2020-05-03T20:31:47.274Z · LW(p) · GW(p)
Yes, I saw Chapman's critiques after someone linked one in the comments below, and broadly agree with them.
I also broadly agree with the conclusion that you quote; that seems fairly similar to what I was trying to get at in the second half of the post. But in the first half of the post, I was also trying to gesture at a mistake made not by people who want simple, practical insights, but rather people who do research in AI safety, learning human preferences, and so on, using mathematical models of near-ideal reasoning. However, it looks like making this critique thoroughly would require much more effort than I have time for.
↑ comment by ChristianKl · 2020-05-03T13:46:01.382Z · LW(p) · GW(p)
Chapman's critique was stronger. Chapman's argument doesn't depend on computational ability being finite.
↑ comment by Richard_Ngo (ricraz) · 2020-05-03T20:40:54.440Z · LW(p) · GW(p)
I think some parts of it do - e.g. in this post. But yes, I do really like Chapman's critique and wish I'd remembered about it before writing this so that I could reference it and build on it.
Especially: Understanding informal reasoning is probably more important than understanding technical methods. I very much agree with this.
↑ comment by ChristianKl · 2020-05-04T09:16:26.967Z · LW(p) · GW(p)
If Bayesianism worked for an agent with arbitrarily much cognitive power, the eternalism that Chapman criticizes would still be true. Christian belief in a God that escapes full human understanding is still eternalism.
Probability theory does not extend logic is the post where Chapman makes that argument in more depth.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-04-30T16:33:42.772Z · LW(p) · GW(p)
I object to this point and would be interested to see a defense of it:
Block makes a number of arguments about the nature of comprehension and intelligence based on Blockhead - but many (including Daniel Dennett, and myself) think that these arguments are deeply flawed, and the example of Blockhead is not useful for gaining either conceptual insight or practical inspiration. Why not? Well, it’s absurdly unrealistic; you could never get anywhere near implementing it in real life.
I'll fire back with a quote from David Lewis (paraphrased, I can't find the original) "This possibility is so strange and outlandish that some people refuse to learn anything from it."
If you write out the arguments, they don't depend on Blockhead actually happening in the future, or even actually happening in any world that shares our laws of physics. As far as I can tell. So whether or not Blockhead is realistic is irrelevant.
Edge cases make bad law, but they make great mathematics. Philosophy is more like math than law. QED. (Actually, some parts of philosophy really are more like law than math. I don't think this part is, though.)
Later, you say:
More importantly, even though Blockhead gets the right answer on all the inputs we give it, it’s not doing anything remotely like thinking or reasoning.
Wasn't that exactly the point Block was trying to make with the Blockhead thought experiment? From the paper:
...two systems could have actual and potential behavior typical of familiar intelligent beings, that the two systems could be exactly alike in their actual and potential behavior, and in their behavioral dispositions and capacities and counterfactual behavioral properties (i.e., what behaviors, behavioral dispositions, and behavioral capacities they would have exhibited had their stimuli differed)--the two systems could be alike in all these ways, yet there could be a difference in the information processing that mediates their stimuli and responses that determines that one is not at all intelligent while the other is fully intelligent.
Maybe he later went on to derive other conclusions, and it is those that you object to? I haven't followed the literature as closely as I'd like.
↑ comment by Richard_Ngo (ricraz) · 2020-04-30T19:33:42.173Z · LW(p) · GW(p)
Yeah, actually, I think your counterargument is correct. I basically had a cached thought that Block was trying to do with Blockhead a similar thing to what Searle was trying to do with the Chinese Room. Should have checked it more carefully.
I've now edited to remove my critique of Block himself, while still keeping the argument that Blockhead is uninformative about AI for (some of) the same reasons that bayesianism is.
comment by SarahNibs (GuySrinivasan) · 2020-04-30T19:53:23.252Z · LW(p) · GW(p)
Towards whatever-you-call-the-thing-I-got-from-reading-LW ism:
Thinking about all information systems as fundamentally performing the computation "Result <== Prior x Evidence" has been responsible for 5 of the 7 biggest successes in my career thus far. The other 2 had nothing to do with math/information/probability. All of the 5 were me noticing, where many better educated "more qualified" individuals did not, that some part of the actual information system's implementation was broken w.r.t. "Result <== Prior x Evidence" and figuring out how to phrase the brokenness in some other way ('cause inferential distance), resulting in institutional pressure to fix it.
↑ comment by ErickBall · 2020-05-03T02:40:09.620Z · LW(p) · GW(p)
Are there any of them you could explain? It would be interesting to hear how that caches out in real life.
↑ comment by SarahNibs (GuySrinivasan) · 2020-05-03T03:12:34.725Z · LW(p) · GW(p)
A piece of a certain large corporation's spelling/grammar checker was at its heart Result <== Prior x Evidence. Due to legacy code, decaying institutional knowledge, etc., no one knew this. The code/math was strewn about many files. Folks had tweaked the code over the years, allowed parameters to vary, fit those parameters to data.
I read the code, realized that fundamentally it had to be “about” determining a prior, determining evidence, and computing a posterior, reconstructed the actual math being performed, and discovered that two exponents were different from each other, "best fit to the data", and I couldn't think of any good reason they should be different. Brain threw up all sorts of warning bells.
I examined how we trained on the data to determine the values we’d use for these exponents. Turns out, the process was completely unjustifiable, and only seemed to give better results because our test set was subtly part of the training set. Now that's something everyone understands immediately; you don't train on your test set. So we fixed our evaluation process, stopped allowing those particular parameters to float, and improved overall performance quite a bit.
Note, math. Because information and Bayes and correlation and such is unfortunately not simple, it's entirely possible that some type of data is better served by e^(a*ln(P(v|w))-b*ln(P(v|~w))) where a!=b!=1. I dunno. But if you see someone obviously only introducing a and b and then fitting them to data because :shrug: why not, that's when your red flags go up and you realize someone's put this thing together without paying attention. And in this case after fixing the evaluation we did end up leaving a==b != 1, which is just Naive Bayes, basically. a!=b was the really disconcerting bit.
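For readers who want to see the shape of the parameterisation being described, here is a minimal sketch; the probabilities and exponent values are invented, and this is of course not the corporation's actual code:

```python
import math

def weighted_score(p_v_given_w, p_v_given_not_w, a=1.0, b=1.0):
    """Score for a candidate correction w given observed evidence v.
    With a == b == 1 this is the plain likelihood ratio P(v|w) / P(v|~w);
    with a == b != 1 it is a monotone rescaling of that ratio (ranking unchanged);
    a != b is the 'fit to the data' tweak that set off the red flags."""
    return math.exp(a * math.log(p_v_given_w) - b * math.log(p_v_given_not_w))

evidence = dict(p_v_given_w=0.02, p_v_given_not_w=0.005)

print(weighted_score(**evidence))                # 4.0 -- plain likelihood ratio
print(weighted_score(**evidence, a=0.8, b=0.8))  # ~3.03 -- same ranking, rescaled
print(weighted_score(**evidence, a=0.8, b=1.3))  # ~42.9 -- no longer a function of
                                                 # the ratio alone
```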
comment by jonathanstray · 2020-04-30T20:47:28.308Z · LW(p) · GW(p)
While Bayesian statistics are obviously a useful method, I am dissatisfied with the way "Bayesianism" has become a stand-in for rationality in certain communities. There are well-developed, deep objections to this. Some of my favorite references on this topic:
- Probability Theory Does Not Extend Logic by David Chapman. Part of what is missing from simulating every logically possible universe is indeed reasoning, in the sense that probabilistic inference nicely extends propositional logic but cannot solve problems in first order logic. This is why practical planning algorithms also use tools like tree search.
- Philosophy and the Practice of Bayesian Statistics by Andrew Gelman (who wrote the book on Bayesian methods) and Cosma Shalizi. Abstract:
A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory
- Even the standard construction of probability is potentially suspect. Why is the Dutch Book argument correct? There are serious potential problems with this, as John Norton has discussed in The Material Theory of Induction, which also covers the shortcomings of Bayesianism as a foundational inference method (chapter 10).
- Bayesianism is ultimately a mathematical formalism that is actually useful only to the extent that two quantifications succeed in practice: the quantification of the state of the world into symbolic form, and the quantification of potential interventions and their results (necessary if we wish to connect reasoning to causation). There are many choices which need to be made both at the conceptual and the practical level when quantifying, as I have tried to discuss here in the context of data journalism. Quantification might be the least studied branch of statistics.
Finally, I'd note that explicitly Bayesian calculation is rarely used as the top level inference framework in practical decision-making, even when the stakes are high. I worked for a decade as a data journalist, and you'd think that if Bayesianism is useful anywhere then data journalists would use it to infer the truth of situations. But it is very rarely useful in practice. Nor is Bayesianism the primary method used in e.g. forecasting and policy evaluation. I think it's quite instructive to ask why not, and I wish there was more serious work on this topic.
In short: Bayesianism is certainly foundational, but it is not a suitable basis for a general theory of rational action. It fails on both theoretical and practical levels.
↑ comment by Aaro Salosensaari (aa-m-sa) · 2020-05-04T23:29:09.257Z · LW(p) · GW(p)
I am happy that you mention Gelman's book (I am studying it right now). I think lots of "naive strong bayesianists" would benefit from a thoughtful study of the BDA book (there are lots of worked-out demos and exercises available for it) and maybe some practical application of Bayesian modelling to real-world statistical problems. The "Bayesian way of life" of "updating my priors" always sounds a bit too easy in contrast to doing genuine statistical inference.
For example, here are a couple of puzzles I am still unsure how to answer properly and with full confidence. Why would one do stratified random sampling in an epidemiological study instead of naively "collecting every data point that you see and then doing a Bayesian update"? How do multiple-comparisons corrections for classical frequentist p-values map onto the Bayesian statistical framework? Does it matter for LWian Bayesianism whether you do your practical statistical analyses with frequentist or Bayesian tools (especially since many frequentist methods can be seen as clever approximations to a full Bayesian model; see e.g. the discussion of Kneser-Ney smoothing as ad hoc Pitman-Yor process inference here: https://cs.stanford.edu/~jsteinhardt/stats-essay.pdf ; a similar relationship exists between k-means and the EM algorithm for a Gaussian mixture model)? And if there is no difference, is philosophical Bayesianism then actually that important -- or important at all -- for rationality?
comment by johnswentworth · 2020-04-30T18:30:31.761Z · LW(p) · GW(p)
I do agree that most things people identify as tenets of bayesianism [LW · GW] are useful for thinking about knowledge; but I claim that they would be just as useful, and better-justified, if we forced each one to stand or fall on its own.
So we have this machine with a track record of cranking out really useful tools for thinking and reasoning. But it would be more useful, and better-justified, if we considered each of these tools on its own merits, rather than thinking that it's likely to be useful just because it came from the machine.
... That does seem like a very self-consistent claim for someone arguing against Bayesianism.
↑ comment by Richard_Ngo (ricraz) · 2020-05-01T10:45:55.503Z · LW(p) · GW(p)
The very issue in question here is what this set of tools tells us about the track record of the machine. It could be uninformative because there are lots of other things that come from the machine that we are ignoring. Or it could be uninformative because they didn't actually come from the machine, and the link between them was constructed post-hoc.
comment by Donald Hobson (donald-hobson) · 2020-05-03T22:06:58.945Z · LW(p) · GW(p)
The Carnot engine is an abstract description of a maximally efficient heat engine. You can't make your car engine more efficient by writing thermodynamic equations on the engine casing.
The Solomonoff Inductor is an abstract description of an optimal reasoner. Memorizing the equations doesn't automagically make your reasoning better. The human brain is a kludge of non-modifiable special-purpose hardware. There is no clear line between entering data and changing software. Humans are capable of taking in a "rule of thumb" and making somewhat better decisions based on it. Humans can take in Occam's razor, the advice to "prefer simple and mathematical hypotheses", and intermittently act on it, sometimes down-weighting a complex hypothesis when a simpler one comes to mind. Humans can sometimes produce these sorts of rules of thumb from an understanding of Solomonoff Induction.
It's like how reading a book about optics doesn't automatically make your eyes better, but if you know the optics, you can sometimes work out how your vision is distorted and say "that line looks bent, but it's actually straight".
If you want to try making workarounds and patches for the bug-riddled mess of human cognition, knowing Solomonoff Induction is somewhat useful as a target and source of inspiration.
If you found an infinitely fast computer, Solomonoff Induction would be incredibly effective, more effective than any other algorithm.
I would expect any good AI design to tend to Solomonoff Induction (or something like it?) in the limit of infinite compute (and the assumption that acausal effects don't exist?). I would expect a good AI designer to know about Solomonoff Induction, in much the same way I would expect a good car engine designer to know about the Carnot engine.
comment by Richard_Kennaway · 2020-04-30T15:46:13.112Z · LW(p) · GW(p)
There are two things that should be distinguished:
- Given one's current beliefs, how to update them given new evidence.
- How to get the machine started: how to get an initial set of beliefs on the basis of no evidence
The answer to the first is very simple: Bayes' rule. Even if you do not have numbers to plug in, there are nevertheless some principles following from Bayes' rule that can still be applied. For example, avoiding the conjunction fallacy, bottom-line reasoning, suggestive variable names, failure to entangle with reality, taking P and not-P both as evidence for a favoured idea, and so on. These are all written of in the Sequences.
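Two of those principles can be checked directly from the probability axioms. Here is a minimal sketch with an arbitrary toy joint distribution (the numbers are invented; the two assertions hold for any consistent assignment):

```python
# Toy joint distribution over a hypothesis H and a piece of evidence E.
p = {
    (True, True): 0.30, (True, False): 0.10,    # P(H, E), P(H, not E)
    (False, True): 0.15, (False, False): 0.45,  # P(not H, E), P(not H, not E)
}

p_h = p[(True, True)] + p[(True, False)]         # P(H)
p_e = p[(True, True)] + p[(False, True)]         # P(E)
p_h_and_e = p[(True, True)]                      # P(H and E)
p_h_given_e = p_h_and_e / p_e                    # P(H | E)
p_h_given_not_e = p[(True, False)] / (1 - p_e)   # P(H | not E)

# Conjunction fallacy: a conjunction is never more probable than either conjunct.
assert p_h_and_e <= p_h

# Taking both E and not-E as evidence for H is impossible: P(H) is a weighted
# average of P(H|E) and P(H|not E), so it always lies between them.
assert min(p_h_given_e, p_h_given_not_e) <= p_h <= max(p_h_given_e, p_h_given_not_e)

print(round(p_h, 3), round(p_h_given_e, 3), round(p_h_given_not_e, 3))  # 0.4 0.667 0.182
```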
Answering the second generally leads to speculation about Solomonoff priors, the maxent principle, improper priors, and so on, and I am not sure has contributed much. But the same is true of every other attempt to find an ultimate foundation for knowledge. A rock cannot discover Bayes' rule, and what is the tabula rasa of an agent but a rock?
Another direction which has so far led only to more philosophical floundering is trying to apply probabilistic reasoning at the meta-level: what is the probability that our methods of logical reasoning are sound? Whatever conclusions we come to about that, what is the probability that those conclusions are true? And so on. Nothing good has yet come of this. It is a standard observation that despite all the work that has been done on non-standard logic, at the meta-level everyone reverts to good old standard logic. The only innovation that is used at the meta-level, because it was there all along, is constructiveness. When a mathematician proves something, he actually exhibits a proof, not a non-constructive argument that there exists a proof. But Euclid did that already.
↑ comment by Slider · 2020-04-30T18:15:46.291Z · LW(p) · GW(p)
It's not trivially clear to me at all how Bayes' rule leads to such things as avoiding bottom-line reasoning. It seems plausible to me that for a lot of people there is enough handwaving that the actual words put forward don't do the majority of the lifting. That is, a person talking about epistemology refers to Bayes' rule and explains why habits like avoiding bottom-line reasoning are good, but they don't materially need Bayes' rule for that. There might be belief in entailment rather than actual reproducible / objectively verifiable entailment. If I wave a giant "2+2=4" flag while robbing a bunch of banks, in one sense that fact has caused theft and in another it has not. Nor is it clear that anyone who robs banks must believe "2+2=4".
↑ comment by Richard_Kennaway · 2020-04-30T21:19:46.424Z · LW(p) · GW(p)
You can avoid these things while knowing nothing of the Way of Bayes, but the Way of Bayes shows the underlying structure of reality that unifies the reasons for all of them being faults of reasoning.
↑ comment by Slider · 2020-05-02T12:01:43.976Z · LW(p) · GW(p)
I am unsure whether the dogmatic tone is put forward in sarcasm or just meant straight.
One could argue that God is a convenient and unified way of thinking about what is moral. And it is quite common for prisoners to find great utility in faith. But beliefs with the structure of "godlessness is dangerous as then there would be no right and wrong" cloud thinking a lot and tie the beliefs to a specific ontology. Are beliefs like "it's only good in so far that it aligns with God" and "it's reasonable only so far as it aligns with bayes" meaningfully structurally different?
What if there is a deeper, even more unified way of thinking about why certain cognitive moves are good? What principles do we use to verify that the way of bayes checks out?
↑ comment by Richard_Kennaway · 2020-05-02T15:09:27.197Z · LW(p) · GW(p)
I am being quite straight, although consciously adopting some of Eliezer's rhetorical style.
Are beliefs like "it's only good in so far that it aligns with God" and "it's reasonable only so far as it aligns with bayes" meaningfully structurally different?
They are meaningfully different. The Way of Bayes works; the Way of God does not. "Works" means "capable of leading one's beliefs to align more closely with reality." For more about this, it's all there in the Sequences.
↑ comment by Slider · 2020-05-02T15:47:49.192Z · LW(p) · GW(p)
The analog I was shooting for was "thing is X only in so far that it approximates Y", where in one case X=good, Y=God and in the other case X=reasonable and Y=Bayes. The case of X=reasonable and Y=God doesn't impact anything (although I guess the stance that there is a divine gatekeeper to truth isn't a totally alien one, but I was not referencing it here).
Part of the reason as far as I understood for the rhetorical style is to make the silly things jump out as silly to not vest too serious weight in it.
There is the additional issue that rationalists are not particularly winning, so the case of "one is broken, one is legit" can be questioned. Because of the heavy redefinition or questioning of definitions it can be hard to verify that epikunfukas succeed on a metric other than the one defined by their teacher. This despite one of the central points being the reliance on external measures for success. If you fervently follow a teacher who teaches that you should not follow your teacher blindly, you are still fervently following. That you have a model that refers to itself as making two variables close to each other doesn't say whether it is a good model ("I am a true model" is not informative).
↑ comment by Richard_Kennaway · 2020-05-02T19:04:50.117Z · LW(p) · GW(p)
Part of the reason as far as I understood for the rhetorical style is to make the silly things jump out as silly to not vest too serious weight in it.
I can't speak for Eliezer, but my intention is to imply that this is actually as important as the linguistic devices say it is. There is no irony intended here, no buffer of plausible deniability against being thought to be serious.
I can't make any sense of your last paragraph, and the non-word "epikunfukas" is the least of it.
↑ comment by TAG · 2020-05-13T09:53:19.539Z · LW(p) · GW(p)
There are two things that should be distinguished:
Given one’s current beliefs, how to update them given new evidence.
How to get the machine started: how to get an initial set of beliefs on the basis of no evidence
There's a third thing: how do you realise that incremental updates are no longer working, and you need a revolutionary shift to another paradigm.
↑ comment by Slider · 2020-05-13T11:47:31.119Z · LW(p) · GW(p)
That depends a lot on how narrowly or widely we interpret things. It could make a lot of sense that "updating correctly" also includes not updating into an eventual dead end and updating in a way that can facilitate paradigm shifts.
It might be worth noting that for some, "updating" can refer to a very narrow process involving explicitly and consciously formulated numbers, but theoretical mullings over Bayes' rule can also include appreciation for the wiggle room, i.e. 99.999% vs 99.9% differing in how much emphasis is given to totally different paradigms.
↑ comment by TAG · 2020-05-15T13:31:57.408Z · LW(p) · GW(p)
That depends a lot on how narrowly or widely we interpret things
Indeed. Where Bayes is taken loosely as a series of maxims, you can add some advice about not flogging a dead horse. But if Bayes means a formal, mathematical method, there is nothing in it to tell you to stop incrementally updating the same framing of a problem, and nothing to help you come up with a bold new hypothesis.
↑ comment by Slider · 2020-05-15T14:20:42.137Z · LW(p) · GW(p)
Whether there is nothing when things are interpreted formally is a very hard claim to prove. If a framing of a problem is dissatisfactory, it can be incrementally updated away from too. If you have a problem and try to solve it with epicycles, that signals there are model deficiencies which do weaken the reliability of the whole paradigm.
Or stated another way: every hypothesis always has a positive probability; we never stop considering any hypothesis, so we are guaranteed not to "miss out" on any of them (0 doesn't occur as a probability and can never arise). Only in approximations do we have a cutoff such that sufficiently small probabilities are not given any thought.
There might be a problem in how to design the hypothesis space to be truly universally representative. In approximations we define hypotheses to be some pretty generic class of statements which has something parameter-like, and we try to learn the appropriate values of those parameters. But coming up with an even more generic class might allow for hypotheses with a better structure that fits better. In the non-approximation case we don't get "imagination starved", because we don't specify what pool the hypotheses are drawn from.
For a more concrete example: if you have a set of points and ask which line best fits, you are going to get a bad result if the data is in the shape of a parabola. You could catch parabolas if you asked "what polynomial best fits these points?". But trying to ask a question like "what is the best thought that explains these observations" is resistant to being made formal because "thought" is so nebulous. No amount of line fitting will suggest a parabola, but some amount of polynomial fitting will suggest parabolas over lines.
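A quick way to see that concrete example, using numpy's polynomial fitting (the data points are invented, noise-free values of y = x^2 to keep the point stark):

```python
import numpy as np

x = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = x ** 2   # data actually generated by a parabola

# "Which line best fits?" -- the hypothesis class simply can't represent the truth.
line = np.polyfit(x, y, deg=1)
line_err = np.max(np.abs(np.polyval(line, x) - y))

# "Which polynomial (up to degree 2) best fits?" -- the wider class catches it.
quad = np.polyfit(x, y, deg=2)
quad_err = np.max(np.abs(np.polyval(quad, x) - y))

print(line, line_err)   # slope ~0, intercept 4: the best line is y = 4, off by 5
print(quad, quad_err)   # coefficients ~[1, 0, 0]: recovers y = x^2 almost exactly
```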
↑ comment by TAG · 2020-05-15T17:33:56.448Z · LW(p) · GW(p)
Whether there is nothing when things are interpreted formally
I didn't say there is nothing when things are interpreted formally. I said the formalism of Bayesian probability does not include a formula for generating novel hypotheses, and that is easy to prove.
If a framing of a problem is dissatisfactory, it can be incrementally updated away from too.
Can it? That doesn't seem to be how things work in practice. There is a set of revolutions in science, and inasmuch as they are revolutions, they are not slow incremental changes.
Or stated in another way every hypothesis always has a positive probability, we never stop considering any hypothesis
We don't have every hypothesis pre-existing in our heads. If you were some sort of ideal reasoner with an infinite memory, you could do things that way, but you're not. Cognitive limitations may well explain the existence of revolutionary paradigm shifts.
But trying to ask a question like “what is the best thought that explains these observations” is resistant to being made formal because “thought” is so nebulous
That's what I was saying. You can't formalise hypothesis formation, yet it is necessary. Therefore, formal Bayes is not the one epistemology to rule them all, because all formalisations have that shortcoming.
comment by Sublation · 2020-04-30T12:02:08.397Z · LW(p) · GW(p)
I enjoyed this post. I think the dialogue in particular nicely highlights how underdetermined the phrase 'becoming more Bayesian' is, and that we need more research on what optimal reasoning in more computationally realistic environments would look like.
However, I think there are other (not explicitly stated) ways I think Bayesianism is helpful for actual human reasoners. I'll list two:
- I think the ingredients you get from Bayes' theorem offer a helpful way of making more precise what updating should look like. Almost everyone will agree that we should take into account new evidence, but I think explicitly bearing in mind 'okay, what's the prior?', and 'how likely is the evidence given the hypothesis?', offers a helpful framework which allows us to update on new evidence in a way that's more likely to make us calibrated.
- Moreover, even thinking in terms of degrees of belief as subjective probabilities at all (and not just how to update them) is a pretty novel conceptual insight. I've spent plenty of time speaking to people with advanced degrees in philosophy, many of whom think by default in terms of disbelief/full belief, and don't have a conception of anything like the framework of subjective probabilities.
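A minimal sketch of the decomposition in the first point above, using the standard textbook base-rate example (the numbers are the usual illustrative ones, not anything from the comment):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """'What's the prior?' and 'how likely is the evidence given the hypothesis?'
    are exactly the inputs Bayes' theorem asks for."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# A rare condition (1% base rate) and a test that's right 90% of the time:
# despite a positive result, the posterior stays well below 50%.
print(posterior(prior=0.01,
                p_evidence_given_h=0.9,       # true positive rate
                p_evidence_given_not_h=0.1))  # false positive rate -> ~0.083
```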
Perhaps you agree with what I said above. But I think such points are worth stating explicitly, given that I think they're pretty unfamiliar to most people, and constitute ways in which the Bayesian framework has generated novel insights about good epistemic behaviour.
↑ comment by Richard_Ngo (ricraz) · 2020-04-30T13:26:34.884Z · LW(p) · GW(p)
Thanks for the comment :) I agree with your second point, and alluded to it by having Bob mention the importance of considering degrees of belief as probabilities. Perhaps I should make this point more strongly, but at the same time, it's definitely not something that's unique to bayesianism, and you could advocate for it on its own merits. I guess the historical context is relevant: were the original people who advocated for thinking of degrees of belief in probabilistic terms bayesians? I don't know enough to answer that question.
I disagree with your first point. I think the basic framework - what did I use to believe, what have I learned, how have I changed my mind - is exactly that: basic. It doesn't need to be justified in terms of Bayes, it was around far before that. Insofar as Bayes makes it more precise, it also makes it more inapplicable, because people adopt the parts of "precise Bayesian updates" that they agree with, but not all the others which I mention in the dialogue.
↑ comment by Sublation · 2020-04-30T14:33:51.769Z · LW(p) · GW(p)
Maybe the qualitative components of Bayes' theorem are, in some sense, pretty basic. If I think about how I would teach the basic qualitative concepts encoded by Bayes' theorem (which we both agree are useful), I can't think of a better way than through directly teaching Bayes' theorem. That is the sense in which I think Bayes' theorem offers a helpful precisification of these more qualitative concepts: it imposes a useful pedagogical structure into which we can neatly fit such principles.
You claim that the increased precision afforded by Bayesianism means that people end up ignoring the bits that don't apply to us, so Bayesianism doesn't really help us out much. I agree that, insofar as we use the formal Bayesian framework, we are ignoring certain bits. But I think that, by highlighting which bits do not apply to us, we gain a better understanding of why certain parts of our reasoning may be good or bad. For example, it forces us to confront why we think making predictions is good (as Bob points out, it allows us to avoid post-hoc rationalisation). This, I think, usefully steers our attention towards more pragmatic questions concerning the role that prediction plays in our epistemic lives, and away from more metaphysical questions about (for example) the real grounds for thinking prediction is an Epistemic Virtue.
So I think we might disagree on the empirical claim of how well we can teach such concepts without reliance on anything like Bayesianism. Perhaps we also have differing answers to the question: 'does engaging with the formal Bayesian framework usefully draw our attention towards parts of our epistemic lives that matter?' Does that sound right to you?
comment by Slider · 2020-04-30T11:28:35.432Z · LW(p) · GW(p)
When you have a simple description doing a lot of work and then add more description, instead of piling on complexity it can be seen as making implicit claims more explicit. One is in effect "filling out the details".
Taking things to the extreme might pinpoint where the nebulous parts need further clarification. As laid out, a bayesian deals in probabilities even in the limits, which makes it kind of like employing the most useful atomic heuristics. The lookup table can be thought of as the go-to answer for "has optimal action" and the bayesian as "has integrated all information", with everything else set at a token level.
A more realistic thought process might have the property that there is information that is not integrated into the decision, but it also seems that being blind to too much central information would be grounds to judge a thought process as deficient. The principle that allows us to delineate some info as remote and some as central could point to a different maxim. It feels like the bayesian is more advanced than the lookup table, despite the lookup table being more solid and unhesitant. So it would be interesting why it would be okay, or move things forward, to neglect remote information while it still impacts probabilities.
comment by Richard_Kennaway · 2020-04-30T16:23:01.668Z · LW(p) · GW(p)
There's a joke that theists like to tell. An atheist challenges God, saying, whatever you can do I can do also! God says, "I created Man from the dust." So the atheist says, "Ok," and gathers up some dust to work with. God replies, "No, no, first you must create your own dust."
Here's a rationalist version.
A rationalist challenges God, saying, whatever you can do I can do also! God says, "I made the dust evolve into thinking beings! You can have some dust this time." The rationalist says, "Ok," and devises a machine to start from first principles and which, set running in the universe, will update itself until it develops into an AGI. "No, no," says God, "first you must have the dust find those first principles."
↑ comment by Slider · 2020-04-30T18:34:18.857Z · LW(p) · GW(p)
I think the analog is more that upon delivering the program, the limitation is that the program must be produced by a process separate from his evolutionary efforts, i.e. using his own brain is cheating as it piggybacks on previous work.
The original joke could carry on: the atheist takes some hydrogen and atomically transmutes some dust. But then God can say "No, no, you need to provide your own star". This pattern could continue, and the point could be that creating (assembly) is different from creating (producing). Alchemists could not make gold from lead via chemistry, not because it is impossible to start with lead and end up with gold, but because definitionally it would step outside of chemistry.
The information side would have the analogous problem: starting from a low-entropy state, how can you ever produce anything with detail? It's one thing to burn up a negentropy source to make some structure, but producing structure where there previously isn't anything is a different kind of problem.
comment by DanielFilan · 2020-05-02T02:48:56.908Z · LW(p) · GW(p)
I feel like a lot of Bob's responses are natural consequences of Eliezer's position that you describe as "strong bayesianism", except where he talks about what he actually recommends, and as such this post feels very uncompelling to me. Where they aren't, "strong bayesianism" is correct: it seems useful for someone to actually think about what the likelihood ratio of "a random thought popped into my head" is, and similarly about how likely skeptical hypotheses are.
Similarly,
In other words, an ideal bayesian is not thinking in any reasonable sense of the word - instead, it’s simulating every logically possible universe. By default, we should not expect to learn much about thinking based on analysing a different type of operation that just happens to look the same in the infinite limit.
seems like it just isn't an argument against
Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.
(and also I dispute the gatekeeping around the term 'thinking': when I simulate future worlds, that sure feels like thinking to me! but this is less important)
In general, I feel like I must be missing some aspect of your world-view that underlies this, because I'm seeing almost no connection between your arguments and the thesis you're putting forwards.
↑ comment by Richard_Ngo (ricraz) · 2020-10-28T00:50:50.966Z · LW(p) · GW(p)
Just wanted to note that your point that I didn't properly rebut "Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs." was a good one, and it has nagged at me for a while. In general in this post I think he's implying that Bayesianism is not only correct in the limit, but also relevant to the way we actually do thinking. But I agree that interpreting this particular quote in that way is a bit of a stretch, so I've replaced it with "You may not be able to compute the optimal [Bayesian] answer. But whatever approximation you use, both its failures and successes will be explainable in terms of Bayesian probability theory." which more directly draws the link between methods we might actually use, and the ideal bayesian case.
↑ comment by DanielFilan · 2020-05-02T02:56:00.461Z · LW(p) · GW(p)
Also (crossposted to shortform [LW(p) · GW(p)]):
I think the use of dialogues to illustrate a point of view is overdone on LessWrong. Almost always, the 'Simplicio' character fails to accurately represent the smart version of the viewpoint he stands in for, because the author doesn't try sufficiently hard to pass the ITT of the view they're arguing against. As a result, not only is the dialogue unconvincing, it runs the risk of misleading readers about the actual content of a worldview. I think this is true to a greater extent than posts that just state a point of view and argue against it, because the dialogue format naively appears to actually represent a named representative of a point of view, and structurally discourages disclaimers of the type "as I understand it, defenders of proposition P might state X, but of course I could be wrong".
↑ comment by Richard_Ngo (ricraz) · 2020-05-02T05:36:22.125Z · LW(p) · GW(p)
I'm a little confused by this one, because in your previous response you say that you think Bob accurately represents Eliezer's position, and now you seem to be complaining about the opposite?
↑ comment by DanielFilan · 2020-05-02T06:58:24.662Z · LW(p) · GW(p)
Actually, I think the synthesis is that many of the things that Bob is saying are implications of Eliezer's description and ways of getting close to Bayesian reasoning, but seem like they're almost presented as concessions. I could try to get into some responses chosen by you if that would be helpful.
↑ comment by DanielFilan · 2020-05-02T05:47:15.802Z · LW(p) · GW(p)
A lot of Bob's responses seem like natural consequences of Eliezer's claim, but some of them aren't.
comment by Kenny · 2020-04-30T18:48:22.123Z · LW(p) · GW(p)
Are you against Bayesianism or 'Bayesianism'?
I do agree that most things people identify as tenets of bayesianism [LW · GW] are useful for thinking about knowledge; but I claim that they would be just as useful, and better-justified, if we forced each one to stand or fall on its own.
This makes me think that you're (mostly) arguing against 'Bayesianism', i.e. effectively requesting that we 'taboo' that term and discuss its components ("tenets") separately.
One motivation for defending Bayesianism itself is that the relevant ideas ("tenets") are sufficiently entangled that they can or should be considered effectively inseparable.
I also have a sense that the particular means by which intelligent entities like ourselves can, incrementally, approach thinking like an 'idealized Bayesian intelligence' is very different than what you sketched in your dialog. I think a part of that is something like maintaining a 'network' of priors and performing (approximate) Bayesian updates on specific 'modules' in that network and, more infrequently, propagating updates through (some portion of) the network. Because of that, I didn't think this last part of the dialog was warranted:
A: So why do people advocate for the importance of bayesianism for thinking about complex issues if it only works in examples where all the variables are well-defined and have very simple relationships?
B: I think bayesianism has definitely made a substantial contribution to philosophy. It tells us what it even means to assign a probability to an event, and cuts through a lot of metaphysical bullshit.
In my own reasoning, and what I consider to be the best reasoning I've heard or read, about the COVID-19 pandemic, Bayesianism seems invaluable. And most of the value is in explicitly considering both evidence and the lack of evidence, how it should be interpreted based on (reasonably) explicit prior beliefs within some specific 'belief module', and what updates to other belief modules in the network are warranted. One could certainly do all of that without explicitly believing that Bayesianism is overall effective, but it also seems like a weird 'epistemological move' to make.
If you agree that most of the tenets of a big idea are useful (or true) in what important sense is it useful to say you're against the big idea? Certainly any individual tenet can be more or less useful or true, but in helping one stand or fall on its own, when are you sharpening the big idea versus tearing it down?
↑ comment by Richard_Ngo (ricraz) · 2020-04-30T19:47:46.627Z · LW(p) · GW(p)
This makes me think that you're (mostly) arguing against 'Bayesianism', i.e. effectively requesting that we 'taboo' that term and discuss its components ("tenets") separately.
This is not an unreasonable criticism, but it feels slightly off. I am not arguing against having a bunch of components which we put together into a philosophy with a label; e.g. liberalism is a bunch of different components which get lumped together, and that's fine. I am arguing that the current way that the tenets of bayesianism are currently combined is bad, because there's this assumption that they are a natural cluster of ideas that can be derived from the mathematics of Bayes' rule. It's specifically discarding this assumption that I think is helpful. Then we could still endorse most of the same ideas as before, but add more in which didn't have any link to bayes' rule, and stop privileging bayesianism as a tool for thinking about AI. (We'd also want a new name for this cluster, I guess; perhaps reasonism? Sounds ugly now, but we'd get used to it).
comment by johnswentworth · 2020-04-30T18:25:35.584Z · LW(p) · GW(p)
By default, we should not expect to learn much about [X] based on analysing a different type of operation that just happens to look the same in the infinite limit.
Yeah, that's a good point. We should probably stop using linear approximations; we can't really expect to learn much about a function's behavior by analyzing a different function which just happens to look the same in the infinitesimal limit. And big-O analysis will have to go as well; we should not expect to learn much about an algorithm's runtime by comparing it to a different operation that just happens to look the same in the limit of large inputs. And while we're talking computer science, we should definitely stop pretending that floating-point operations tell us anything about operations with real numbers, just because they happen to agree in the limit of infinite precision. Thermodynamics can go; just because we know what happens in the limit of infinitely many microscopic particles doesn't mean we know anything useful about finite systems. For that matter, forget Bayesianism, why bother even with frequentist statistics? It's not like we'd expect to learn anything useful about a bunch of data samples just from some idealized distribution which happens to look similar in the infinite limit.
↑ comment by Past Account (zachary-robertson) · 2020-04-30T18:50:57.488Z · LW(p) · GW(p)
[Deleted]
↑ comment by Vlad Mikulik (vlad_m) · 2021-01-12T22:19:00.601Z · LW(p) · GW(p)
You need much more than limiting behavior to say anything about whether or not the processes are ‘similar’ in a useful way before that.
Perhaps the synthesis here is that while looking at asymptotic behaviour of a simpler system can be supremely useful, we should be surprised that it works so well. To rely on this technique in a new domain we should, every time, demonstrate that it actually works in practice.
Also, it's interesting that many of these examples do have 'pathological cases' where the limit doesn't match practice. And this isn't necessarily restricted to toy domains or weird setups: for example, the most asymptotically efficient matrix multiplication algorithms are impractical (although in fairness that's the most compelling example on that page).
↑ comment by johnswentworth · 2020-04-30T19:35:49.288Z · LW(p) · GW(p)
Most of the arguments in the OP apply just as well to all those other limit use-cases as well.
↑ comment by Richard_Ngo (ricraz) · 2020-04-30T20:04:21.727Z · LW(p) · GW(p)
In general I very much appreciate people reasoning from examples like these. The sarcasm does make me less motivated to engage with this thoroughly, though. Anyway, idk how to come up with general rules for which abstractions are useful and which aren't. Seems very hard. But when we have no abstractions which are empirically verified to work well in modelling a phenomenon (like intelligence), it's easy to overestimate how relevant our best mathematics is, because proofs are the only things that look like concrete progress.
On Big-O analysis in particular: this is a pretty interesting example actually, since I don't think it was obvious in advance that it'd work as well as it has (i.e. that the constants would be fairly unimportant in practice). Need to think more about this one.
Replies from: johnswentworth
↑ comment by johnswentworth · 2020-04-30T21:09:20.665Z · LW(p) · GW(p)
But when we have no abstractions which are empirically verified to work well in modelling a phenomenon (like intelligence)...
We have tons of empirical evidence on this. We may not have a fully general model of intelligence yet, but we don't have a fully general model of physics yet either. We do know of some reasonably general approaches which work well to model intelligence in practice in a very wide variety of situations, and Bayesianism is one of those.
The lack of a fully-general model of all of physics is not a very good argument against quantum mechanics. Likewise, the lack of a fully-general model of all of intelligence is not a very good argument against Bayesianism. In particular, we expect new theories of physics to "add up to normality" - they should reduce to the old theory in places where the old theory worked. The same applies to models of intelligence: they should reduce to Bayesianism in places where Bayesianism works. Like quantum mechanics, that's an awful lot of places.
Replies from: ricraz
↑ comment by Richard_Ngo (ricraz) · 2020-04-30T22:34:52.221Z · LW(p) · GW(p)
We have tons of empirical evidence on this.
What sort of evidence are you referring to; can you list a few examples?
Replies from: johnswentworth
↑ comment by johnswentworth · 2020-04-30T23:09:04.503Z · LW(p) · GW(p)
All of the applications in which Bayesian statistics/ML methods work so well. All of the psychology/neuroscience research on human intelligence approximating Bayesianism. All the robotics/AI/control theory applications where Bayesian methods are used in practice.
Replies from: ricraz
↑ comment by Richard_Ngo (ricraz) · 2020-04-30T23:30:31.496Z · LW(p) · GW(p)
All of the applications in which Bayesian statistics/ML methods work so well. All the robotics/AI/control theory applications where Bayesian methods are used in practice.
This does not really seem like much evidence to me, because for most of these cases non-bayesian methods work much better. I confess I personally am in the "throw a massive neural network at it" camp of machine learning; and certainly if something with so little theoretical validation works so well, it makes one question whether the sort of success you cite really tells us much about bayesianism in general.
All of the psychology/neuroscience research on human intelligence approximating Bayesianism.
I'm less familiar with this literature. Surely human intelligence *as a whole* is not a very good approximation to bayesianism (whatever that means). And it seems like most of the heuristics and biases literature is specifically about how we don't update very rationally. But at a lower level, I defer to your claim that modules in our brain approximate bayesianism.
Then I guess the question is how to interpret this. It certainly feels like a point in favour of some interpretation of bayesianism as a general framework. But insofar as you're thinking about an interpretation which is being supported by empirical evidence, it seems important for someone to formulate it in such a way that it could be falsified. I claim that the way bayesianism has been presented around here (as an ideal of rationality) is not a falsifiable framework, and so at the very least we need someone else to make the case for what they're standing for.
Replies from: johnswentworth
↑ comment by johnswentworth · 2020-05-01T00:13:23.931Z · LW(p) · GW(p)
Problem is, "throw a massive neural network at it" fails completely for the vast majority of practical applications. We need astronomical amounts of data to make neural networks work. Try using them at a small company on a problem with a few thousand data points; it won't work.
I see the moral of that story as: if you have enough data, any stupid algorithm will work. It's when data is not superabundant that we need Bayesian methods, because nothing else reliably works. (BTW, this is something we could guess based on Bayesian foundations: Cox's theorem and the information-theoretic foundations of Bayesian probability do not depend on infinite data for any particular problem, whereas things like frequentist statistics or brute-force neural nets do.)
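As a toy illustration of the small-data point (a minimal sketch with made-up observations, not a claim about any particular application): a conjugate Bayesian update still gives a sensible, uncertainty-aware answer from a handful of data points, which is exactly the regime where fitting a large neural network is hopeless.

```python
# Beta-Binomial conjugate update: estimate a success probability from only
# five observations, starting from a uniform Beta(1, 1) prior.
alpha, beta = 1.0, 1.0
observations = [1, 0, 1, 1, 0]   # hypothetical data: 3 successes, 2 failures

for x in observations:
    alpha += x
    beta += 1 - x

posterior_mean = alpha / (alpha + beta)
print(f"Posterior: Beta({alpha:.0f}, {beta:.0f}), mean = {posterior_mean:.3f}")
# Posterior is Beta(4, 3) with mean 4/7 ≈ 0.571, and its spread still
# honestly reflects how little data we have.
```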
(Side note for people confused about how that plays with the comment at the top of this thread: the relevant limit there was not the limit of infinite data, but the limit of reasoning over all possible models.)
I claim that the way bayesianism has been presented around here (as an ideal of rationality) is not a falsifiable framework, and so at the very least we need someone else to make the case for what they're standing for.
Around here, rationality is about winning. To the extent that we consider Bayesianism an ideal of rationality, that can be falsified by outperforming Bayesianism, in places where behavior of that ideal can be calculated or at least characterized enough to prove that something else outperforms the supposed ideal.
comment by lc · 2020-05-01T04:07:38.257Z · LW(p) · GW(p)
To be honest, I don't really know what "Bayesianism as a conceptual framework" is, or if it even exists at all. Bayes' theorem is an equation, and Bayesianism to the rest of the world is a formalization of probability based on Bayes' theorem. It's certainly not this comical strawman of a hypothetical general AI that doesn't understand that its prediction algorithms take time to run.
Replies from: ricraz
↑ comment by Richard_Ngo (ricraz) · 2020-05-01T10:34:39.661Z · LW(p) · GW(p)
I agree that I am not critiquing "Bayesianism to the rest of the world", but rather a certain philosophical position that I see as common amongst people reading this site. For example, I interpret Eliezer as defending that position here (note that the first paragraph is sarcastic):
Clearly, then, a Carnot engine is a useless tool for building a real-world car. The second law of thermodynamics, obviously, is not applicable here. It's too hard to make an engine that obeys it, in the real world. Just ignore thermodynamics - use whatever works.
This is the sort of confusion that I think reigns over they who still cling to the Old Ways.
No, you can't always do the exact Bayesian calculation for a problem. Sometimes you must seek an approximation; often, indeed. This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.
Also, insofar as AIXI is a "hypothetical general AI that doesn't understand that its prediction algorithms take time to run", I think "strawman" is a little inaccurate.
Anyway, thanks for the comment. I've updated the first paragraph to make the scope of this essay clearer.
Replies from: lc
↑ comment by lc · 2020-05-01T20:58:16.892Z · LW(p) · GW(p)
Somehow you and I are drawing exactly opposite conclusions from that paragraph. He is explicitly noting that your interpretation is incorrect: approximations are necessary because of time constraints. All he is saying is that underneath, the best possible action is determined by Bayesian arithmetic, even if it's better on net to choose the approximation because of compute constraints. Just because general relativity is more "true" than Newtonian mechanics doesn't mean that it's somehow optimal to use it to track the trajectory of mortar fire.
comment by Sublation · 2020-04-30T12:05:17.802Z · LW(p) · GW(p)
Could you say a bit more about why you think we should quantify the accuracy of credences with a strictly proper scoring rule, without reference to optimality proofs? I was personally confused about what principled reasons we had to think that strictly proper scoring rules were the only legitimate measures of accuracy, until I read Levinstein's paper offering a pragmatic vindication of such rules.
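For readers who want the definition pinned down: a scoring rule is strictly proper when reporting your true credence is the unique minimiser (or maximiser, depending on sign convention) of your expected score. A quick numerical check for the Brier score, included only as a sketch of what that property means:

```python
import numpy as np

def expected_brier(p_true, q_report):
    # Expected Brier score when the event has true probability p_true and
    # the forecaster reports credence q_report.
    return p_true * (1 - q_report) ** 2 + (1 - p_true) * q_report ** 2

p_true = 0.7
candidate_reports = np.linspace(0, 1, 1001)
scores = [expected_brier(p_true, q) for q in candidate_reports]
best_report = candidate_reports[np.argmin(scores)]
print(f"Expected Brier score is minimised at q = {best_report:.3f}")  # 0.700
# Strict propriety: the unique optimum is honest reporting, q = p_true.
```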
comment by TAG · 2020-05-04T15:50:52.783Z · LW(p) · GW(p)
B: I think bayesianism has definitely made a substantial contribution to philosophy. It tells us what it even means to assign a probability to an event, and cuts through a lot of metaphysical bullshit
It formalises the idea of subjective probability whilst saying nothing about the nature or existence of objective probability.
comment by JesseClifton · 2020-05-03T17:47:10.598Z · LW(p) · GW(p)
I agree with the rejection of strong Bayesianism. I don’t think it follows from what you’ve written, though, that “bayesianism is not very useful as a conceptual framework for thinking either about AGI or human reasoning”.
I'm probably just echoing things that have been said many times before, but:
You seem to set up a dichotomy between two uses of Bayesianism: modeling agents as doing something like "approximate Solomonoff induction", and Bayesianism as just another tool in our statistical toolkit. But there is a third use of Bayesianism, the way that sophisticated economists and political scientists use it: as a useful fiction for modeling agents who try to make good decisions in light of their beliefs and preferences. I’d guess that this is useful for AI, too. These will be really complicated systems and we don’t know much about their details yet, but it will plausibly be reasonable to model them as “trying to make good decisions in light of their beliefs and preferences”. In turn, the Bayesian framework plausibly allows us to see failure modes that are common to many boundedly rational agents.
Perhaps a fourth use is that we might actively want to try to make our systems more like Bayesian reasoners, at least in some cases. For instance, I mostly think about failure modes in multi-agent systems. I want AIs to compromise with each other instead of fighting. I’d feel much more optimistic about this if the AIs could say “these are our preferences encoded as utility functions, these are our beliefs encoded as priors, so here is the optimal bargain for us given some formal notion of fairness” --- rather than hoping that compromise is a robust emergent property of their training.
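To make the "optimal bargain given some formal notion of fairness" idea concrete, here is a minimal sketch using the Nash bargaining solution over a handful of hypothetical options (the utilities and the choice of solution concept are illustrative assumptions, not a proposal for how such systems would actually be built):

```python
# Each joint option gives a pair of utilities (agent 1, agent 2); if the
# agents fail to agree they fall back to the disagreement outcome ("fighting").
options = {
    "compromise_a":  (4.0, 6.0),
    "compromise_b":  (5.0, 5.0),
    "favour_agent1": (8.0, 1.0),
}
disagreement = (2.0, 2.0)

def nash_product(utilities):
    # Product of each agent's gain over the disagreement point; the Nash
    # bargaining solution picks the feasible option maximising this.
    u1, u2 = utilities
    d1, d2 = disagreement
    return (u1 - d1) * (u2 - d2)

best_option = max(options, key=lambda name: nash_product(options[name]))
print(best_option)  # "compromise_b": gains 3 * 3 = 9 beat 2 * 4 = 8 and 6 * -1 = -6
```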
Replies from: ricraz
↑ comment by Richard_Ngo (ricraz) · 2020-05-03T18:28:50.222Z · LW(p) · GW(p)
There is a third use of Bayesianism, the way that sophisticated economists and political scientists use it: as a useful fiction for modeling agents who try to make good decisions in light of their beliefs and preferences. I’d guess that this is useful for AI, too. These will be really complicated systems and we don’t know much about their details yet, but it will plausibly be reasonable to model them as “trying to make good decisions in light of their beliefs and preferences”.
Perhaps a fourth use is that we might actively want to try to make our systems more like Bayesian reasoners, at least in some cases.
My post was intended to critique these positions too. In particular, the responses I'd give are that:
- There are many ways to model agents as “trying to make good decisions in light of their beliefs and preferences”. I expect bayesian ideas to be useful for very simple models, where you can define a set of states to have priors and preferences over. For more complex and interesting models, I think most of the work is done by considering the cognition the agents are doing, and I don't think bayesianism gives you particular insight into that for the same reasons I don't think it gives you particular insight into human cognition.
- In response to "The Bayesian framework plausibly allows us to see failure modes that are common to many boundedly rational agents": in general I believe that looking at things from a wide range of perspectives allows you to identify more failure modes - for example, thinking of an agent as a chaotic system might inspire you to investigate adversarial examples. Nevertheless, apart from this sort of inspiration, I think that the bayesian framework is probably harmful when applied to complex systems because it pushes people into using misleading concepts like "boundedly rational" (compare your claim with the claim that a model in which all animals are infinitely large helps us identify properties that are common to "boundedly sized" animals).
- "We might actively want to try to make our systems more like Bayesian reasoners": I expect this not to be a particularly useful approach, insofar as bayesian reasoners don't do "reasoning". If we have no good reason to think that explicit utility functions are something that is feasible in practical AGI, except that it's what ideal bayesian reasoners do, then I want to discourage people from spending their time on that instead of something else.
↑ comment by JesseClifton · 2020-05-03T19:31:02.376Z · LW(p) · GW(p)
I don't think bayesianism gives you particular insight into that for the same reasons I don't think it gives you particular insight into human cognition
In the areas I focus on, at least, I wouldn’t know where to start if I couldn’t model agents using Bayesian tools. Game-theoretic concepts like social dilemma, equilibrium selection, costly signaling, and so on seem indispensable, and you can’t state these crisply without a formal model of preferences and beliefs. You might disagree that these are useful concepts, but at this point I feel like the argument has to take place at the level of individual applications of Bayesian modeling, rather than a wholesale judgement about Bayesianism.
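As one small example of what stating these concepts crisply can look like (the payoffs and beliefs below are invented for illustration): a one-shot social dilemma in which an agent's expected utilities depend on its credence that the other player cooperates.

```python
# Prisoner's-dilemma-style payoffs for the row player: payoff[my_action][their_action].
payoff = {
    "cooperate": {"cooperate": 3.0, "defect": 0.0},
    "defect":    {"cooperate": 5.0, "defect": 1.0},
}

def expected_utility(my_action, p_other_cooperates):
    # Expected payoff under a probabilistic belief about the other player.
    return (p_other_cooperates * payoff[my_action]["cooperate"]
            + (1 - p_other_cooperates) * payoff[my_action]["defect"])

for belief in [0.2, 0.5, 0.9]:
    eu_c = expected_utility("cooperate", belief)
    eu_d = expected_utility("defect", belief)
    print(f"P(other cooperates) = {belief}: EU(cooperate) = {eu_c:.2f}, EU(defect) = {eu_d:.2f}")
# With these payoffs defection dominates whatever the belief, which is
# precisely the dilemma structure the formal model makes explicit.
```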
misleading concepts like "boundedly rational" (compare your claim with the claim that a model in which all animals are infinitely large helps us identify properties that are common to "boundedly sized" animals)
I’m not saying that the idealized model helps us identify properties common to more realistic agents just because it's idealized. I agree that many idealized models may be useless for their intended purpose. I’m saying that, as it happens, whenever I think of various agentlike systems it strikes me as useful to model those systems in a Bayesian way when reasoning about some of their aspects --- even though the details of their architectures may differ a lot.
I didn’t quite understand why you said “boundedly rational” is a misleading concept, I’d be interested to see you elaborate.
if we have no good reason to think that explicit utility functions are something that is feasible in practical AGI
I’m not saying that we should try to design agents who are literally doing expected utility calculations over some giant space of models all the time. My suggestion was that it might be good --- for the purpose of attempting to guarantee safe behavior --- to design agents which in limited circumstances make decisions by explicitly distilling their preferences and beliefs into utilities and probabilities. It's not obvious to me that this is intractable. Anyway, I don't think this point is central to the disagreement.
Replies from: ricraz
↑ comment by Richard_Ngo (ricraz) · 2020-05-03T19:47:13.865Z · LW(p) · GW(p)
Game-theoretic concepts like social dilemma, equilibrium selection, costly signaling, and so on seem indispensable
I agree with this. I think I disagree that "stating them crisply" is indispensable.
I wouldn’t know where to start if I couldn’t model agents using Bayesian tools.
To be a little contrarian, I want to note that this phrasing has a certain parallel with the streetlight effect: you wouldn't know how to look for your keys if you didn't have the light from the streetlamp. In particular, this is also what someone would say if we currently had no good methods for modelling agents, but bayesian tools were the ones which seemed good.
Anyway, I'd be interested in having a higher-bandwidth conversation with you about this topic. I'll get in touch :)
comment by mako yass (MakoYass) · 2020-05-01T11:08:25.374Z · LW(p) · GW(p)
I've been looking for a word for the community of people who are good at identifying precise, robust, extremely clarifying conceptual frameworks. It seems like a very tight cluster that will grow increasingly defined and self-actualised. "Bayesian" seemed like the best fit for a name. Would you object to that?
Replies from: TAG
↑ comment by TAG · 2020-05-01T11:48:59.868Z · LW(p) · GW(p)
Depends how badly you want to maintain that you are engaged in something other than philosophy.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2020-05-02T02:01:03.824Z · LW(p) · GW(p)
Not at all? It is metaphysics.