Errata for "From AI to Zombies"
post by Valentin2026 (Just Learning) · 2021-08-12T04:39:05.779Z · LW · GW · 21 comments
This is a question post.
Are there any errata for the "From AI to Zombies" book? There are so many essays, and many of them were written more than 10 years ago. It seems very likely that errors or imprecisions have been discovered since then.
To be specific, I have very low confidence in the chapter on quantum mechanics. It discusses only the Copenhagen and many-worlds interpretations, ignoring all the others that already existed at the time. What about Quantum Bayesianism, for example?
And two more personal accounts:
1) A couple of years ago I was at a conference where a few talks discussed this quite famous gedankenexperiment. When the speakers examined different interpretations, they found that the many-worlds one was too vaguely formulated to give a definitive answer to the problem they considered.
2) About five years ago I talked with a friend about "From AI to Zombies". He had studied interpretations of quantum mechanics far more than I had. He said something like "Great book, but the quantum mechanics chapter is over-simplified".
Answers
The book is derived from a series of postings, and the postings have comments, and the comments point out errors. But that doesn't add up to a set of officially accepted errata.
21 comments
Comments sorted by top scores.
comment by gilch · 2021-08-12T07:27:52.862Z · LW(p) · GW(p)
A lot of the findings of the "soft" sciences, including psychology, didn't survive the replication crisis. There should be material for errata about that part by now.
I found much of the quantum physics sequence confusing, but Sean Carroll still makes a solid case for MWI.
comment by Shmi (shminux) · 2021-08-12T07:05:49.031Z · LW(p) · GW(p)
The book is great for improving one's thinking. My long-standing advice is to ignore anything in it with the word "quantum"; it detracts from the book's message. If you want to learn physics, read a physics book. For a good review of that link in Nature, see Scott Aaronson's post https://www.scottaaronson.com/blog/?p=3975; he also has a review of interpretations at https://www.scottaaronson.com/blog/?p=3628
Replies from: philh
↑ comment by philh · 2021-08-15T20:32:43.176Z · LW(p) · GW(p)
Seems important to note here that the point of the quantum physics sequence, in context, is not to teach you physics. So reading a physics book doesn't give you what the sequence is intended to, and in particular it doesn't take the place of errata for that sequence.
At least, it's been a long time since I've read it, but I'm pretty confident the point isn't to teach you physics. I'm less confident of this, but to my memory the point is something like, "here's a worked example of a place where rationality can guide you to the truth faster than the scientific establishment can".
If Eliezer was wrong here, and rationality didn't help guide him to the truth, I think that would be actually really important. But I also think that having better interpretations of QM than MWI ten years later isn't obviously a failing, since - again, to my memory - he didn't say that MWI was obviously correct but that it was obviously the best we had, or maybe even just obviously better than Copenhagen.
Replies from: shminux, TAG, Just Learning
↑ comment by Shmi (shminux) · 2021-08-15T23:00:57.676Z · LW(p) · GW(p)
I agree that the point was not to teach you physics. It was a tool to teach you rationality. Personally, I think it failed at that, and instead created a local lore guided by the teacher's password, "MWI is obviously right". And yes, I think he said nearly as much on multiple occasions. This post https://www.lesswrong.com/posts/8njamAu4vgJYxbJzN/bloggingheads-yudkowsky-and-aaronson-talk-about-ai-and-many [LW · GW] links a video of him saying as much: https://bloggingheads.tv/videos/2220?in=29:28
Note that Aaronson's position is much weaker, more like "if you were to extrapolate micro to macro assuming nothing new happens..."; see, for example, https://www.scottaaronson.com/blog/?p=1103
we do more-or-less know what could be discovered that would make it reasonable to privilege “our” world over the other MWI branches. Namely, any kind of “dynamical collapse” process, any source of fundamentally-irreversible decoherence between the microscopic realm and that of experience, any physical account of the origin of the Born rule, would do the trick.
and
Admittedly, like most quantum folks, I used to dismiss the notion of “dynamical collapse” as so contrived and ugly as not to be worth bothering with. But while I remain unimpressed by the specific models on the table (like the GRW theory), I’m now agnostic about the possibility itself. Yes, the linearity of quantum mechanics does indeed seem incredibly hard to tinker with. But as Roger Penrose never tires of pointing out, there’s at least one phenomenon—gravity—that we understand how to combine with quantum-mechanical linearity only in various special cases (like 2+1 dimensions, or supersymmetric anti-deSitter space), and whose reconciliation with quantum mechanics seems to raise fundamental problems (i.e., what does it even mean to have a superposition over different causal structures, with different Hilbert spaces potentially associated to them?).
Replies from: philh
↑ comment by philh · 2021-08-17T09:36:58.234Z · LW(p) · GW(p)
It was a tool to teach you rationality. Personally, I think it failed at that, and instead created a local lore guided by the teacher’s password, “MWI is obviously right”.
This could well be the case, I have no particular opinion on it.
And yes, I think he said nearly as much on multiple occasions.
(To clarify, I take it "as much" here means "MWI is obviously right", not "the sequence failed at teaching rationality".)
So the distinction I've been making in my head is between a specific interpretation called MWI, and multi-world interpretations in general. That is, I've been thinking there are other interpretations that we don't call MWI, but which share the property of, something like, "if it looks to you like your observations are collapsing quantum superposition, that's just what happens when you yourself enter superposition".
My (again, vague) understanding is that Eliezer thinks "some interpretation with that property" is obviously correct, but not necessarily the specific interpretation we might call MWI. But if I'm wrong about what MWI means, and it just refers to all interpretations with that property (or there is/can be only one such interpretation), then Eliezer certainly thinks "this is obviously by far the best hypothesis we have" and I agree that it sounds like he also thinks "this is obviously correct". And it seems like Scott is using it in the latter sense in that blog post, at least.
(And, yeah, I find Eliezer pretty convincing here, though I'm not currently capable of evaluating most of the technical arguments. My read is that Scott's weaker position seems to be something like, "okay but we haven't looked everywhere, there are possibilities we have no particular reason to expect to happen but that we can't experimentally rule out yet".)
↑ comment by TAG · 2021-09-05T15:18:04.213Z · LW(p) · GW(p)
At least, it’s been a long time since I’ve read it, but I’m pretty confident the point isn’t to teach you physics. I’m less confident of this, but to my memory the point is something like, “here’s a worked example of a place where rationality can guide you to the truth faster than the scientific establishment can”.
It was supposed to build up to the modest proposal that science is wrong and should be replaced with Bayes [LW · GW]. Of course, that was quietly forgotten about, which indicates how "errata" work around here -- not by putting explicit labels on things, but by implicit shared understanding.
Replies from: philh
↑ comment by philh · 2021-09-07T14:28:57.953Z · LW(p) · GW(p)
"science is wrong and should be replaced with Bayes" isn't quite how I'd describe the thesis of that post. In particular, that makes it sound like Eliezer wants science as an institution to be replaced with something like "just teach people Bayes", and I don't think he suggests that. If anything, the opposite. ("Do you want Aumann thinking that once you’ve got Solomonoff induction, you can forget about the experimental method? Do you think that’s going to help him? And most scientists out there will not rise to the level of Robert Aumann.")
Rather, I think Eliezer wants individuals to be willing to say "if science the institution tells me one thing, and Bayes tells me another, I will trust in Bayes". Is that roughly what you meant?
And I don't think that's been quietly forgotten about, and I'm surprised if you think it has. Like, I feel like there's been an awful lot of Covid content here which is "science the institution is saying this thing, and here's what we should actually believe".
Replies from: TAG
↑ comment by TAG · 2021-09-09T14:30:05.880Z · LW(p) · GW(p)
If Bayes does not give you, as an individual, better answers than science, there is no point in using it to override science.
If some Bayesian approach -- there's a lot of inconsistency about what "Bayes" means -- is systematically better than conventional science, the world of professional science should adopt it. It would be inefficient not to.
Replies from: philh
↑ comment by philh · 2021-09-09T18:21:38.484Z · LW(p) · GW(p)
I think this is a false dichotomy; and also I do not know how it intends to engage with my comment.
Edit: okay, to elaborate briefly. I read my comment as: I think Eliezer was not saying X. I think Eliezer was saying Y. I think Y is still active here, and if you think otherwise I'm surprised.
I read your comment as: X is true iff Y is true. I think you're wrong, but even if you're right... okay, so what?
Do you think Eliezer was saying both X and Y? Do you think both X and Y have been forgotten about? (Bear in mind that even if they imply each other, people might not realize this [LW · GW].) What do you make of my example of Covid content, which to me was evidence that Y is still active here?
Replies from: TAG, TAG, TAG
↑ comment by TAG · 2021-09-09T19:00:59.195Z · LW(p) · GW(p)
If Y is supposed to be your third option of using Bayes if it suits you, then it is still active here, and is evidence of motte-and-bailey-ism about Bayes.
Replies from: philh
↑ comment by philh · 2021-09-09T20:43:16.053Z · LW(p) · GW(p)
No, Y is not meant to be that, and also that option was not "use Bayes if it suits you" and cannot be fairly summed up that way.
Rather, X was
Eliezer wants science as an institution to be replaced with something like “just teach people Bayes”
and Y was
Eliezer wants individuals to be willing to say “if science the institution tells me one thing, and Bayes tells me another, I will trust in Bayes”.
(It is perhaps unsurprising that the things I referred to in shorthand as X and Y, are things that I had written in an earlier comment.)
Allow me to unpack in light of this.
I said previously [LW(p) · GW(p)] that I do not think Eliezer was saying "Bayes should replace science as an institution". Rather, I think Eliezer was saying that individuals should be willing to trust in Bayes over science. And I think LessWrong-at-large currently thinks that individuals should be willing to trust in Bayes over science.
Thus, when you say "that was quietly forgotten about" - "that" being "the modest proposal that science is wrong and should be replaced with Bayes" - I think you're mistaken. I think that, depending how I interpret your comment: either you think something was said that was not said, or you think something has been forgotten that has not been forgotten.
Your reply [LW(p) · GW(p)] to this suggested that Bayes should replace science iff individuals should be willing to trust in Bayes over science.
I think this is wrong. But even if it's right, I do not think your reply engaged with the comment it was replying to.
Do you think Eliezer was saying "Bayes should replace science as an institution"? Do you think Eliezer was saying "individuals should be willing to trust in Bayes over science"? Do you think LessWrong-at-large currently thinks "Bayes should replace science as an institution"? Do you think LessWrong-at-large currently thinks "individuals should be willing to trust in Bayes over science"?
Your "quietly forgotten about" comment suggests that you think Eliezer was saying something that LessWrong-at-large currently does not think. But I do not know what you think that might be.
I'm going to limit myself to two more comments in this thread. I am not optimistic that they will be productive.
Replies from: TAG
↑ comment by TAG · 2021-09-11T19:46:25.617Z · LW(p) · GW(p)
(It is perhaps unsurprising that the things I referred to in shorthand as X and Y, are things that I had written in an earlier comment.)
It is perhaps unhelpful that you never said which of your previous comments they referred to.
Your reply to this suggested that Bayes should replace science iff individuals should be willing to trust in Bayes over science.
I suggested Bayes should replace science if it is objectively, systematically better. In other words, Bayes replacing science is something EY should have said, because it follows from the other claim.
But I can't get "you" to make a clear statement that "individuals should use Bayes" means "Bayes is systematically better".
Instead, you said
For example, that Bayes might give some people better answers than science, and not give other people better answers than science?
If Bayes is better without being systematically better, if it only works for some people, then you shouldn't replace science with it. But what does that even mean? Why would it only work for some people? How are you testing that?
And where the hell did Yudkowsky say anything of the kind?
I'm not the only person who ever thought EY meant to replace science with Bayes (and it's a reasonable conclusion if you believe that Bayes is systematically better for individuals)
For instance see this...please
I can't be completely sure that's what he meant because he is such an unclear writer ... but you can't be completely sure of your interpretation either, for the same reason.
Replies from: philh
↑ comment by philh · 2021-09-12T21:50:35.723Z · LW(p) · GW(p)
It is perhaps unhelpful that you never said which of your previous comments they referred to.
I'm afraid I don't think this was very ambiguous. Like, the bit where I introduced X and Y read
I read my comment as: I think Eliezer was not saying X. I think Eliezer was saying Y. I think Y is still active here, and if you think otherwise I’m surprised.
At the time of writing, I had exactly one comment in this thread which remotely fit that schema. And, the start of that comment was
I think this is a false dichotomy; and also I do not know how it intends to engage with my comment.
where I think "this" is, in context, fairly obviously referring to the comment it replies to; and so "my comment" is fairly obviously the comment that was replying to? And then when I refer to "my comment" in the next paragraph, the obvious conclusion is that it's the same comment, which again is the only one which remotely fit the schema outlined in that paragraph.
I predict that at least 90% of LW commenters would have parsed this correctly.
(Is this relevant? I think it must be, because a lot of this discussion is about "what did person mean when they said thing?" And, there's no polite way to say this, but if you can't get this right... when you accuse Eliezer of being an unclear writer, I cannot help but consider that maybe instead you're a bad reader, and I suggest that you consider this possibility too. Of course, if I'm wrong and what I said wasn't clear, I have to downweight my trust in my own reading comprehension.)
I suggested Bayes should replace science if it is objectively, systematically better. In other words, Bayes replacing science is something EY should have said, because it follows from the other claim.
But do you think he actually said it? I reminded you, earlier, of the Sally-Anne fallacy [LW · GW], the failure to distinguish between "this person said a thing" and "this person said something that implies a thing", and I feel I must remind you again. Because if the thing you think LW has "quietly forgotten about" is something that Eliezer didn't say, but that you think follows from something Eliezer said, that is a very different accusation!
It might be that LW and/or Eliezer don't realize the thing follows from what Eliezer said, and this would reflect badly on LW and/or Eliezer but it wouldn't say much about how errata work around here.
Or, of course, it might be that you are wrong, and the thing doesn't follow from what Eliezer said.
But I can’t get “you” to make a clear statement that “individuals should use Bayes” means “Bayes is systematically better”.
I mean, I think individuals should use Bayes. Whether Bayes is "systematically better" than science is, I think, a meaningless question without specifying what it's supposed to be better at. And even if we specify that, I don't see that the first thing would mean the second thing. So I'm not sure what clear statement you expect me to make...
...and I don't know why you'd care whether or not I make it? My own personal opinions here seem basically irrelevant. You accused LW-at-large of quietly forgetting something that Eliezer said. Whether that accusation holds or not has little to do with whether I personally agree with the thing.
If Bayes is better without being systematically better, if it only works for some people, then you shouldn’t replace science with it. But what does that even mean? Why would it only work for some people?
Ugh, fine. I've been trying to avoid this but here goes.
So first off I don't think I know what you mean by "systematically". Eliezer doesn't use the word. It seems clear, at least, that he's dubious "teach more Bayes to Robert Aumann" would cause Robert Aumann to have more correct beliefs. So, maybe Eliezer doesn't even think Bayes is systematically better in the sense that you mean? Again, I don't know what that sense is, so I don't know. But putting that aside...
One reason it might only work for some people is because some people are less intelligent than others? Like, if I tell you you're going to need to solve a hedge maze and you'll be judged on time but you can see the layout before you enter, then "learn the fastest route before you enter" is systematically better than "take the left path at every fork", in the sense that you'll get through the maze faster - if you're capable of memorizing the fastest route, and keeping track of where you are. If you're not capable of that, I'd advise you to stick to the left path strategy.
I'm not saying this is what's going on, just... it seems like an obvious sort of thing to consider, and I find it bizarre that you haven't considered it.
Another thing here is: what works/is optimal for a person might not work/be optimal for a group? One person making paperclips will do a bunch of different things, two people making paperclips together might only do half of those things each, but also some extra things because of coordination overhead and perhaps differing incentives.
And then it might not even be meaningful to talk about a group in the same way as an individual. An agent might assign probability 0.6 to a hypothesis; another agent might assign probability 0.2; what probability does "the group consisting of these two agents" assign? If each agent does a Bayesian update upon observing evidence, does the group also do a Bayesian update?
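A minimal numerical sketch of that last question (not from the original comment; the likelihoods and the rule that the "group's" credence is the simple average of its members' credences are assumptions chosen purely for illustration):

```python
# A minimal sketch (made-up numbers): if each member of a group updates by
# Bayes' rule, does the group's averaged credence also follow Bayes' rule?

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) given the prior P(H) and the likelihoods of evidence E."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

priors = [0.6, 0.2]                      # the two agents' credences in H
p_e_given_h, p_e_given_not_h = 0.8, 0.4  # assumed likelihoods of the evidence E

# Route 1: each agent updates individually, then we average the posteriors.
posteriors = [bayes_update(p, p_e_given_h, p_e_given_not_h) for p in priors]
avg_of_posteriors = sum(posteriors) / len(posteriors)                     # ~0.542

# Route 2: treat the averaged prior as the "group's" credence and update it.
avg_prior = sum(priors) / len(priors)                                     # 0.4
posterior_of_avg = bayes_update(avg_prior, p_e_given_h, p_e_given_not_h)  # ~0.571

print(avg_of_posteriors, posterior_of_avg)  # the two routes disagree
```

Under these assumed numbers, averaging the posteriors gives about 0.54 while Bayesian-updating the averaged prior gives about 0.57, so a group whose credence is defined by simple averaging does not itself update like a single Bayesian agent.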
All of which is to say, I am baffled by your insistence that if Bayes is better than science for individuals, we should replace science-the-institution with Bayes. This seems unjustified on many levels.
And where the hell did Yudkowsky say anything of the kind?
It sounds to me like:
- You said "X iff Y".
- I said, "I don't think so, here's one reason you might have one but not the other." (I've now given a much more detailed explanation of why I think you're mistaken.)
- You're asking where EY said anything like what I said.
This seems confused, because I never said that EY said anything like what I said. I don't think there's any particular reason to expect him to have done. He cannot explicitly reject every possible mistake someone might make while reading his essays.
I’m not the only person who ever thought EY meant to replace science with Bayes [...] For instance see this...please
Okay, so another person misinterpreted him in a similar way. I'm not sure what I'm supposed to make of this. Even if EY was unclear, that's also a different criticism than the idea that LW has quietly forgotten things.
you can’t be completely sure of your interpretation either, for the same reason.
Maybe not, but, like... you brought it up? If you think you know what he meant, stand by it and defend your interpretation. If you don't think you know what he meant, admit that outright. If you don't think you know what he meant, but you think I don't know either... so what? Does me being also wrong vindicate you somehow? Feels like a prosecutor being asked "do you have any reason to think the defendant was near the scene of the crime that night" and replying "okay, maybe not, but you don't know where he was either".
I note that you have once again declined to answer my direct questions, so I'll try to fill in what I think you think.
Do you think Eliezer was saying “Bayes should replace science as an institution”?
You apparently don't think he said this? (At least I don't think you've justified the idea that he has. Nor have you addressed the bit I quoted above about Robert Aumann, where I think he suggests the opposite of this.) You just think it follows from something he did say. I've now explained in some detail both why I think you're wrong about that, and why even if you were right, it would be important to distinguish from him actually saying it.
Do you think Eliezer was saying “individuals should be willing to trust in Bayes over science”?
Dunno if you think this, probably not super relevant.
Do you think LessWrong-at-large currently thinks “Bayes should replace science as an institution”?
I guess you think this is not the case, and this is what you think has been "quietly forgotten about". I agree LW-at-large does not currently think this, I just think EY never proposed it either.
Do you think LessWrong-at-large currently thinks “individuals should be willing to trust in Bayes over science”?
Dunno if you think this either, also probably not super relevant.
Replies from: TAG
↑ comment by TAG · 2021-09-12T23:03:36.876Z · LW(p) · GW(p)
But do you think he actually said it?
I don't think he said it clearly, and I don't think he said anything else clearly. Believe it or not, what I am doing is charitable interpretation...I am trying to make sense of what he said. If he thinks Bayes is systematically better than science, that would imply "Bayes is better than science, so replace science with Bayes", because that makes more sense than "Bayes is better than science, so don't replace Science with Bayes". So I think that is what he is probably saying.
the failure to distinguish between “this person said a thing” and “this person said something that implies a thing”,
Maybe it's the Sally-Anne fallacy, maybe it's charitable interpretation. One should only use charitable interpretation where the meaning is unclear. Sally-Anne is only a fallacy where the meaning is clear.
If you think you know what he meant, stand by it and defend your interpretation. If you don’t think you know what he meant, admit that outright.
I am engaging in probabilistic reasoning.
Okay, so another person misinterpreted him in a similar way.
Why should I make any attempt to provide evidence, when you are going to reject it out of hand?
He cannot explicitly reject every possible mistake someone might make while reading his essays.
No, but he could do a lot better. (An elephant-in-the-room issue here is that even though he is still alive, no-one expects him to pop up and say something that actually clarifies the issue).
So first off I don’t think I know what you mean by “systematically”.
It's about the most basic principle of epistemology, and one which the rationalsphere accepts: lucky guesses and stopped clocks are not knowledge, even when they are right, because they are not reliable and systematic.
I think, a meaningless question without specifying what it’s supposed to be better at.
Obviously, that would be the stuff that science is already doing, since EY has argued, at immense length, that it gets quantum mechanics right.
Eliezer doesn’t use the word. It seems clear, at least, that he’s dubious “teach more Bayes to Robert Aumann” would cause Robert Aumann to have more correct beliefs. So, maybe Eliezer doesn’t even think Bayes is systematically better in the sense that you mean?
If there is some objective factor about a person that makes them incapable of understanding Bayes, then a Bayesian should surely identify it. But where else has EY ever so much as hinted that some people are un-Bayesian?
Dunno if you think this either, also probably not super relevant.
Why do I have to tell you what I think in order for you to tell me what you think?
Here's the exchange:
Me: Do you think LessWrong-at-large currently thinks “individuals should be willing to trust in Bayes over science”?
You: Dunno if you think this either, also probably not super relevant.
Replies from: philh
↑ comment by philh · 2021-09-13T10:12:03.670Z · LW(p) · GW(p)
Believe it or not, what I am doing is charitable interpretation...I am trying to make sense of what he said.
You may be trying to be charitable. You are not succeeding, partly because what you consider to be "making sense" does not make sense.
But also partly because you're routinely failing to acknowledge that you're putting your own spin on things. There is a big difference between "Eliezer said X" and "I don't know what Eliezer was trying to say, but my best guess is that he meant X".
After you say "Eliezer said X" and I say "I don't think Eliezer was trying to say X, I think he was trying to say Y", there's a big difference between "Y implies X" and "okay, I guess I don't really know what he was trying to say, but it seems to me that X follows from Y so my best guess is he meant X".
If this is your idea of charitable interpretation, I wish you would be less charitable.
If he thinks Bayes is systematically better than science, that would imply “Bayes is better than science, so replace science with Bayes”, because that makes more sense than “Bayes is better than science, so don’t replace Science with Bayes”.
I have explained why this is wrong.
Sally-Anne is only a fallacy where the meaning is clear.
This seems exactly wrong. Deciding "someone believes P, and P implies Q, so they must believe Q" is a fallacy because it is possible for someone to believe P, and for P to imply Q, and yet for the person not to believe Q. It's possible even if they additionally believe that P implies Q; people have been known to be inconsistent.
This inference may be correct, mind you, and certainly someone believing P (which implies Q) is reason to suspect that they believe Q. "Fallacies as weak Bayesian evidence" and so forth. But it's still a fallacy in the same way as "P implies Q; Q; therefore P". It is not a valid inference in general.
That's where the meaning is clear. Where the meaning is unclear... what you're doing instead is "someone kind of seems to believe P? I dunno though. And P implies Q. So they definitely believe Q". Which is quite clearly worse.
I am engaging in probabilistic reasoning.
Are you, really? Can you point to anything you've said which supports this?
Like, skimming the conversation, as far as I can tell I have not once seen you express uncertainty in your conclusions. You have not once said anything along the lines of "I think Eliezer might have meant this, and if so then... but on the other hand he might have meant this other thing, in which case..."
You have, after much goading, admitted that you can't be sure you know what Eliezer meant. But I haven't seen you carry that uncertainty through to anything else.
I don't know what's going on in your head, but I would be surprised if "probabilistic reasoning" was a good description of the thing you're doing. From the outside, it looks like the thing you're doing might be better termed "making a guess, and then forgetting it was a guess".
Why should I make any attempt to provide evidence, when you are going to reject it out of hand?
I didn't reject the evidence? I agree that it is evidence someone else interpreted Eliezer in the same way you did, which as far as I can tell is what you were trying to show when you presented the evidence?
I still think it's a misinterpretation. This should not be a surprise. It's not like the other person gave me any more reason than you have, to think that Eliezer meant the thing you think he meant. Neither you nor he appears to have actually quoted Eliezer, for example, beyond his titles. (Whereas I have provided a quote which suggests your interpretation is wrong, and which you have all-but ignored.)
And I still don't know what I'm to make of it. I still don't know why you think "someone else also interpreted EY in this way" is particularly relevant.
No, but he could do a lot better. (An elephant-in-the-room issue here is that even though he is still alive, no-one expects him to pop up and say something that actually clarifies the issue).
Perhaps he could, but like... from my perspective you've made an insane leap of logic and you're expecting him to clear up that it's not what he meant. But there are an awful lot of possible insane leaps of logic people can make, and surely have made when reading this essay and others. Why would he spend his time clearing up yours specifically?
It’s about the most basic principle of epistemology, and one which the rationalsphere accepts: lucky guesses and stopped clocks are not knowledge, even when they are right, because they are not reliable and systematic.
I still don't know what you mean by "systematically". If you expected this example to help, I don't know why you expected that.
Obviously, that would be the stuff that science is already doing, since EY has argued, at immense length, that it gets quantum mechanics right.
What stuff specifically? Science is doing a lot of things. Is Bayes supposed to be better than science at producing jobs for grant writers?
And, I point out that this is you replying to what was mostly an aside, while ignoring the bits that seemed to me more important. You've ignored where I said "even if we specify that, I don’t see...". You've ignored where I said "I'm not sure what clear statement you expect me to make". You've ignored where I said "I don't know why you'd care whether or not I make it".
If there is some objective factor about a person that makes them incapable of understanding Bayes, then a Bayesian should surely identify it. But where else has EY ever so much as hinted that some people are un-Bayesian?
I don't know why you're asking the question, but I'm pretty sure it rests on a confused premise in ways that I've explained, so I'm not going to try to figure this out.
Why do I have to tell you what I think in order for you to tell me what you think?
I don't think I've been coy about what I think? I don't know what you're getting at here. My best guess is that you wanted me to explain why I thought "individuals should be willing to trust in Bayes over science" does not imply "we should replace science-the-institution with Bayes", and you were unwilling to answer my questions until I answered this?
If that's what you wanted, then I'd point out firstly that I did give a brief explanation, "Bayes might give some people better answers than science, and not give other people better answers than science". Apparently my brief answer didn't satisfy you, but, well... why should I give a more in depth answer when you've been refusing to answer my questions? I'm not convinced there's any relationship between these things (whether or not I should answer your questions and whether or not you've answered mine; and whether or not you should answer my questions and whether or not I've answered yours). But if there is, I guess I think I feel like the ball's in your court and has been for a while.
But also, I repeatedly said that I thought it was beside the point. "I do not know how it intends to engage with my comment." "I think you’re wrong, but even if you’re right… okay, so what?" "I’m not inclined to engage on that further, because I don’t think it’s particularly relevant". "I think this is wrong. But even if it’s right, I do not think your reply engaged with the comment it was replying to."
If you thought it was not beside the point, you had ample opportunity to try to convince me? As far as I can tell you did not try.
And also, recall that you started this thread by leveling an accusation. I was asking my questions to try to get you to elaborate on what you had said, because I did not know what accusation you were making. If you refuse to explain the meaning of your terms, until the person asking for clarification answers other questions you ask of them...
...and if you do this while complaining about someone else not writing as clearly as you might hope, and not popping in to clarify his meanings...
then, honestly, I cannot help but wonder what you think is happening, here. It does not feel, for example, like your accusation was coming from a place of "here is a mistake LW is making, I will try to help LW see that they are making this mistake, and that will make LW a better place".
Here’s the exchange:
Me: Do you think LessWrong-at-large currently thinks “individuals should be willing to trust in Bayes over science”?
You: Dunno if you think this either, also probably not super relevant.
What? This exchange has not happened. I asked that question, and when you declined to answer, I wrote the second line too. I hope you already know this? (Honestly though, I'm genuinely not sure that you do.) But I have no idea what you're trying to say.
(Are you asking me whether I think LW-at-large currently thinks that? If so the answer is "yes, I already said that I think that, that's why I mentioned Covid content.")
I am now tapping out. I guess I might make low-effort comments going forwards, if I think that will serve my purposes. But I won't expend significant energy talking to you on this subject unless I somehow become convinced it will be worth my time. (I hope not to expend significant energy talking to you on any subject, in fact.)
I am strong-downvoting your original accusation. I was reluctant to do so at first, because I don't want LW to be an echo chamber; I wanted to check whether the accusation had merit. I am satisfied it has little-to-none.
↑ comment by TAG · 2021-09-09T18:34:54.424Z · LW(p) · GW(p)
Feel free to show how it is a false dichotomy by stating the third option.
Replies from: philh
↑ comment by philh · 2021-09-09T18:39:06.922Z · LW(p) · GW(p)
For example, that Bayes might give some people better answers than science, and not give other people better answers than science?
But I'm not inclined to engage on that further, because I don't think it's particularly relevant; see the edit I made to my previous comment.
Replies from: TAG
↑ comment by TAG · 2021-09-09T18:43:55.942Z · LW(p) · GW(p)
For example, that Bayes might give some people better answers than science, and not give other people better answers than science?
Why?
If that's systematic, then the people who don't get good answers from it are just applying it wrong...and I can repeat my previous comment with "correct Bayes" substituted for "Bayes".
And if it's random, Bayes isn't that much good.
↑ comment by Valentin2026 (Just Learning) · 2021-08-17T05:18:30.333Z · LW(p) · GW(p)
Are there any other examples where rationality guides you to the truth faster than the scientific approach? If so, it would be good to collect and mention them. If not, I am pretty suspicious about the QM one as well.
Replies from: shminux
↑ comment by Shmi (shminux) · 2021-08-18T00:30:40.796Z · LW(p) · GW(p)
I think that rationality as a competing approach to the scientific method is a particularly bad take that leads a lot of aspiring rationalists astray, into the cultish land of "I know more and better than experts in the field because I am a rationalist". Data analysis uses plenty of Bayesian reasoning. Scientists are humans and so are prone to the biases and bad decisions that instrumental rationality is supposed to help with. CFAR-taught skills are likely to be useful for scientists and non-scientists alike.