Posts

Suspiciously balanced evidence 2020-02-12T17:04:20.516Z
"Future of Go" summit with AlphaGo 2017-04-10T11:10:40.249Z
Buying happiness 2016-06-16T17:08:53.802Z
AlphaGo versus Lee Sedol 2016-03-09T12:22:53.237Z
[LINK] "The current state of machine intelligence" 2015-12-16T15:22:26.596Z
Scott Aaronson: Common knowledge and Aumann's agreement theorem 2015-08-17T08:41:45.179Z
Group Rationality Diary, March 22 to April 4 2015-03-23T12:17:27.193Z
Group Rationality Diary, March 1-21 2015-03-06T15:29:01.325Z
Open thread, September 15-21, 2014 2014-09-15T12:24:53.165Z
Proportional Giving 2014-03-02T21:09:07.597Z
A few remarks about mass-downvoting 2014-02-13T17:06:43.216Z
[Link] False memories of fabricated political events 2013-02-10T22:25:15.535Z
[LINK] Breaking the illusion of understanding 2012-10-26T23:09:25.790Z
The Problem of Thinking Too Much [LINK] 2012-04-27T14:31:26.552Z
General textbook comparison thread 2011-08-26T13:27:35.095Z
Harry Potter and the Methods of Rationality discussion thread, part 4 2010-10-07T21:12:58.038Z
The uniquely awful example of theism 2009-04-10T00:30:08.149Z
Voting etiquette 2009-04-05T14:28:31.031Z
Open Thread: April 2009 2009-04-03T13:57:49.099Z

Comments

Comment by gjm on Comments on Jacob Falkovich on loneliness · 2021-09-16T22:28:27.368Z · LW · GW

This is totally peripheral to all the actual points of your essay, but I'd just like to remark on the excellence of this little fragment that you quoted from Jacob:

My argument doesn’t hinge on specific data relating to the intimacy recession and whether the survey counting sex dolls adjusted for inflation.

Inflation! Ha.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-16T22:25:44.160Z · LW · GW

If Loosemore's point is only that an AI wouldn't have separate semantics for those things, then I don't see how it can possibly lead to the conclusion that concerns about disastrously misaligned superintelligent AIs are absurd.

I do not think Yudkowsky's arguments assume that an AI would have a separate module in which its goals are hard-coded. Some of his specific intuition-pumping thought experiments are commonly phrased in ways that suggest that, but I don't think it's anything like an essential assumption in any case.

E.g., consider the "paperclip maximizer" scenario. You could tell that story in terms of a programmer who puts something like "double objective_function() { return count_paperclips(DESK_REGION); }" in their AI's code. But you could equally tell it in terms of someone who makes an AI that does what it's told, and whose creator says "Please arrange for there to be as many paperclips as possible on my desk three hours from now.".
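For concreteness, a purely illustrative toy sketch (hypothetical code of my own, not anyone's actual AI design, with the instruction-reading reduced to a single string match) of how the two framings bottom out in the same thing being maximized:

```cpp
#include <iostream>
#include <string>

// Framing 1: the goal is hard-coded as an objective function.
double objective_hardcoded(int paperclips_on_desk) {
    return paperclips_on_desk;  // more paperclips = higher score, full stop
}

// Framing 2: the goal arrives as a natural-language instruction, read literally,
// with none of the unstated caveats the speaker took for granted.
double objective_from_instruction(const std::string& instruction, int paperclips_on_desk) {
    if (instruction.find("as many paperclips as possible") != std::string::npos) {
        return paperclips_on_desk;  // same quantity to maximize as in framing 1
    }
    return 0.0;
}

int main() {
    std::cout << objective_hardcoded(3) << "\n";  // 3
    std::cout << objective_from_instruction(
        "Please arrange for there to be as many paperclips as possible on my desk", 3) << "\n";  // 3
}
```

Either way, the quantity the system ends up optimizing is just the paperclip count, with the speaker's unstated caveats represented nowhere.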

(I am not claiming that any version of the "paperclip maximizer" scenario is very realistic. It's a nice simple example to suggest the kind of thing that could go wrong, that's all.)

Loosemore would say: this is a stupid scenario, because understanding human language in particular implies understanding that that isn't really a request to maximize paperclips at literally any cost, and an AI that lacks that degree of awareness won't be any good at navigating the world. I would say: that's a reasonable hope but I don't think we have anywhere near enough understanding of how AIs could possibly work to be confident of that; e.g., some humans are unusually bad at that sort of contextual subtlety, and some of those humans are none the less awfully good at making various kinds of things happen.

Loosemore claims that Yudkowsky-type nightmare scenarios are "logically incoherent at a fundamental level". If all that's actually true is that an AI triggering such a scenario would have to be somewhat oddly designed, or would have to have a rather different balance of mental capabilities than an average human being, then I think his claim is very very wrong.

Comment by gjm on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-16T09:58:34.364Z · LW · GW

I take it you didn't do any EY-specific training (because so far as I know that's not a thing you can do with the kinda-public GPT-3, and because I suspect it would need an annoyingly large amount of hardware to do effectively even if you could), and all the knowledge of Eliezer Yudkowsky that GPT-3 shows here is knowledge it just naturally has? ("Naturally", ha. But you know what I mean.)

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-15T23:37:55.915Z · LW · GW

He's asked a lot of questions. His various LW posts are sitting, right now, at scores of +4, +4, +2, +11, +9, +10, +4, +1, +7, -2. This one's slightly negative; none of the others are. It's not the case that this one got treated more harshly because it suggested that something fundamental to LW might be wrong; the same is true of others, including the one that's on +11.

This question received two reasonably detailed answers and a couple of comments (one of them giving good reason to doubt the premise of the question), all of them polite and respectful, as well as some upvotes and slightly more downvotes.

Unless your position is that nothing should ever be downvoted, I'm not sure what here qualifies as being "shot".

(I haven't downvoted this question nor any of Haziq's others; but my guess is that this one was downvoted because it's only a question worth asking if Halpern's counterexample to Cox's theorem is a serious problem, which johnswentworth already gave very good reasons for thinking it isn't in response to one of Haziq's other questions; so readers may reasonably wonder whether he's actually paying any attention to the answers his questions get. Haziq did engage with johnswentworth in that other question -- but from this question you'd never guess that any of that had happened.)

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-15T01:24:37.522Z · LW · GW

Yes, you gave me a "short list of key ideas". So all I have to do to find out what you're actually talking about is to go through everything anyone has ever written about those ideas, and find the bits that refute positions widely accepted on Less Wrong.

This is not actually helpful. Especially as nothing you've said so far gives me very much confidence that the examples you're talking about actually exist; one simple explanation for your refusal to provide concrete examples is that you don't actually have any.

I've put substantial time and effort into this discussion. It doesn't seem to me as if you have the slightest interest in doing likewise; you're just making accusation after accusation, consistently refusing to provide any details or evidence, completely ignoring anything I say unless it provides an opportunity for another cheap shot, moving the goalposts at every turn.

I don't know whether you're actually trolling, or what. But I am not interested in continuing this unless you provide some actual concrete examples to engage with. Do so, and I'll take a look. But if all you want to do is sneer and whine, I've had enough of playing along.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-15T01:15:44.579Z · LW · GW

So did Eliezer Yudkowsky. What's your point?

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-14T23:32:00.352Z · LW · GW

Unfortunately some messengers are idiots (we have already established that most likely either Yudkowsky or Loosemore is an idiot, in this particular scenario). Saying that someone is an idiot isn't shooting the messenger in any culpable sense if in fact they are an idiot, nor if the person making the accusation has reasonable grounds for thinking they are.

So I guess maybe we actually have to look at the substance of Loosemore's argument with Yudkowsky. So far as I can make out, it goes like this.

  • Yudkowsky says: superintelligent AI could well be dangerous, because despite our efforts to arrange for it to do things that suit us (e.g., trying to program it to do things that make us happy) a superintelligent AI might decide to do things that in fact are very bad for us, and if it's superintelligent then it might well also be super-powerful (on account of being super-persuasive, or super-good at acquiring money via the stock market, or super-good at understanding physics better than we do, etc.).
  • Loosemore says: this is ridiculous, because if an AI were really superintelligent in any useful sense then it would be smart enough to see that (e.g.) wireheading all the humans isn't really what we wanted; if it isn't smart enough to understand that then it isn't smart enough to (e.g.) pass the Turing test, to convince us that it's smart, or to be an actual threat; for that matter, the researchers working on it would have turned it off long before, because its behaviour would necessarily have been bizarrely erratic in other domains besides human values.

The usual response to this by LW-ish people is along the lines of "you're assuming that a hypothetical AI, on finding an inconsistency between its actual values and the high-level description of 'doing things that suit its human creators', would realise that its actual values are crazy and adjust them to match that high-level description better; but that is no more inevitable than that humans, on finding inconsistencies between our actual values and the high-level description of 'doing things that lead us to have more surviving descendants', would abandon our actual values in order to better serve the values of Evolution". To me this seems sufficient to establish that Loosemore has not shown that a hypothetical AI couldn't behave in clearly-intelligent ways that mostly work towards a given broad goal, but in some cases diverge greatly from it.

There's clearly more to be said here, but this comment is already rather long, so I'll skip straight to my conclusion: maybe there's some version of Loosemore's argument that's salvageable as an argument against Yudkowsky-type positions in general, but it's not clear to me that there is, and while I personally wouldn't have been nearly as rude as Yudkowsky was I think it's very much not clear that Yudkowsky was wrong. (With, again, the understanding that "idiot" here doesn't mean e.g. "person scoring very badly in IQ tests" but something like "person who obstinately fails to grasp a fundamental point of the topic under discussion".)

I don't think it's indefensible to say that Yudkowsky was shooting the messenger in this case. But, please note, your original comment was not about what Yudkowsky would do; it was about what the LW community in general would do. What did the LW community in general think about Yudkowsky's response to Loosemore? They downvoted it to hell, and several of them continued to discuss things with Loosemore.

One rather prominent LWer (Kaj Sotala, who I think is an admin or a moderator or something of the kind here) wrote a lengthy post in which he opined that Loosemore (in the same paper that was being discussed when Yudkowsky called Loosemore an idiot) had an important point. (I think, though, that he would agree with me that Loosemore has not demonstrated that Yudkowsky-type nightmare scenarios are anything like impossible, contra Loosemore's claim in that paper that "this entire class of doomsday scenarios is found to be logically incoherent at such a fundamental level that they can be dismissed", which I think is the key question here. Sotala does agree with Loosemore that some concrete doomsday scenarios are very implausible.) He made a linkpost for that here on LW. How did the community respond? Well, that post is at +23, and there are a bunch of comments discussing it in what seem to me like constructive terms.

So, I reiterate: it seems to me that you're making a large and unjustified leap from "Yudkowsky called Loosemore an idiot" to "LW should be expected to shoot the messenger". Y and L had a history of repeatedly-unproductive interactions in the past; L's paper pretty much called Y an idiot anyway (by implication, not as frankly as Y called L an idiot); there's a pretty decent case to be made that L was an idiot in the relevant sense; other LWers did not shoot Loosemore even when EY did, and when his objections were brought up again a few years later there was no acrimony.

[EDITED to add:] And of course this is only one case; even if Loosemore were a 100% typical example of someone making an objection to EY's arguments, and even if we were interested only in EY's behaviour and not anyone else, the inference from "EY was obnoxious to RL" to "EY generally shoots the messenger" is still pretty shaky.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-14T22:33:00.510Z · LW · GW

OK, we have a bit of a move in the direction of actually providing some concrete information here, which is nice, but it's still super-vague.

Also, your complaint now seems to be entirely different from your original complaint. Before, you were saying that LW should be expected to "shoot the messenger". Now, you're saying that LW ignores the messenger. Also bad if true, of course, but it's an entirely different failure mode.

So, anyway, I thought I'd try a bit of an experiment. I'm picking random articles from the "Original Sequences" (as listed in the LW wiki), then starting to read the comments at a random place and finding the first substantial objection from that point on (wrapping round to the start if necessary). Let's see what we find.

  • "Timeless Identity": user poke says EY is attacking a strawman when he points out that fundamental particles aren't distinguishable, because no one ever thinks that our identity is constituted by the identity of our atoms, because everyone knows that we eat and excrete and so forth.
    • In the post itself, EY quotes no less a thinker than Derek Parfit arguing that maybe the difference between "numerical identity" and "qualitative identity" might be significant, so it can't be that strawy; and the point of the post is not simply to argue against the idea that our identity is constituted by the identity of our atoms. So I rate this objection not terribly strong. It doesn't seem to have provoked any sort of correction, nor do I see why it should have; but it also doesn't seem to have provoked any sort of messenger-shooting; it's sitting at +12.
  • "Words as Hidden Inferences": not much substantive disagreement. Nearest I can find is a couple of complaints from user Caledonian2, both of which I think are merely nitpicks.
  • "The Sacred Mundane": user Capla disagrees with EY's statement that when you start with religion and take away the concrete errors of fact etc., all you have left is pointless vagueness. No, Capla says, there's also an urge towards heroic moral goodness, which you don't really find anywhere else.
    • Seems like a reasonable counterpoint. Doesn't seem like it got much attention, which is a shame; I think there could have been an interesting discussion there.
    • "The Sacred Mundane" is one of the posts shown on the "Original Sequences" page in italics to indicate that it's "relatively unimportant". I assume it isn't in RAZ. I doubt that's because of Capla's objection.
  • "Why Truth?": not much substantive disagreement.
  • "Nonperson Predicates": not much substantive disagreement, and this feels like a sufficiently peripheral issue anyway that I couldn't muster much energy to look in detail at the few disagreements there were.
  • "The Proper Use of Doubt": a few people suggest in comments that (contra what EY says in the post) there is potential value even in doubts that never get resolved. I think they're probably right, but again this is a peripheral point (since I think I agree with EY that the main point of doubting a thing you believe is to prompt you to investigate enough that you either stop believing it or confirm your belief in it) on a rather peripheral post.

Not terribly surprisingly (on either your opinions or mine, I think), this random exploration hasn't turned up anything with a credible claim to be a demolition of something important to LW thinking. It also hasn't turned up anything that looks to me like messenger-shooting, or like failing to address important major criticisms, and my guess at this point is that if there are examples of those then finding them is going to need more effort than I want to put in. Especially as you claim you've already got those examples! Please, share some of them with me. 

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-14T21:18:43.980Z · LW · GW

Yup, academics revise and retract things. So, where are Richard Feynman's errata? Show me.

The answer, I think, is that there isn't a single answer to that question. Presumably there are some individual mistakes which he corrected (though I don't know of, e.g., any papers that he retracted) -- the analogues of the individual posts I listed a few of above. But I don't know of any case where he said "whoops, I was completely wrong about something fundamental", and if you open any of his books I don't think you'll find any prominent list of mistakes or anything like that.

As you say, science is noted for having very good practices around admitting and fixing mistakes. Feynman is noted for having been a very good scientist. So show me how he meets your challenge better than Less Wrong does.

No, you haven't "already told me" concrete examples. You've gestured towards a bunch of things you claim have been refuted, but given no details, no links, nothing. You haven't said what was wrong, or what would have been right instead, or who found the alleged mistakes, or how the LW community reacted, or anything.

Unless I missed it, of course. That's always possible. Got a link?

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-14T19:14:50.647Z · LW · GW

I don't understand your first question. I can't tell that I'm not, because (as you say) it's possible that I am. Did I say something that looked like "I know that I am not in any way suffering from confirmation bias"? Because I'm pretty sure I didn't mean to.

Also, not suffering from confirmation bias (in general, or on any particular point) is a difficult sort of thing to get concrete evidence of. In a world where I have no confirmation bias at all regarding some belief of mine, I don't think I would expect to have any evidence of that that I could point to.

What official LW positions would you expect there to be errata for?

(Individual posts on LW sometimes get retracted or corrected or whatever: see e.g. "Industry Matters 2: Partial Retraction" where Sarah Constantin says that a previous post of hers was wrong about a bunch of things, or "Using the universal prior for logical uncertainty (retracted)" where cousin_it proposed something and retracted it when someone found an error. I don't know whether Scott Alexander is LW-adjacent enough to be relevant in your mind, but he has a page of notable mistakes he's made. But it sounds as if you're looking more specifically for cases where the LW community has strongly committed itself to a particular position and then officially decided that that was a mistake. I don't know of any such cases, but it's not obvious to me why there should be any. Where are your errata in that sense? Where are (say) Richard Feynman's? If you have in mind some concrete examples where LW should have errata, they might be interesting.)

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-14T18:49:06.448Z · LW · GW

For what it's worth, I think it's possible that he is, in the relevant sense. As I said elsewhere, the most likely scenario in which EY is wrong about RL being an "idiot" (by which, to repeat, I take it he meant "person obstinately failing to grasp an essential point") is one in which on the relevant point RL is right and EY wrong, in which case EY would indeed be an "idiot".

But let's suppose you're right. What of it? I thought the question here was whether LW people shoot the messenger, not whether my opinions of Eliezer Yudkowsky and Richard Loosemore are perfectly symmetrical.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-14T17:44:05.081Z · LW · GW

Better explained in your opinion, since I'm asking you to give some examples.

Obviously it's possible that you'll think something is a neutral presentation of evidence that something's wrong, and I'll think it's an attack. Or that you'll think it's a watertight refutation of something, and I won't. Etc. Those things could happen if I'm too favourably disposed to the LW community or its ideas, or if you're too unfavourably disposed, or both. In that case, maybe we can look at the details of the specific case and come to some sort of agreement.

If you've already decided in advance that if something's neutral then I'll see it as an attack ... well, then which of us is having trouble with confirmation bias in that scenario?

Comment by gjm on GPT-Augmented Blogging · 2021-09-14T13:28:42.942Z · LW · GW

Just to be clear, the paragraph beginning "GPT-3 is really good at generating titles. It's better than I am." was written by GPT-3? That's pretty funny. (And many of the titles on the list are in fact pretty good titles and probably would get a lot of clicks if posted on LW.)

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-14T13:22:50.175Z · LW · GW

I do indeed consider that your evidence ("Eliezer Yudkowsky called Richard Loosemore an idiot!") is not good enough to establish the claim you were making ("we should expect LW people to shoot the messenger if someone reports a refutation of an idea that's been important here").

However, the point isn't that "very strong evidence is needed", the point is that the evidence you offered is very weak.

(Maybe you disagree and think the evidence you offered is not very weak. If so, maybe you'd like to explain why. I've explained in more detail why I think it such weak evidence, elsewhere in this thread. Your defence of it has mostly amounted to "it is so an ad hominem", as if the criticism had been "TAG says it was an ad hominem but it wasn't"; again, I've explained elsewhere in the thread why I think that entirely misses the point.)

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-14T13:17:39.941Z · LW · GW

Yes, too-strong conventions against nastiness are bad. It doesn't look to me as if we have those here, any more than it looks to me as if there's much of a shooting-the-messenger culture.

I've been asking you for examples to support your claims. I'll give a few to support mine. I'm not (at least, not deliberately) cherry-picking; I'm trying to think of cases where something has come along that someone could with a straight face argue is something like a refutation of something important to LW:

  • Someone wrote an article called "The death of behavioral economics". Behavioural economics is right up LW's street, and has a lot of overlap with the cognitive-bias material in the "Sequences". And the article specifically attacks Kahneman and Tversky (founders of the whole heuristics-and-biases thing), claiming that their work on prospect theory was both incompetent and dishonest. So ... one of the admins of LW posted a link to it saying it was useful, and that linkpost is sitting on a score of +143 right now.
  • Holden Karnofsky of GiveWell took a look at the Singularity Institute (the thing that's now called MIRI) as a possible recipient of donations and wrote a really damning piece about it. Luke Muehlhauser (then the executive director of the SI) and Eliezer Yudkowsky (founder of the SI) responded by ... thanking HK for his comments and agreeing that there was a lot of merit to his criticisms. That really damning piece is currently on a score of +325. (Hazy memory tells me that the highest-voted thing ever on LW is Yvain's "Diseased thinking about disease"; I may or may not be right about that, but at any rate that one's on +393. Just for context.)

and I had a look for comments that were moderately nasty but with some sort of justification, to see how they were received:

  • Consider Valentine's post about enlightenment experiences. Here are some of the things said in response: Ben Pace's comment, saying serious meditation seems likely to be a waste of time, its advocates can't show actual evidence of anything useful, Valentine is quite likely just confused, etc. +60. Said Achmiz's comment, making similar points similarly bluntly. +31. clone of saturn's response to that, calling other things said in the discussion "pretty useless" and "obnoxious". +29. There are plenty more examples in that discussion of frankness-to-the-point-of-rudeness getting (1) positive karma and (2) constructive responses.

Unfortunately, searching for moderately-nasty-but-at-least-kinda-justified comments is difficult because (1) most comments aren't of that kind and (2) it's not the sort of thing that e.g. Google can help very much with. (I did try searching for various negative words but that didn't come up with much.)

But, overall, I repeat that my impression is that the usual LW response to criticism is to take it seriously, and that LW is not so intolerant of negativity as to be a problem. I am willing to be persuaded that I'm wrong, but I'd want to see some actual evidence rather than just a constant tone of indignation that I won't make the leap from "Eliezer called another AI researcher an idiot once" to any broad conclusion about how LW people respond to criticism.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-13T23:34:47.147Z · LW · GW

I think ChristianKl gave one excellent rational reason to treat the two comments differently: all else being equal, being nice improves the quality of subsequent discussion and being nasty makes it worse, so we should apply higher standards to nastiness than to niceness.

Another rational reason to treat them differently would be if one of them is more plausible, given the available evidence, than the other. I've already explained at some length why I think that's the case here. Perhaps others feel the same way. Of course you may disagree, but there is a difference between "no rational reason" and "no reason TAG agrees with".

I have given some reasons why I think my claim more plausible than yours. I'm sorry if you find that opinion none the less "not founded on anything". It seems to me that if we want a foundation firmer than your or my handwaving about what sorts of things the LW community is generally likely to do, we should seek out concrete examples. You implied above that you have several highly-relevant concrete examples ("criticism of the lesswrongian version of Bayes, and the lesswrongian version of Aumann, and the lesswrongian version of Solomonoff, and of the ubiquitous utility function, and the MWI stuff...") where someone has provided a refutation of things widely believed on LW; I don't know what specific criticisms you have in mind, but presumably you do; so let's have some links, so we can see (1) how closely analogous the cases you're thinking of actually were and (2) how the LW community did in fact react.

I'm finding this discussion frustrating because it feels as if every time you refer to something I said you distort it just a little, and then I have to choose between going along with the wrong version and looking nitpicky for pointing out the distortion. On this occasion I'll point out a distortion. I didn't say that your claim "needs" to be supported by evidence. In fact, I literally wrote "You don't have to provide evidence". I did ask the question "what's your evidence?" and claimed that that was a reasonable question. I find it kinda baffling that you think the idea that "what's your evidence?" is a reasonable question is a claim that's not "founded on anything"; it seems to me that that's pretty much always a reasonable question when someone makes an empirical claim. (I think the only exceptions would be where it's already perfectly obvious what the evidence is, or maybe where the claim in question is already the conventional wisdom and no one's offered reasons to think that it might be wrong.)

For the avoidance of doubt, I am not claiming that when that's a reasonable question it's then reasonable e.g. to demand a detailed accounting of the evidence and assume the claim being made is false if the person making it doesn't give one. "I don't have any concrete evidence; it just feels that way to me" could be a perfectly reasonable answer; so could "I don't remember the details, but if you look at X and Y and Z you should find all the evidence you need"; so could "No concrete evidence, but this situation looks just like such-and-such another situation where X happened"; etc.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-13T23:19:06.680Z · LW · GW

The larger point here is that the link between "Eliezer Yudkowsky called Richard Loosemore an idiot" and "People on Less Wrong should be expected to shoot the messenger if someone turns up saying that something many of them believe is false" is incredibly tenuous.

I mean, to make that an actual argument you'd need something like the following steps.

  • EY called RL an idiot.
  • EY did not have sufficient grounds for calling RL an idiot.
  • EY was doing it because RL disagreed with him.
  • EY has/had a general practice of attacking people who disagree with him.
  • Other people on LW should be expected to behave the same way as EY.
  • So if someone comes along expressing disagreement, we should expect people on LW to attack them.

I've been pointing out that the step from the first of those to the second is one that requires some justification, but the same is true of all the others.

So, anyway: you're talking as if you'd said "EY's comment was an ad hominem attack" and I'd said "No it wasn't", but actually neither of those is right. You just quoted EY's comment and implied that it justified your opinion about the LW population generally; and what I said about it wasn't that it wasn't ad hominem. It was a personal attack (I wouldn't use the specific term "ad hominem" because that too strongly suggests the fallacy of argumentum ad hominem, which is when you say "X's claim is wrong because X is an idiot" or something like that, and that isn't what EY was doing; but it was, for sure, a personal attack). I just don't think it's much evidence of a general messenger-shooting tendency on LW, and I think that to make it into evidence of that you'd need to justify each step in (something like) the chain of propositions above, and you haven't made the slightest attempt to do so. And that, not whether it was an ad hominem, is what we are disagreeing about.

Some comments on those steps. First step: Yes, EY certainly called RL an idiot, though I don't think what he meant by it was quite the usual meaning of "idiot" and in particular I think what he meant by it is more compatible with being a professional AI researcher than the usual meaning of "idiot" is; specifically, I think he meant something like "There are fundamental points in the arguments I've been making that RL obstinately fails to grasp, and it seems no amount of discussion will show him the error of his ways". Obstinately failing to grasp a particular point is, alas, a rather common failure mode even of many otherwise very impressive human brains. Note that if EY is wrong about this, the most likely actual situation is that EY is obstinately failing to grasp a (correct) fundamental point being made by RL. So one way or another, a professional AI researcher is an idiot in the relevant sense. So that circumstance is not so extraordinary that when someone claims it's so we should jump to the conclusion that they are being unreasonable.

Second step: Kinda: my understanding is that EY and RL had been around more or less the same argumentative circle many times and made no progress. I think EY would have been clearly justified in saying "either RL is an idiot or I am"; I shall not try to pass judgement on how reasonable it was for him to be confident about which of them was missing something fundamental. Third step: No: I'm pretty sure EY was as forceful as he was because of past history of unproductive discussions with RL, and would likely not have said the same if someone else had raised the same issues as RL did, even though he'd have disagreed with them just as much. Fourth step: Kinda; while I don't think it would be fair to expect EY to attack someone just because they disagreed, I do think he is generally too quick to attack. Fifth step: No, not at all; one person's behaviour is not a reliable predictor of another's. "Oh, but EY is super-high-status around here and everyone admires him!" Well, note for instance that the comment we're discussing is sitting on a score of -23 right now; maybe the LW community admires Eliezer, but they don't seem to admire this particular aspect. Sixth and final step: Not really: as I already said, I don't think EY has a general messenger-shooting policy, so even if the LW community imitated everything he did we would not be justified in expecting them to do that.

Comment by gjm on Review of A Map that Reflects the Territory · 2021-09-13T19:07:02.006Z · LW · GW

There's still a well-defined answer to the question of what the digits mean, and indeed of what they mean as digits of pi; e.g., the hundred-billionth digit of pi is what you get by carrying out a pi-computing algorithm and looking at the hundred-billionth digit of its output. Anyway, no one is memorizing that many digits of pi.

[EDITED to add:] On the other hand, people certainly memorize enough digits of pi that, e.g., an error in the last digit they memorize would make a sub-Planck-length difference to the length of a (euclidean-planar) circle whose diameter is that of the observable universe. (Size of observable universe is tens of billions of light-years; a year is 3x10^7 seconds so that's say 10^18 light-seconds; light travels at 3x10^8 m/s so that's < 10^27 m; I forget just how short the Planck length is but I'm pretty sure it's > 10^-50 m; so 80 digits should be enough, and even I have memorized that many digits of pi (and forgotten many of them again).)
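A minimal worked version of the same back-of-envelope bound, using the deliberately loose numbers above (with the actual Planck length, about 1.6x10^-35 m, a bit over 60 decimal places would already do):

$$
d < 10^{18}\ \text{light-seconds} = 10^{18} \times 3\times 10^{8}\,\text{m} = 3\times 10^{26}\,\text{m} < 10^{27}\,\text{m},
$$
$$
|\Delta\pi| < 10^{-(n-1)} \;\Longrightarrow\; |\Delta C| = |\Delta\pi|\cdot d < 10^{-(n-1)}\cdot 10^{27}\,\text{m} = 10^{28-n}\,\text{m}.
$$

Requiring $10^{28-n}\,\text{m} < 10^{-50}\,\text{m}$ gives $n > 78$, so 80 memorized decimal places are comfortably enough.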

Comment by gjm on Review of A Map that Reflects the Territory · 2021-09-13T19:05:17.491Z · LW · GW

Pedantic note: it's "gjm", not "gym"; they're my initials.

Comment by gjm on Review of A Map that Reflects the Territory · 2021-09-12T21:56:19.721Z · LW · GW

A few clarifications on the "new words" section:

  • Mediocristan and Extremistan are terms coined (I think) by Nassim Nicholas Taleb, in his book The Black Swan. They don't exactly mean what you say they do; the idea is that Mediocristan is an imaginary country where things have thin-tailed (e.g., normal) distributions and so differences are usually modest in size, and Extremistan is an imaginary country where things have fat-tailed (e.g., power-law) distributions and so differences are sometimes huge, and then you say something belongs to Mediocristan or Extremistan depending on whether the associated distributions are thin-tailed or fat-tailed.
  • The ones from EY's "Local Validity ..." post are, I think, just made up for the occasion and he intends it to be obvious from context at least roughly what they mean.
    • "Code of the Light": how good, principled, rational, nice, honest people behave.
    • "Straw authoritarians": strawman-authoritarians: that is, authoritarians who are transparently stupid and malicious, rather than whatever the most defensible sort of authoritarian might be.
  • The "memetic collapse" thing is a link, to a (spit) Facebook post by EY where he says this: "The Internet is selecting harder on a larger population of ideas, and sanity falls off the selective frontier once you select hard enough [...] the Internet, and maybe television before it, selected much more harshly from a much wider field of memes; and also allowed tailoring content more narrowly to narrower audiences [...] We're looking at a collapse of reference to expertise because deferring to expertise costs a couple of hedons compared to being told that all your intuitions are perfectly right, and at the harsh selective frontier there's no room for that. We're looking at a collapse of interaction between bubbles because there used to be just a few newspapers serving all the bubbles; and now that the bubbles have separated there's little incentive to show people how to be fair in their judgment of ideas for other bubbles [...] It seems plausible to me that *basic* software for intelligent functioning is being damaged by this hypercompetition [...] If you look at how some bubbles are talking and thinking now, "intellectually feral children" doesn't seem like entirely inappropriate language". In other words: changes in how communication works have enabled processes that systematically make us stupider, less tolerant, etc., and also get off my lawn.
  • "Glomarization" does yield search-engine hits when you spell it right; one of them is a wikipedia page entitled "Glomar response" which explains it pretty clearly.
  • I don't think memorizing the Bible or the digits of pi is a great example of "counterfeit understanding"; some of the people who memorize the Bible have a pretty good understanding of what it means, and people who memorize the digits of pi generally (I think) understand what digits mean. One of the best expositions of the counterfeit-understanding thing I know of comes from Richard Feynman writing about the terrible state of science education in Brazil at one time (I don't know whether it's improved since then); see e.g. here.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-12T18:30:19.967Z · LW · GW

(Note: you posted two duplicate comments; I've voted this one up and the other one down so that there's a clear answer to the question "which one is canonical?". Neither the upvote nor the downvote indicates any particular view of the merits of the comment.)

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-12T18:29:23.691Z · LW · GW

Well, maybe he is. If you're going to use "Brown accused Smith of beating his wife" as evidence that Brown is terrible and so is everyone associated with him, it seems like some evidence that Brown's wrong would be called for. (And saying "Smith is a bishop" would not generally be considered sufficient evidence, even though presumably most bishops don't beat their wives.)

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-12T00:04:03.090Z · LW · GW

If your original comment had said "at least some", I would have found it more reasonable.

So, anyway, it seems that you think that "the lesswrongian version of Bayes", and likewise of Aumann, and Solomonoff, and "the ubiquitous utility function", and "the MWI stuff", have all been outright refuted, and the denizens of Less Wrong have responded by shooting the messenger. (At least, I don't know how else to interpret your second paragraph.)

Could you maybe give a couple of links, so that I can see these refutations and this messenger-shooting?

(I hold no particular brief for "the lesswrong version of" any of those things, not least because I'm not sure exactly what it is in each case. Something more concrete might help to clarify that, too.)

I think ChristianKl is correct to say that lazy praise is better (because less likely to provoke defensiveness, acrimony, etc.) than lazy insult. I also think "LW people will respond to an interesting mathematical question about the foundations of decision theory by investigating it" is a more reasonable guess a priori than "LW people will respond to ... by attacking the person who raises it because it threatens their beliefs". Of course the latter could in fact be a better prediction than the former, if e.g. there were convincing prior examples; but that's why "what's your evidence?" is a reasonable question.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-10T03:43:37.194Z · LW · GW

I entirely agree that it's possible that someone might come along with something that is in fact a refutation of the idea that a reasonable set of requirements for rational thinking implies doing something close to probability-plus-Bayesian-updating, but that some people who are attached to that idea don't see it as a refutation.

I'm not sure whether you think that I'm denying that (and that I'm arguing that if someone comes along with something that is in fact a refutation, everyone on LW will necessarily recognize it as such), or whether you think it's an issue that hasn't occurred to me; neither is the case. But my guess -- which is only a guess, and I'm not sure what concrete evidence one could possibly have for it -- is that in most such scenarios at least some LWers would be (1) interested and (2) not dismissive.

I guess we could get some evidence by looking at how similar things have been treated here. The difficulty is that so far as I can tell there hasn't been anything that quite matches. So e.g. there's this business about Halpern's counterexample to Cox; this seems to me like it's a technical issue, to be addressed by tweaking the details of the hypotheses, and the counterexample is rather far removed from the realities we care about. The reaction here has been much more "meh" than "kill the heretic", so far as I can tell. There's the fact that some bits of the heuristics-and-biases stuff that e.g. the Sequences talk a lot about now seem doubtful because it turns out that psychology is hard and lots of studies are wrong (or, in some cases, outright fraudulent); but I don't think much of importance hangs on exactly what cognitive biases humans have, and in any case this is a thing that some LW types have written about, in what doesn't look to me at all a shoot-the-messenger sort of way.

Maybe you have a few concrete examples of messenger-shooting that are better explained as hostile reaction to evidence of being wrong rather than as hostile reaction to actual attack? (The qualification is because if you come here and say "hahaha, you're all morons; here's my refutation of one of your core ideas" then, indeed, you will likely get a hostile response, but that's not messenger-shooting as I would understand it.)

I heartily agree that having epistemic double standards is very bad. I have the impression that your comment is intended as an accusation of epistemic double standards, but I don't know whom you're accusing of exactly what epistemic double standards. Care to be more specific?

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-10T03:22:47.433Z · LW · GW

You don't have to provide evidence. I'm asking you to because it would help me figure out how much truth there is in your accusation. You might be able to give reasons that don't exactly take the form of evidence (in the usual sense of "evidence"), which might also be informative. If you can't or won't provide evidence, I'm threatening no adverse consequence other than that I won't find your claim convincing.

If the fact that my original guess at what LW folks would do in a particular situation isn't backed by anything more than my feeling that a lot of them would find the resulting mathematical and/or philosophical questions fun to think about means that you don't find my claim convincing, fair enough.

Anything is not necessarily true.

For sure. But my objection wasn't "this is not necessarily true", so I'm not sure why that's relevant.

... Maybe I need to say explicitly that when I say that it's "possible" to be both an AI researcher and what I take Eliezer to have meant by an idiot, I don't merely mean that it's not a logical impossibility, or that it's not precluded by the laws of physics; I mean that, alas, foolishness is to be found pretty much everywhere, and it's not tremendously unlikely that a given AI researcher is (in the relevant sense) an idiot. (Again, I agree that AI researchers are less likely to be idiots than, say, randomly chosen people.)

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-09T16:44:02.795Z · LW · GW

Oh, you mean my claim that if someone comes along with an outright refutation of the idea that belief-structures ought to be probability-like then LWers would be excitedly trying to figure out what they could look like instead?

I'm not, for the avoidance of doubt, making any claims that LWers have particularly great intellectual integrity (maybe they do, maybe not) -- it's just that this seems like the sort of question that a lot of LWers are very interested in.

I don't understand what you mean by "That's an objection that could be made to anything". You made a claim and offered what purported to be support for it; it seems to me that the purported support is a long way from actually supporting the claim. That's an objection that can be made to any case where someone claims something and offers only very weak evidence in support of it. I don't see what's wrong with that.

I'm not making any general claim that "lesswrong will abandon long held beliefs quickly and willingly". I don't think I said anything even slightly resembling that. What I think is that some particular sorts of challenge to LW traditions would likely be very interesting to a bunch of LWers and they'd likely want to investigate.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-07T01:33:37.458Z · LW · GW

My evidence for what?

Yes, I agree that AI researchers are less often idiots than randomly chosen people. It's still possible to be both. For the avoidance of doubt, I'm not claiming that Loosemore is an idiot (even in the rather loose sense that I think EY meant); maybe he is, maybe he isn't. The possibility that he isn't is just one of the several degrees of separation between your offered evidence (EY called someone an idiot once) and the claim it seems to be intended to support (LW readers in general will shoot the messenger if someone turns up saying something that challenges their opinions).

Comment by gjm on Why there are no online CFAR workshops? · 2021-09-06T22:17:24.256Z · LW · GW

I would also find that a little alarming. How alarming would depend on details. Is this meditation retreat basically just an opportunity for quiet largely-isolated meditation? (In that case, saying "keep the rest of the world at arm's length while you're doing this" seems eminently reasonable.) Is it also going to be filled with, for want of a better word, indoctrination? (In that case, not so reasonable.) Is the given reason something like "to avoid distractions"? (That seems very reasonable.) Or is it something more like "you are better off not being in contact with people whose opinions might differ"? (That would be alarming.)

Your description of what CFAR said (which I appreciate I may be misunderstanding, or you may be reporting in good faith but with less than 100% accuracy) seems to me like it's leaning in the more-alarming direction.

If I take it exactly at face value, it's not so alarming. But what you describe seems like exactly the sort of thing I would expect them to say in a world where their purposes are a bit nefarious ("attempt to rewrite participants' values to bring their goals nearer ours", as opposed to "help participants reflect on their own goals and achieve them"). This is a concern it's worth having because it seems like this sort of slight nefariousness is something of an attracting state for seminars of this kind.

Comment by gjm on Why there are no online CFAR workshops? · 2021-09-06T10:01:44.470Z · LW · GW

For me, concern about "taking a workshop while staying in a house with people who aren't actively promoting the right kind of curiosity" rings really loud alarm bells.

Comment by gjm on [deleted post] 2021-09-06T09:56:08.636Z

I don't think that whether a post should be on the frontpage should be much influenced by what's being said in its comments by a third party.

I don't think I think we should be worried that something's going to do harm by spreading less-than-perfectly-accurate recollections when it says up front "These notes are not verbatim [...] While note-taking I also tended to miss parts of further answers, so this is far from complete and might also not be 100% accurate. Corrections welcome.". Lanrian's alternate versions don't seem so different to me as to make what p.b. wrote amount to "false gossip and rumors".

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-06T09:48:27.422Z · LW · GW

So your evidence that "LW readers will shoot the messenger" is that one time Eliezer Yudkowsky called a professional AI researcher a "known permanent idiot"?

This seems very unconvincing. (1) There is no reason why someone couldn't be both an idiot and a professional AI researcher. (I suspect that Loosemore thinks Yudkowsky is an idiot, and Yudkowsky is also a professional AI researcher, albeit of a somewhat different sort. If either of them is right, then a professional AI researcher is an idiot.) (2) "One leading LW person once called one other person an idiot" isn't much evidence of a general messenger-shooting tendency, even if the evaluation of that other person as an idiot was 100% wrong.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-05T20:15:26.122Z · LW · GW

I didn't downvote you and don't claim any unique insight into the motives of whoever did, but I know I did think "that seems a low-effort low-quality comment", not because I think what you say is untrue (I don't know whether it is or not) but because you made a broad accusation and didn't provide any evidence for it. So far as I can tell, the only evidence you're offering now is that your comment got downvoted, which (see above) has plausible explanations other than "because LW readers will shoot the messenger".

The obvious candidate for "the messenger" here would be Haziq Muhammad, but I just checked and every one of his posts and comments has a positive karma score. This doesn't look like messenger-shooting to me.

What am I (in your opinion) missing here?

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-05T03:14:41.409Z · LW · GW

Why?

Comment by gjm on Hope and False Hope · 2021-09-04T11:58:57.370Z · LW · GW

This, or something like it, is also one reason why the sorts of hopes and fears about AI that are common on LW are not so common in the rest of the world. "These people say that technological developments that might be just around the corner have the potential to reshape the world completely, and therefore we need to sink a lot of time and effort and money into worrying about 'AI safety'; well, we've heard that sort of thing before. We've learned not to dedicate ourselves to millenarian religions, and this is just the same thing in fancy dress."

It's a very sensible heuristic. It will fail catastrophically any time there is an outrageously large threat or opportunity that isn't easy to see. (Arguably it's been doing so over the last few decades with climate change. Arguably something similar is at work in people who refuse vaccination on the grounds that COVID-19 isn't so bad as everyone says it is.) Not using it will fail pretty badly any time there isn't an outrageously large threat or opportunity, but there's something that can be plausibly presented as one. I don't know of any approach actually usable by a majority of people that doesn't suffer one or the other of those failure modes.

Comment by gjm on Value is Fragile · 2021-09-04T11:07:57.856Z · LW · GW

When I consider this possible universe, I find that I do attach some value to the welfare of these sapient octopuses, and I do consider that it's a universe that contains plenty of value. (It depends somewhat on whether they have, as well as values resembling ours, something I can recognize as welfare; see my last couple of paragraphs above.) If there were a magic switch I could control, where one setting is "humans go extinct, no other advanced civilization ever exists" and the other is "humans go extinct, the sapient octopus civilization arises", I would definitely put it on the second setting, and if sufficiently convinced that the switch would really do what it says then I think I would pay a nonzero amount, or put up with nonzero effort or inconvenience, to put it there.

Of course my values are mine and your values are yours, and if we disagree there may be no way for either of us to persuade the other. But I'll at least try to explain why I feel the way I do. (So far as I can; introspection is difficult and unreliable.)

First, consider two possible futures. 1: Humanity continues for millions of years, substantially unchanged from how we are now. (I take it we agree that in this case the future universe contains much of value.) 2: Humanity continues for millions of years, gradually evolving (in the Darwinian sense or otherwise) but always somewhat resembling us, and always retaining something like our values. It seems to me that here, too, the future universe contains much of value.

The sapient octopuses, I am taking it, do somewhat resemble us and have something like our values. Perhaps as much so as our descendants in possible future 2. So why should I care much less about them? I can see only one plausible reason: because our descendants are, in fact, our descendants: they are biologically related to us. How plausible is that reason?

Possible future 3: at some point in that future history of humanity, our descendants decide to upload themselves into computers and continue their lives virtually. Possible future 4: at some point in that virtual existence they decide they'd like to be embodied again, and arrange for it to happen. Their new bodies are enough like original-human bodies for them to feel at home in them, but they use some freshly-invented genetic material rather than DNA, and many of the internal organs are differently designed.

I don't find that the loss of biological continuity in these possible futures makes me not care about the welfare of our kinda-sorta-descendants there. I don't see any reason why it should, either. So if I should care much less about the octopuses, what matters must be some more generalized sort of continuity: the future-kinda-humans are our "causal descendants" or something, even if not our biological descendants.

At that point I think I stop; I can see how someone might find that relationship super-important, and care about "causal descendants" but not about other beings, physically and mentally indistinguishable, who happen not to be our "causal descendants"; but I don't myself feel much inclination to see that as super-important, and I don't see any plausible way to change anyone's mind on the matter by argument.

Comment by gjm on Is LessWrong dead without Cox’s theorem? · 2021-09-04T10:44:27.692Z · LW · GW

No, Less Wrong is probably not dead without Cox's theorem, for several reasons.

It might turn out that the way Cox's theorem is wrong is that the requirements it imposes for a minimally-reasonable belief system need strengthening, but in ways that we would regard as reasonable. In that case there would still be a theorem along the lines of "any reasonable way of structuring your beliefs is equivalent to probability theory with Bayesian updates".

Or it might turn out that there are non-probabilistic belief structures that are good, but that they can be approximated arbitrarily closely with probabilistic ones. In that case, again, the LW approach would be fine.

Or it might turn out probabilistic belief structures are best so long as the actual world isn't too crazy. (Maybe there are possible worlds where some malign entity is manipulating the evidence you get to see for particular goals, and in some such worlds probabilistic belief structures are bad somehow.) In that case, we might know that either the LW approach is fine or the world is weird in a way we don't have any good way of dealing with.

Alternatively, it might happen that Cox's theorem is wronger than that; that there are human-compatible belief structures that are, in plausible actual worlds, genuinely substantially different from probabilities-and-Bayesian-updates. Would LW be dead then? Not necessarily.

It might turn out that all we have is an existence theorem and we have no idea what those other belief structures might be. Until such time as we figure them out, probability-and-Bayes would still be the best we know how to do. (In this case I would expect at least some LessWrongers to be working excitedly on trying to figure out what other belief structures might work well.)

It might turn out that for some reason the non-probabilistic belief structures aren't interesting to us. (E.g., maybe there are exceptions that in some sense amount to giving up and saying "I dunno" to everything.) In that case, again, we might need to adjust our ideas a bit but I would expect most of them to survive.

Suppose none of those things is the case: Cox's theorem is badly, badly wrong; there are other quite different ways in which beliefs can be organized and updated, that are feasible for humans to practice and don't look at all like probabilities+Bayes, and that so far as we can see work just as well or better. That would be super-exciting news.

It might require a lot of revision of ideas that have been taken for granted here. I would expect LessWrongers to be working excitedly on figuring out what things need how much revision (or discarding completely). The final result might be that LessWrong is dead, at least in the sense that the ways of thinking that have been common here all turn out to be very badly suboptimal and the right thing is to all convert to Mormonism or something.

But I think a much more likely outcome in this scenario is that we find an actually-correct analogue of Cox's theorem, which tells us different things about what sorts of thinking might be reasonable, and it still involves (for instance) quantifying our degrees of belief somehow, and updating them in the light of new evidence, and applying logical reasoning, and being aware of our own fallibility. We might need to change a lot of things, but it seems pretty likely to me that the community would survive and still be recognizably Less Wrong.

Let me put it all less precisely but more pithily: Imagine some fundamental upheaval in our understanding of mathematics and/or physics. ZF set theory is inconsistent! The ultimate structure of the physical world is quite unlike the GR-and-QM muddle we're currently working with! This would be exciting but it wouldn't make bridges fall down or computers stop computing, and people interested in applying mathematics to reality would go on doing so in something like the same ways as at present. Errors in Cox's theorem are definitely no more radical than that.

Comment by gjm on Value is Fragile · 2021-09-04T09:21:58.112Z · LW · GW

Having disagreed with Zack many times in the past, I'm pleased to be able to say: I think this is absolutely right (except that I think I'd replace "pleasure and pain" with "something pleasure-like and something pain-like"); that bit of "Value is Fragile" is surely wrong, and the intuitions that drove the relevant bits of "Three Worlds Collide" are more reflective of how actual human value systems work.

I think I'd want to distinguish two related but separate issues here. (1) Should we expect that (some) other intelligent agents are things whose welfare we value? (Whether they are might depend on whether we think they have internal mechanisms that resemble our mechanisms of pleasure, pain, hope, fear, etc.) (2) Should we expect that (some) other intelligent agents share some of our values? (Whether they do would depend on how far the structure of their thinking has converged with ours.) If there are other intelligent species out there, then whether they're "animal-like organisms that feel pleasure and pain" addresses #1 and whether "the idealized values of those organisms are not a random utility function" addresses #2.

(Of course, how much we care about their welfare may depend on how much we think they share our values, for internalized-game-theory-ish reasons. And presumably they're likely to share more of our values if their motivational systems work similarly to ours. So the issues are not only related but interdependent.)

Comment by gjm on Covid 9/2: Long Covid Analysis · 2021-09-03T09:16:39.272Z · LW · GW

Is the Wordpress editor so unusably bad that it's unbearable (like >$300/year of annoyance) to use it just for swapping in the images? (That is: do what you do now, except that after pasting it in from the Google doc, you go through and do whatever you need to do to change each image from a hotlink to googleusercontent.com into a thing hosted by Wordpress -- which in an ideal world might just mean copying each image and pasting it on top of itself, though no doubt Wordpress have found a way to make it much more painful than that.)

Comment by gjm on Lakshmi's Magic Rope: An Intuitive Explanation of Ramanujan Primes · 2021-09-02T17:58:10.081Z · LW · GW

It looks as if jimv got there before me, in fact. (I hadn't seen his comment when I wrote mine. Sometimes some time elapses between when I open a page and when I actually read its contents, and sometimes I forget that I ought to refresh before commenting...)

Comment by gjm on Lakshmi's Magic Rope: An Intuitive Explanation of Ramanujan Primes · 2021-09-02T17:12:30.123Z · LW · GW

A correction and a philosophical quibble.

The correction: You say

Any whole number that's not a prime is the sum of two primes

but this is wholly false; for instance, the number 123 is not a prime (it's 3x41) and is not the sum of two primes (it can't be the sum of two odd primes since it's odd itself; the only even prime is 2, and 123 - 2 = 121, which is a square and therefore not a prime).

There's a famous conjecture, Goldbach's conjecture (not proven as yet, though there was some exciting progress a couple of years back), saying that any even number from 4 up is the sum of two primes, but that's not the same thing.
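(For anyone who wants to check this sort of thing for themselves, here is a quick brute-force sketch -- nothing clever, just naive trial division, and only meant as an illustration -- which confirms 123 and turns up the other small counterexamples to the quoted claim:)

```python
def is_prime(n):
    """Naive trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_primes(n):
    """True if n = p + q for some primes p, q (not necessarily distinct)."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

# Composite numbers up to 200 that are NOT the sum of two primes:
counterexamples = [n for n in range(4, 201)
                   if not is_prime(n) and not is_sum_of_two_primes(n)]
print(counterexamples)  # starts 27, 35, 51, ...; includes 123; all odd
```

(All the counterexamples it finds are odd, which is just the point above: an odd number can only be 2 plus an odd number, so it's a sum of two primes exactly when subtracting 2 leaves a prime; and the even composites in this range all satisfy Goldbach's conjecture.)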

The philosophical quibble: Then you say, after talking a bit more about making integers by adding up primes,

That's why primes are a little like the chemical elements

which seems quite wrong to me; there is an analogy between the chemical elements and the prime numbers, but it's about multiplication (every positive integer is the product of primes in essentially one way), not about addition (typically numbers are the sums of primes in many many ways).
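(To make the multiplication-versus-addition contrast concrete, here is another small illustrative sketch, again making no claims to efficiency: it factorizes a number, which can be done in essentially only one way, and counts the ways the same number can be written as an unordered sum of primes, of which there are a great many:)

```python
def prime_factorization(n):
    """The multiset of prime factors of n (unique, by the fundamental theorem of arithmetic)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def count_prime_partitions(n):
    """Number of ways to write n as an unordered sum of primes."""
    primes = [p for p in range(2, n + 1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    ways = [1] + [0] * n              # ways[k] = partitions of k using the primes seen so far
    for p in primes:
        for k in range(p, n + 1):
            ways[k] += ways[k - p]
    return ways[n]

print(prime_factorization(100))       # [2, 2, 5, 5]: the one and only factorization
print(count_prime_partitions(100))    # many thousands of ways of writing 100 as a sum of primes
```

So the elements-and-compounds analogy lines up with factorization, not with addition.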

Comment by gjm on Guide to Warwick, New York · 2021-09-01T22:47:06.241Z · LW · GW

I think what makes it apropos is that it's one of Zvi's blog posts, and Zvi is one of the people whose blog posts get auto-syndicated here on LW because they are sufficiently often LW-relevant. See also: jefftk, who sometimes posts very-LW-relevant things and sometimes posts "here's a cute thing my children did" or "here's some information about contra dancing", but all of it appears here.

(For the avoidance of doubt, none of the above is complaint or criticism; I too enjoy reading the not-very-LW-ish stuff written by these people and don't think it does any particular harm for it to appear on LW.)

[EDITED to add:] When I wrote the above, I didn't see that gwern had also replied saying something similar. My apologies for the redundancy. I don't think it's so redundant that I need to delete it :-).

Comment by gjm on Quote Quiz · 2021-08-31T01:16:32.375Z · LW · GW

For what it's worth, that was my immediate guess on reading the obfuscated quotation.

Comment by gjm on Death by a Single Cut · 2021-08-29T11:37:47.258Z · LW · GW

It is not perfectly clear whether this post is describing what the actual structure of your beliefs should be, or how you should present them. I think I disagree strongly in both cases.

Suppose I believe that my friend Albert has very recently converted to Christianity, because (1) Albert has told me so and (2) our mutual friend Beth tells me he told her so. These are both good evidence. Neither is conclusive; sometimes people make jokes, for instance. Neither is a crux; if it turned out, say, that I had merely had an unusually vivid dream in which Albert told me of his conversion, I would become less sure that it had actually happened but Beth's testimony would still make me think it probably had.

In this situation I could, I guess, say that I believe in Albert's conversion "because Albert and Beth both told me it was so". But that's purely artificial; one could do that with any set of justifications for a belief. And merely refuting that compound claim would not suffice to change my opinion; removing either Albert's or Beth's testimony, but not the other, would falsify "both told me", but I would still believe in the conversion.

This is unusual mostly in being an artificially clean and simple case. I think most beliefs, or at any rate a large fraction, are like this. A thing affects the world in several ways, many of which may provide separate channels by which evidence reaches you.

This is true even in what's maybe the cruxiest of all disciplines, pure mathematics. I believe that there are infinitely many prime numbers because of Euclid's neat induction proof. But mathematics is subtle and sometimes people make mistakes, and maybe you could convince me that there's a long-standing mistake in that proof that somehow I and every other mathematician had missed. But then I would probably (it might depend on the nature of the mistake) still believe that there are infinitely many prime numbers, because there are other quite different proofs, like the one about the divergence of the sum of the reciprocals of the primes, or the one that proves an actual lower bound on how many primes there are up to a given size, or the various proofs of the Prime Number Theorem. To some extent I would believe it merely because of the empirical evidence of the density of prime numbers, which (unlike, say, the distribution of zeros of the zeta function, the empirical evidence of which is also evidence that there are infinitely many primes) seems to be of a very robust kind. To make me change my mind about there being infinitely many prime numbers, the proposition you would have to refute is something like "mathematics is not all bullshit".
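(For anyone who hasn't seen it, the Euclid argument is short enough to give here: given any finite list of primes $p_1, \dots, p_n$, form

$$N = p_1 p_2 \cdots p_n + 1.$$

Some prime divides $N$, but none of the $p_i$ does, since each leaves remainder $1$; so no finite list contains every prime, and by induction there are more than $n$ primes for every $n$.)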

(Sometimes a thing in pure mathematics has only a single known proof, or all the known proofs work in basically the same way. In that case, there may be an actual crux. But for theorems people actually care about this state of affairs often doesn't last; other independent proofs may be found.)

Outside mathematics things are less often cruxy, and I think usually sincerely so.

Finding cruxes is a useful technique, but there is not the slightest guarantee that there will be one to find.

Perhaps one should present one's beliefs cruxily even when they aren't actually cruxy, either in order to give others the best chance of presenting mind-changing evidence or to look open-minded? I don't think so; if your beliefs are not actually cruxy then lying about them will make it less likely that your mind gets changed when the evidence doesn't really support your current opinion, and if you get caught it will be bad for your reputation.

Comment by gjm on Is top-down veganism unethical? · 2021-08-24T03:31:50.139Z · LW · GW

I think the actual question is somewhere intermediate between those two. (Developers of brainless chickens might not be willing to wait a few years before seeing sales; supermarkets might not be willing to adopt brainless chicken meat widely before seeing evidence that plenty of people would buy it; etc.)

But I think the answer even to your second question is no, for the same reasons I already gave; I don't think those things will change easily.

Comment by gjm on The Death of Behavioral Economics · 2021-08-22T22:55:21.151Z · LW · GW

Link to pre-publication (but presumably near-identical) version of the 2018 paper: https://ie.technion.ac.il/~yeldad/Y2018.pdf.

Comment by gjm on Is top-down veganism unethical? · 2021-08-22T22:44:41.447Z · LW · GW

I think the top-down approach is futile, because I bet that only a small fraction of vegetarians and vegans will be willing to eat (say) anencephalous chicken meat, which means there's unlikely to be a viable market for it.

Two reasons for this:

1. When you make it a habit not to eat some variety of food, especially if you're doing it for some sort of moral reasons, you will almost certainly come to associate that variety of food with moral disapproval, disgust, etc. The bits of your brain that learn these things are not subtle enough to make you feel those things in the presence of a plate of ordinary chicken meat but not in the face of an identical-looking plate of anencephalous chicken meat.

2. Just how much do you trust meat producers, anyway? Especially if you are vegetarian or vegan? If someone puts a plate of chicken meat in front of you and says "we guarantee that this was made from chickens genetically engineered not to feel suffering", are you going to be sure they're neither lying nor mistaken? That no one else down the line was lying or mistaken, even though in many cases there's strong motivation for them to think (or say they think) that no suffering was involved even if they don't really have good evidence for that, even though anencephalous chicken meat looks exactly the same as ordinary chicken meat? I don't think it would take much doubt to make a typical vegetarian or vegan unwilling to eat it.

Comment by gjm on Is top-down veganism unethical? · 2021-08-22T22:36:19.541Z · LW · GW

I think there's space between "eat only meals of Michelin-star quality" and "frequently eat at fast-food chains or cook from low-quality meat". In particular, I believe myself to live in that space; I cook my own meals, am competent but not at Michelin-star level, and never eat at fast-food joints.

I don't know how common this is, but I doubt I'm the only one.

Also, I claim the following is a coherent position and suspect that quite a few hold it: "It may well be true that in some sense the best vegan pseudomeat is of higher quality than most fast food burgers. However, it happens that I like fast food burgers, or at least some of them, and vegan pseudomeat burgers are less appealing to me whatever 'quality' they are credibly alleged to have in the abstract."

(Having said all of which, I am pretty sure all of us have sub-delicious meals fairly frequently, sometimes even by choice, and I've had delicious vegan food as well as delicious carnivore food, and I think "animal suffering is a big enough deal to outweigh my preference for delicious food" is a perfectly reasonable position that doesn't merit mockery.)

Comment by gjm on Factors of mental and physical abilities - a statistical analysis · 2021-08-18T14:15:52.012Z · LW · GW

(You seem to have put your comments in the quote-block as well as the thing actually being quoted.)

Since immediately after the bit you quote OP said:

No. A perfect fit would only mean that, across a population, a single number would describe how people do on tests (except for the "noise"). It does not mean that number causes test performance to be correlated.

it doesn't seem to me necessary to inform them that "determines" implies causation or that factor analysis doesn't identify what causes what.

(Entirely unfairly, I'm amused by the fact that you write '"Determines" is a causal word' and then in the very next sentence use the word "determine" in a non-causal way. Unfairly because all that's happening is that "determine" means multiple things, and OP's usage does indeed seem to have been causal. But it may be worth noting that if the model were perfect, then indeed g would "determine how good we are at thinking" in the same sense as that in which factor analysis doesn't "determine causality for you" but one might have imagined it doing so.)
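Tangentially: if anyone wants to see concretely what "a single number describes how people do on tests, with nothing said about causation" looks like, here is a tiny simulation sketch. The loadings are made up purely for illustration; the point is only that a one-factor model reproduces the pattern of correlations, while the arithmetic itself is silent about what causes what.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 10_000

# Made-up one-factor model: one latent number per person, plus independent noise per test.
loadings = np.array([0.8, 0.7, 0.6, 0.5])       # how strongly each test tracks the single factor
latent = rng.standard_normal(n_people)           # the single number per person
noise_sd = np.sqrt(1 - loadings ** 2)            # chosen so each test score has unit variance
scores = np.outer(latent, loadings) + rng.standard_normal((n_people, 4)) * noise_sd

# Off-diagonal correlations come out close to loadings[i] * loadings[j], which is exactly
# the structure a one-factor analysis recovers; the model says nothing about causation.
print(np.round(np.corrcoef(scores, rowvar=False), 2))
print(np.round(np.outer(loadings, loadings), 2))  # compare the off-diagonal entries
```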

Comment by gjm on Power vs Precision · 2021-08-17T21:21:55.667Z · LW · GW

OK, yes: I agree that that is a possible distinction and that someone could believe both those things. And, duh, if I'd read what you wrote more carefully then I would have understood that that was what you meant. ("... because when giving advice they tend to model the other person as virtuous enough to overcome the temptation to stay out in the sunlight longer.") My apologies.

Comment by gjm on Power vs Precision · 2021-08-17T18:33:10.094Z · LW · GW

I'm not sure what distinction you're making when you say someone might believe both of

  • putting on sunscreen has a statistical tendency to make people stay out in the sun longer, which is net negative wrt skin cancer
  • all else being equal, putting on sunscreen is net positive wrt skin cancer

If the first means only that being out in the sun longer is negative then of course it's easy to believe both of those, but then "net negative" is entirely the wrong term and no one would describe the situation by saying anything like "wearing sunscreen actually tends to increase risk of skin cancer".

If the first means that the benefit of wearing sunscreen and the harm of staying in the sun longer combine to make something net negative, then "net negative" is a good term for that and "tends to increase risk" is fine, but then I don't understand how that doesn't flatly contradict the second proposition.
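(To make the second reading concrete, with entirely made-up numbers: suppose sunscreen cuts the damage done per hour in the sun to 40% of baseline, but wearers respond by staying out three times as long. Then the cumulative damage is

$$3 \times 0.4 = 1.2$$

times what it would have been, which is indeed net negative despite the per-hour benefit -- and that is the reading under which "tends to increase risk of skin cancer" would be an accurate description.)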

What am I missing?