"Future of Go" summit with AlphaGo 2017-04-10T11:10:40.249Z · score: 3 (4 votes)
Buying happiness 2016-06-16T17:08:53.802Z · score: 37 (40 votes)
AlphaGo versus Lee Sedol 2016-03-09T12:22:53.237Z · score: 19 (19 votes)
[LINK] "The current state of machine intelligence" 2015-12-16T15:22:26.596Z · score: 3 (4 votes)
[LINK] Scott Aaronson: Common knowledge and Aumann's agreement theorem 2015-08-17T08:41:45.179Z · score: 15 (15 votes)
Group Rationality Diary, March 22 to April 4 2015-03-23T12:17:27.193Z · score: 6 (7 votes)
Group Rationality Diary, March 1-21 2015-03-06T15:29:01.325Z · score: 4 (5 votes)
Open thread, September 15-21, 2014 2014-09-15T12:24:53.165Z · score: 6 (7 votes)
Proportional Giving 2014-03-02T21:09:07.597Z · score: 6 (14 votes)
A few remarks about mass-downvoting 2014-02-13T17:06:43.216Z · score: 27 (44 votes)
[Link] False memories of fabricated political events 2013-02-10T22:25:15.535Z · score: 17 (20 votes)
[LINK] Breaking the illusion of understanding 2012-10-26T23:09:25.790Z · score: 19 (20 votes)
The Problem of Thinking Too Much [LINK] 2012-04-27T14:31:26.552Z · score: 7 (11 votes)
General textbook comparison thread 2011-08-26T13:27:35.095Z · score: 9 (10 votes)
Harry Potter and the Methods of Rationality discussion thread, part 4 2010-10-07T21:12:58.038Z · score: 5 (7 votes)
The uniquely awful example of theism 2009-04-10T00:30:08.149Z · score: 38 (48 votes)
Voting etiquette 2009-04-05T14:28:31.031Z · score: 10 (16 votes)
Open Thread: April 2009 2009-04-03T13:57:49.099Z · score: 5 (6 votes)


Comment by gjm on 2018 Review: Voting Results! · 2020-01-24T18:05:39.100Z · score: 4 (2 votes) · LW · GW

Ah, OK. I'm convinced :-).

Comment by gjm on 2018 Review: Voting Results! · 2020-01-24T13:09:14.956Z · score: 16 (6 votes) · LW · GW

So one user spent 465 of their 500 available votes to downvote "Realism about Rationality".

I wonder whether that reflects exceptionally strong dislike of that post, or whether it means that they voted "No" on that and nothing on anything else, and then the -30 is just what the quadratic-vote-allocator turned that into.

I suspect the latter, and further suspect that whoever it was might not have wanted their vote interpreted quite that way. (Not with much confidence in either case.)

If a similar system is used on future occasions, it might be a good idea to limit how strong votes are made for users who don't cast many votes. Of course you should be able to spend your whole budget on downvoting one thing you really hate, but you should have to do it deliberately and consciously.

Comment by gjm on Whipped Cream vs Fancy Butter · 2020-01-21T16:40:25.749Z · score: 4 (2 votes) · LW · GW

Fair. (Apart from the bit about having them simultaneously.) I didn't think of that because I wouldn't generally eat toast with nothing on it but butter.

Comment by gjm on Whipped Cream vs Fancy Butter · 2020-01-21T16:39:47.895Z · score: 2 (1 votes) · LW · GW

I'm in the UK. Dairy products here are commonly pasteurized, but to me UHT means something much more extreme which spoils the flavour and I certainly wouldn't expect cream to be UHT-ed. Is cream really UHT by default in the US? Ewww.

Comment by gjm on Whipped Cream vs Fancy Butter · 2020-01-21T12:34:44.206Z · score: 2 (1 votes) · LW · GW

Hmm, interesting. When I buy cream (from a supermarket; I guess they are very cautious) the date they put on it is generally about one week in the future. I've taken their word for it and bought it not too long before I need to use it. I should do some experiments...

Comment by gjm on Whipped Cream vs Fancy Butter · 2020-01-21T12:32:58.343Z · score: 2 (1 votes) · LW · GW

Tastiness (for me) isn't a scalar thing. You want different tastes in different contexts. (In some sense chocolate is far tastier than butter, but there are many purposes for which I would use butter and would not consider using chocolate. The same is true of bacon. I'm not sure there's any purpose for which chocolate and bacon are both suitable replacements for butter.)

Comment by gjm on Whipped Cream vs Fancy Butter · 2020-01-21T02:11:03.493Z · score: 2 (1 votes) · LW · GW

You can keep butter in your fridge for weeks and it will stay fresh enough to use. (If you're fussy you can scrape off a thin layer of slightly-oxidized butter from the surface.) You can't do that with cream. That doesn't matter so much if you have shops where you can conveniently buy fresh cream every week, which is pretty common these days but maybe used not to be.

As others have said, cream contains a lot more water than butter does. If you spread whipped cream on your toast, I'm pretty sure you'll get soggy toast.

Butter keeps approximately its consistency for much longer than whipped cream does. If you make sandwiches with whipped cream and take them to work or school for lunch, I'm pretty sure you'll end up with not-at-all-whipped cream making your sandwiches soggy.

If you're specifically buying fancy butter, you may want the flavour of fancy butter. This is not the same as the flavour of whipped cream. (Just how different depends on exactly what sort of fancy butter.)

It's not so easy (I think) to whip up cream in very small quantities. That "serving" looks to me like a distinctly larger amount of cream-or-butter than I'd want in contexts where I'm using butter but not cooking with it.

Generally, I'm not very sure why you would use whipped cream instead of butter. I mean, OK, it's a bit cheaper (if you ignore wastage and effort and so forth), but so are many other things: water, flour, sawdust. And while clearly whipped cream is more like butter than water, flour and sawdust are, I don't see that it's so much like butter as to serve the same purposes. It doesn't have the same taste, the same consistency, the same balance of nutrients, the same anything.

Comment by gjm on Go F*** Someone · 2020-01-16T23:38:16.217Z · score: 19 (7 votes) · LW · GW

You're probably right. It would be 10x more useful if it offered some specifics as to what's bad about the post, though. As it is, it's just a differently-shaped downvote.

Comment by gjm on Go F*** Someone · 2020-01-16T20:52:15.069Z · score: 6 (4 votes) · LW · GW

I have no idea whether this is intended as a compliment or a criticism.

Comment by gjm on Research: Rescuers during the Holocaust · 2020-01-12T23:44:38.647Z · score: 7 (3 votes) · LW · GW

There's something odd about this:

The bad news is that, ignoring the minority of proactive rescuers, there were no moral supermen.

If you're looking for moral supermen, why would you ignore the minority of proactive rescuers? Aren't they exactly what you're looking for? The fact that they're a small minority isn't a reason to ignore them -- no one expects "supermen" to be common.

Given that the post starts out by framing the issue in terms of "moral supermen" and saying it would be good to understand who they are and how they got that way, it seems a bit odd that it ends by deciding to ignore the only candidates we have for moral supermen, and saying that if you ignore those then there are no moral supermen.

(It may well be that in fact to an excellent approximation there weren't any moral supermen, and that even those proactive morally-motivated rescuers were that way for some specific reason that doesn't have much to do with generally better morality on their part. But why not at least look?)

Comment by gjm on Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan) · 2020-01-12T23:36:31.344Z · score: 17 (6 votes) · LW · GW

The posts available for review are presented in (what I guess is) a consistent order that is (so far as I know) the same for everyone. I expect this to mean that posts presented earlier will get more votes. If, as seems plausible, most of the things in the list aren't bad enough to get a "no" vote from most voters, this means that there is a bias in favour of the earlier posts in the list.

Related: keeping track of which posts I've looked at is a bit of a pain. Obviously I can see which ones I've voted non-neutral for, but there's no obvious way to distinguish "decided to stick with neutral" from "haven't looked yet". So long as the order of presentation is consistent, I can just remember how far through the list I am, but (see above) it's not obviously a good thing for the order of presentation to be consistent. And this phenomenon incentivizes me to process posts in order, rather than deliberately counteracting the bias mentioned above by trying to look at them in a random order.

Comment by gjm on Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan) · 2020-01-12T15:21:07.616Z · score: 11 (5 votes) · LW · GW

Some of these posts have almost no content in themselves but link to other places where the actual meat is. (The two examples I've just run into: "Will AI see sudden progress?" and "Specification gaming examples in AI".)

Are we supposed to vote on these as if they include that external content? If the highest-voted posts are made into some sort of collection, will the external content be included there?

Comment by gjm on Circling · 2020-01-10T17:37:25.641Z · score: 8 (4 votes) · LW · GW

I think this

I have lost some touch with what 'rationality' is, and I think the concept 'rationality' is less personally meaningful to me now.

is useful information in itself, in that it suggests that maybe Circling, or some other things that for whatever reason tend to accompany it, may tend to move people who do it away from LW-style "rationality" with time. Whether that's a good thing (because LW-style "rationality" is actually too narrow or something of the kind) or a bad thing (because LW-style "rationality" is still more or less the best that's on offer, and moving away from it almost inevitably means not thinking so clearly) is a separate question, of course.

Comment by gjm on Open & Welcome Thread - January 2020 · 2020-01-08T11:37:09.026Z · score: 3 (2 votes) · LW · GW

There are surveys, but I think it may have been a few years since the last one. In answer to your specific question, LWers tend to be smart and young, which probably means most are rich by "global" standards, most aren't yet rich by e.g. typical US or UK standards, but many of those will be in another decade or two. (Barring global economic meltdown, superintelligent AI singularity, etc.) I think LW surveys have asked about income but not wealth. E.g., here are results from the 2016 survey which show median income at $40k and mean at $64k; median age is 26, mean is 28. Note that the numbers suggest a lot of people left a lot of the demographic questions blank, and of course people sometimes lie about their income even in anonymous surveys, so be careful about drawing conclusions :-).

Comment by gjm on Dissolving Confusion around Functional Decision Theory · 2020-01-05T15:20:35.485Z · score: 4 (3 votes) · LW · GW

In the bomb example, it looks to me as if you have some bits backwards. I could be misunderstanding and I could have missed some, so rather than saying what I'll just suggest that you go through double-checking where you've written "left", "right", "empty", etc.

Comment by gjm on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T15:04:04.548Z · score: 9 (5 votes) · LW · GW

It's not obvious to me that either the difficulty or the techniques would be the same for those other objectives as for Brexit. A canny political operative uses techniques appropriate for specific goals, after all.

If your goal is simply to get the UK out of the EU, for whatever reason and in whatever fashion, and if you don't mind what harm you do to society in the process, then "all" you need to do is to stir up hatred and suspicion and fear around the EU and what it does and those who like it, and find some slogans that appeal without requiring much actual thought, and so forth. Standard-issue populism.

But let's suppose you want universal basic income and you want it because you think it will help people and make society better. Then:

  • The "stir up fear and hatred" template doesn't work so well, because what you're doing isn't a thing that can readily be seen as fighting against a shared enemy.
  • The "stir up fear and hatred" template may be a really bad idea even if it works, because it may do more damage to society than the reform you're aiming for does good.
  • The details of what you do and how you do it may matter a lot: some versions of universal basic income might bankrupt the country, some might fail to do enough to solve the problems UBI is meant to solve, some might be politically unacceptable, etc. So you need to sell it in a way that lets a carefully designed version of UBI be what ends up happening.

The available evidence does not suggest (to me) that Cummings has a very specific version of Brexit in mind, or that he is sufficiently concerned for the welfare of the UK's society and the individuals within it to be troubled by considerations of societal harm done by the measures he's taken, or of whether he's ending up with a variety of Brexit that's net beneficial.

I would have preferred to say the foregoing without the last paragraph, which is kinda object-level political. But it's essential to the point. When Viliam says it's easier to burn a building down than construct it, I think he is saying something similar: if, as it seems may be the case, Cummings doesn't actually care whether he does a lot of harm to a lot of people, then he has selected an easier task than would be faced by someone trying to bring about major reforms without harming a lot of people, and the methods he's chosen are not necessarily ones that those who care about not harming a lot of people should emulate.

Comment by gjm on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T14:53:38.783Z · score: 7 (4 votes) · LW · GW

How clear is it that he specifically got all those things to happen? There were definitely other people involved, after all. Cummings's own account of what happened no doubt ascribes as much agency as possible to Cummings himself, but there are possible explanations for that other than its being true.

Comment by gjm on Theories That Can Explain Everything · 2020-01-02T13:40:16.310Z · score: 11 (4 votes) · LW · GW

Some "theories that can explain everything" may actually have the property that they can explain any individual observation but constrain what combinations of observations we can observe.

Consider, for instance, a vague but strongly adaptationist version of evolutionary psychology: it says that all features of human thought and behaviour have their origins in evolutionary advantage. Pretty much any specific feature of thought or behaviour can surely be given some sort of just-so-story explanation that will fit this theory, but it might be that feature 1 and feature 2 require mutually incompatible just-so stories, in which case the theory will forbid them both to occur; or at least that no one is able to come up with a plausibly-compatible pair of stories, in which case the theory will predict that features 1 and 2 are unlikely to occur together.

Arguably all theories are actually somewhat like this.

Comment by gjm on 2020's Prediction Thread · 2020-01-01T15:28:58.826Z · score: 2 (1 votes) · LW · GW

Fair enough! I suspect some low-probability predictions will be of that sort and some of the other, in which case there's no simple way to adjust for overconfidence.

Comment by gjm on 2020's Prediction Thread · 2020-01-01T03:23:06.342Z · score: 6 (4 votes) · LW · GW

I don't think you want to lower all predictions uniformly; some predictions here are stated with figures below 50%, for instance.

One better approach might be to reduce the log odds by some factor. If we pick 10% then we get substantially smaller changes than your proposal gives; maybe reduce the log odds by 25%? So if someone thinks X is 70% likely, that's 7:3 odds; we'd reduce that to (7:3)^0.75 which is the equivalent of a probability of about 65.4%. If they think X is 90% likely it would become 83.9%; if they think X is 50% likely, that wouldn't change at all.

(Arguably simpler but seems less natural to me: reduce proportionally not the probability but the difference from 50% of the probability. Say we reduce that by 25%; then 70% becomes 50% + 0.75*20% or 65%, quite similar to the fancy log-odds proposal in the previous paragraph. Things diverge more for more extreme probabilities: 90% turns into 50% + 0.75*40% or 80%, and 100% turns into 87.5% where the log-odds reduction leaves it unchanged.)
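For concreteness, both adjustments could be sketched like this in Python (the function names and the 25% shrink factor are just illustrative, not anything standard):

```python
def shrink_log_odds(p, factor=0.75):
    # Scale the log odds by `factor`, i.e. raise the odds ratio to that power.
    odds = (p / (1 - p)) ** factor
    return odds / (1 + odds)

def shrink_linear(p, factor=0.75):
    # Shrink the probability's difference from 50% by the same factor.
    return 0.5 + factor * (p - 0.5)

# 70% -> about 65.4% under the log-odds rule, exactly 65% under the linear one;
# 90% -> about 83.9% and 80% respectively; 50% is unchanged by both.
```

The two rules agree closely near 50% and diverge for extreme probabilities, as described above; the log-odds version never maps anything to exactly 0 or 1 (though it's undefined at exactly those endpoints).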

Comment by gjm on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2019-12-29T17:24:47.037Z · score: 6 (3 votes) · LW · GW

I think one could argue that "have you brought board games?" isn't really intended to include the insides of yet-unopened presents in its scope, in which case you weren't really lying.

(I'm not sure whether I would argue that. It might depend on whether Christmas Day was nearer the start or the end of your stay...)

Comment by gjm on Bayesian examination · 2019-12-17T15:40:13.717Z · score: 2 (1 votes) · LW · GW

Arguably the "natural" way to handle the possibility that you (the examiner) are in error is to score answers by (negative) KL-divergence from your own probability assignment. So if there are four options to which you assign probabilities p,q,r,s and a candidate says a,b,c,d then they get p log(a/p) + q log(b/q) + r log(c/r) + s log(d/s). If p=1 and q,r,s=0,0,0 then this is the same as giving them log a, i.e., the usual log-scoring rule. If p=1-3h and q,r,s=h,h,h then this is (1-3h) log (a/(1-3h)) + h log(b/h) + ..., which if we fix a is constant + h (log b + log c + log d) = constant + h log bcd, which by the AM-GM inequality is biggest when b=c=d.

This differs from the "expected log score" I described above only by an additive constant. One way to describe it is: the average amount of information the candidate would gain by adopting your probabilities instead of theirs, the average being taken according to your probabilities.
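As a sketch of that rule (my own function name; natural logs; terms where the examiner's probability is zero are skipped, since p log(a/p) → 0 as p → 0):

```python
import math

def kl_score(examiner, candidate):
    # Negative KL divergence from the examiner's distribution to the
    # candidate's: sum of p_i * log(a_i / p_i). Zero when they agree
    # exactly, negative otherwise; reduces to log(a_1) when the
    # examiner is certain of the first option.
    return sum(p * math.log(a / p)
               for p, a in zip(examiner, candidate) if p > 0)
```

With examiner probabilities (1,0,0,0) this is just log a, the usual log score; with (1-3h, h, h, h) it rewards spreading the remaining mass evenly across the "wrong" answers, as the AM-GM argument above says.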

Comment by gjm on Evaluability (And Cheap Holiday Shopping) · 2019-12-17T14:39:56.414Z · score: 12 (3 votes) · LW · GW

The "two ice cream cups from Hsee (1998)" image is broken -- I think it was hosted on old-LW and has now gone away. So I found the paper and uploaded a new copy of the image to imgur. Here it is.

Comment by gjm on What Are Meetups Actually Trying to Accomplish? · 2019-12-16T15:04:18.872Z · score: 2 (1 votes) · LW · GW

I think the phrase "marginal people" is distracting in something a bit like the way "counterfactual people" is.

Comment by gjm on ialdabaoth is banned · 2019-12-13T17:37:03.092Z · score: 17 (7 votes) · LW · GW

Maybe I'm confused about what you mean by "the personal stuff". My impression is that what I would consider "the personal stuff" is central to why ialdabaoth is considered to pose an epistemic threat: he has (allegedly) a history of manipulation which makes it more likely that any given thing he writes is intended to deceive or manipulate. Which is why jimrandomh said:

The problem is, I think this post may contain a subtle trap, and that understanding its author, and what he was trying to do with this post, might actually be key to understanding what the trap is.

and, by way of explanation of why "this post may contain a subtle trap", a paragraph including this:

So he created narratives to explain why those conversations were so confusing, why he wouldn't follow the advice, and why the people trying to help him were actually wronging him, and therefore indebted. This post is one such narrative.

Unless I'm confused, (1) this is not "a somewhat standard LW critique of a LW post" because most such critiques don't allege that the thing critiqued is likely to contain subtle malignly-motivated traps, and (2) the reason for taking it seriously is "the personal stuff".

Who's saying, in what sense, that "the personal stuff were separate"?

Comment by gjm on ialdabaoth is banned · 2019-12-13T16:24:23.823Z · score: 4 (2 votes) · LW · GW

Why would it make sense to "exclude the personal stuff"? Isn't the personal stuff the point here?

Comment by gjm on Bayesian examination · 2019-12-12T16:26:43.722Z · score: 4 (2 votes) · LW · GW

Sure, 2 knows something 1 doesn't; e.g., 2 knows more about how unlikely B is. But, equally, 1 knows something 2 doesn't; e.g., 1 knows more than 2 about how unlikely C is.

In the absence of any reason to think one of these is more important than the other, it seems reasonable to think that different probability assignments among the various wrong answers are equally meritorious and should result in equal scores.

... Having said that, here's an argument (which I'm not sure I believe) for favouring more-balanced probability assignments to the wrong answers. We never really know that the right answer is 100:0:0:0. We could, conceivably, be wrong. And, by hypothesis, we don't know of any relevant differences between the "wrong" answers. So we should see all the wrong answers as equally improbable but not quite certainly wrong. And if, deep down, we believe in something like the log scoring rule, then we should notice that a candidate who assigns a super-low probability to one of those "wrong" answers is going to do super-badly in the very unlikely case that it's actually right after all.

So, suppose we believe in the log scoring rule, and we think the correct answer is the first one. But we admit a tiny probability h for each of the others being right. Then a candidate who gives probabilities a,b,c,d has an expected score of (1-3h) log a + h (log b + log c + log d). Suppose one candidate says 0.49,0.49,0.01,0.01 and the other says 0.4,0.2,0.2,0.2; then we will prefer the second over the first if h is bigger than about 0.0356. In a typical educational context that's unlikely so we should prefer the first candidate. Now suppose one says 0.49,0.49,0.01,0.01 and the other says 0.49,0.25,0.25,0.01; we should always prefer the second candidate.
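The crossover value of h can be checked numerically; a rough sketch (my own function names, natural logs):

```python
import math

def expected_log_score(candidate, h):
    # Expected log score when the examiner gives the first option probability
    # 1 - 3h and each of the other three options probability h.
    a, b, c, d = candidate
    return ((1 - 3 * h) * math.log(a)
            + h * (math.log(b) + math.log(c) + math.log(d)))

def crossover(cand1, cand2, lo=1e-6, hi=0.2, steps=60):
    # Bisect for the h at which the two candidates' expected scores are equal.
    diff = lambda h: expected_log_score(cand1, h) - expected_log_score(cand2, h)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if diff(lo) * diff(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For the two candidates above this gives h ≈ 0.0356, and in the second comparison the (0.49, 0.25, 0.25, 0.01) candidate wins for every positive h.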

None of this means that the Brier score is the right way to prefer the second candidate over the first; it clearly isn't, and if h is small enough then of course the correction to the naive log score is also very small, provided candidates' probability assignments are bounded away from zero.

In practice, hopefully h is extremely small. And some wrong answers will be wronger than others and we don't want to reward candidates for not noticing that, but we probably also don't want the extra pain of figuring out just how badly wrong all the wrong answers are, and that is my main reason for thinking it's better to use a scoring rule that doesn't care what probabilities candidates assigned to the wrong answers.

Comment by gjm on Why the tails come apart · 2019-12-11T21:09:49.094Z · score: 4 (2 votes) · LW · GW

I can do n=1 (the probability is 1, obviously) and n=2 (the probability is , not so obviously). n=3 and up seem harder, and my pattern-spotting skills are not sufficient to intuit the general case from those two :-).

Comment by gjm on Changing main content font to Valkyrie? · 2019-12-11T20:18:27.890Z · score: 8 (2 votes) · LW · GW

It renders pretty decently (though I don't love the actual letterforms, at least not at sizes appropriate for body text) for me both on my laptop, which has a high-DPI display (4k at 15"), and on my desktop machine at work, which has a much lower-DPI display (2560x1600 at 30"). It is a bit spindly on the high-DPI display, but not so badly so as to make the text hard to read at any sensible size. (I've checked these things on both Firefox and Chrome.)

Comment by gjm on Antimemes · 2019-12-11T20:05:37.103Z · score: 2 (1 votes) · LW · GW

Pedantic note: you should do

#define ifnot(x) if (!(x))

(with extra parens around the second instance of "x") because otherwise you will get probably-unexpected results if x is something like a==b: the expansion if (!a==b) parses as if ((!a)==b), since unary ! binds more tightly than ==.

Comment by gjm on Antimemes · 2019-12-11T15:38:02.083Z · score: 3 (2 votes) · LW · GW

I remain unconvinced that Lisp's macro facilities are antimemetic. I am not sure exactly how to divide up my disagreement between "it's not antimemetic, it's something more mundane" and "I don't think you should use the exotic-sounding term 'antimeme' for something so mundane" because I am not exactly certain where you draw the boundaries of the term "antimeme".

At any rate, I think the main reasons why Lisp isn't more popular are fairly mundane and don't need an exotic-sounding term to describe them. I think (though I confess I don't have solid evidence) that:

  • most programmers who don't use Lisp never even really look at it;
  • most who look at it but still don't use it are mostly put off by superficial things like the unusual syntax or the Lisp community's reputation for smugness;
  • most who get past those things but still don't use it are mostly put off by genuine drawbacks like the relatively poor selection of libraries available or the difficulty of finding good Lispers.

So if Lisp is tragically underused then some of the tragic underuse will be the result of defmacro (and also, I suggest, set-macro-character and set-dispatch-macro-character) being underappreciated, but I find it very hard to believe that it's a large fraction. And even that doesn't all qualify as antimemetic in the sense of "provoking a self-suppressing response", again unless you're meaning that so broadly that it covers everything whose merits are easy to underestimate. If someone is (say) used to C-preprocessor-style macros, and hears that expert Lisp programming makes a lot of use of macros, and imagines a codebase full of #defines, are you calling that antimemetic? Or if they don't jump to that conclusion but do think something like "hmm, macros aren't really all that powerful, so a language where macros provide a lot of the power must be pretty weak", are you calling that antimemetic?

I think the mental process you have in mind is different from both of those, because you're taking "macro" to include e.g. C's "if" and "for", even though those aren't implemented with any sort of macrology in C, so the reaction you're describing is more like "these people say that in Lisp you can redefine the language itself, but that doesn't make any sense". Again I have no concrete evidence, but I don't think that's a common reaction. After all, C does have macros (of a sort) and you can use them to extend the language (kinda), and users of Ruby, which is at least mainstream-ish, are used to using and making what they call "domain-specific languages" within Ruby even though what you can do there is fairly restrictive compared with Lisp's macros.

I think you're overdramatizing how people react to Lisp. "Lisp is an antimeme!" makes a great story, but I don't think it fits the evidence as well as "Lisp is unfamiliar in various ways, and people are bad at seeing the merits of unfamiliar things".

... Again, unless that or something like it is actually all you mean by "antimeme".

It seems, at least some of the time, as if you're using the term to describe anything whose widespread appreciation is limited by the fact that widespread beliefs or habits of mind get in the way. (Assuming that programming languages can't have adjustable syntax; seeing a clear boundary between one's self and the rest of the universe; being willing to have a boss and do what they say.) With that definition, I'm still not sure Lisp counts (as I said above, I think there are other reasons why many programmers don't use Lisp, and they aren't all antimemetic even in this very broad sense), but there's certainly a case to be made that it does -- but to me, "antimeme" doesn't seem like an appropriate term, for two reasons.

  • It implies that the offputting-ness lies primarily in the thing itself, when in fact it seems much better to see it as a property of the people being put off.
  • It makes "being an antimeme" sound like some exotic SCP-ish property, when in fact a large fraction of underappreciated things are "antimemes" in this sense.

Comment by gjm on Is Rationalist Self-Improvement Real? · 2019-12-10T10:37:48.323Z · score: 6 (3 votes) · LW · GW

I don't know what the actual causal story is here, but it's at any rate not obviously right that if doctors were good at it then there'd be no reason to increase the age, for a few reasons.

  • Changing the age doesn't say anything about who's how good at what, it says that something has changed.
  • What's changed could be that doctors have got worse at breast cancer diagnosis, or that we've suddenly discovered that they're bad. But it could also be, for instance:
    • That patients have become more anxious and therefore (1) more harmed directly by a false-positive result and (2) more likely to push their doctors for further procedures that would in expectation be bad for them.
    • That we've got better or worse, or discovered we're better or worse than we thought, at treating certain kinds of cancers at certain stages, in a way that changes the cost/benefit analysis around finding things earlier.
      • E.g., I've heard it said (but I don't remember by whom and it might be wrong, so this is not health advice) that the benefits of catching cancers early are smaller than they used to be thought to be, because actually the reason why earlier-caught cancers kill you less is that ones you catch when they're smaller are more likely to be slower-growing ones that were less likely to kill you whenever you caught them; if that's true and a recent discovery then it would suggest reducing the amount of screening you do.
    • That previous protocols were designed without sufficient attention to the downsides of testing.

Comment by gjm on Antimemes · 2019-12-09T18:03:02.483Z · score: 13 (3 votes) · LW · GW

This article gives three examples of alleged antimemes: Lisp, entrepreneurship, and "stream entry". What it doesn't give is any good reason to think that they are antimemes. (I take it "antimeme" here means something stronger than "thing whose advantages aren't very obvious" or "thing that's unusual".)

I think Lisp is widely ignored not (only?) because it's an antimeme but because

  • languages that happen already to be widely used are easier to discover, have more tutorial materials around, have a better selection of libraries for useful or fun things, are likelier to help you get a job, are easier to hire other people for, etc., etc., etc. So once a language is niche-y it's likely to stay niche-y unless some unusual thing happens to propel it into the limelight.
  • other languages these days offer a lot of the advantages that used to be distinctively Lispy. For instance, exploratory programming is much easier if you have a REPL, a decent selection of built-in types, decent notation for objects of those types, and either dynamic typing or good type inference; this makes Lisp much better for that sort of thing than Fortran or C, but doesn't distinguish Lisp from Python or Javascript.

One other thing that maybe helps Lisp stay relatively obscure, its unusual syntax, does have something of the antimeme about it. At first glance the syntax looks horrible, and that makes it hard to see much else about the language until you've put in a little effort.

I don't think entrepreneurship is widely ignored at all; I think it's very widely admired. Still, most people aren't entrepreneurs; I think that's just plain ordinary risk aversion. (And maybe recognition that successful entrepreneurship requires skills and personality traits that not everyone has.) In any case, entrepreneurship is very far from being invisible in our culture.

(I don't know enough about Buddhism, meditation, altered states of consciousness, etc., to have any useful opinion about whether "stream entry" is an antimeme.)

So what allegedly makes these things antimemes, in lsusr's view? I'll hazard a guess: they are things whose merits seem clear to lsusr but that are widely neglected by others. I suggest, though, that this characteristic doesn't pick out something special about those things. People are just really bad at seeing the merits of things they aren't already in favour of. I'll give a few examples.

  • Presumably some religious or irreligious view is correct. (Like most people around here I think what one might call "atheistic scientific naturalism" is the best candidate, but my point here doesn't change if it turns out to be something quite different.) Call the correct view V. Whatever it turns out to be, the great majority of people are not adherents of V. If V is any of the more likely candidates, the great majority of people have at least had a reasonable chance to be exposed to V. Is V an antimeme? No, people just suck at considering alternative worldviews.
  • Some people think that it's a good idea to live super-frugally in your twenties and thirties, so as to build up financial reserves that let you retire very early (which doesn't necessarily mean never doing paid work again, but means not needing to). Very few people actually do this. Perhaps it's just a bad idea for one reason or another (e.g., maybe you can only actually do it effectively if you're unusually well paid and get lucky with the stock market), but if in fact it's a good idea, is it an antimeme? No, people just suck at considering alternative lifestyles.
  • Lots of people use computers running Windows. Quite a lot use computers running macOS. Some use computers running Linux. Most of these people will tell you that their preferred system is Just Better, at least for their needs. I bet a lot of them are wrong, one way or another. If so, are all these OSes antimemes? No, people just suck at considering alternatives.

Comment by gjm on Long Bets by Confidence Level · 2019-12-09T16:17:53.398Z · score: 24 (7 votes) · LW · GW

1. The calculation here seems to consider only

  • what would happen to your money if you don't make the bet
  • what would happen to your and your counterparty's money if you do and you win

but (1) if you don't place the bet then presumably your counterparty will do something with that money (which might have value or disvalue from your perspective) and (2) if you place the bet and lose then all the money goes to your counterparty's chosen charity (which almost certainly will have value or disvalue from your perspective).

Unless your expectation is that your counterparty's chosen charity has negligible effectiveness (for good or bad) relative to yours, it seems to me that this calculation is unlikely to be the one you actually want to do.
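To make point 1 concrete, here is a minimal sketch of an expected-value comparison that accounts for both charities, not just your own. All numbers, and the per-dollar value weights, are hypothetical assumptions for illustration:

```python
# Hypothetical expected-value comparison for a Long Bets-style wager.
# Both parties stake `stake`; both stakes go to the winner's chosen charity.
stake = 1000.0
p_win = 0.7            # your probability that you win the bet (assumed)

# Subjective value per dollar you assign to each charity's work (made up).
value_mine = 1.0       # your chosen charity
value_theirs = 0.2     # counterparty's charity (could even be negative)

# Baseline: don't bet, donate your stake directly to your charity.
ev_no_bet = stake * value_mine

# Naive calculation: only counts money reaching your charity if you win.
ev_naive = p_win * 2 * stake * value_mine

# Fuller calculation: if you lose, both stakes go to *their* charity,
# which you may still value (or disvalue).
ev_full = (p_win * 2 * stake * value_mine
           + (1 - p_win) * 2 * stake * value_theirs)

print(ev_no_bet, ev_naive, ev_full)
```

With these particular weights the fuller calculation raises the bet's expected value; with a negative `value_theirs` it would lower it, which is exactly why the naive version can mislead.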

2. My impression (which could be very wrong) is that making a Long Bet is usually at least as much about raising publicity as it is about directing money where you'd like it to go. It's a thing pairs of people do when they want to get everyone talking about the issue about which they disagree. If I'm right about this, then in many cases the final disbursement of money is likely to matter less than the consciousness-raising.

Comment by gjm on The Epsilon Fallacy · 2019-12-07T15:38:27.911Z · score: 14 (4 votes) · LW · GW

It says that shifts between fossil fuels are about half the decrease (ignoring the counterfactual one, which obviously is highly dependent on the rather arbitrary choice of expected growth rate). I don't know whether that's all fracking, and perhaps it's hard to unpick all the possible reasons for growth of natural gas at the expense of coal. My guess is that even without fracking there'd have been some shift from coal to gas.

The thesis here -- which seems to be very wrong -- was: "Practically all the carbon reduction over the past decade has come from the natgas transition". And it needed to be that, rather than something weaker-and-truer like "Substantially the biggest single element in the carbon reduction has been the natgas transition", because the more general thesis is that here, and in many other places, one approach so dominates the others that working on anything else is a waste of time.

I appreciate that you wrote the OP and I didn't, so readers may be inclined to think I must be wrong. Here are some quotations to make it clear how consistently the message of the OP is "only one thing turned out to matter and we should expect that to be true in the future too".

  • "it ain’t a bunch of small things adding together"
  • "Practically all of the reduction in US carbon emissions over the past 10 years has come from that shift"
  • "all these well-meaning, hard-working people were basically useless"
  • "PV has been an active research field for thousands of academics for several decades. They’ve had barely any effect on carbon emissions to date"
  • "one wedge will end up a lot more effective than all others combined. Carbon emission reductions will not come from a little bit of natgas, a little bit of PV, a little bit of many other things"

All of those appear to be wrong. (Maybe they were right when the OP was written, but if so then they became wrong shortly after, which may actually be worse for the more general thesis of the OP since it indicates how badly wrong one can be in evaluating what measures are going to be effective in the near future.)

Now, of course you could instead make the very different argument that if Thing A is more valuable per unit effort than Thing B then we should pour all our resources into Thing A. But that is, in fact, a completely different argument; I think it's wrong for several reasons, but in any case it isn't the argument in the OP and the arguments in the OP don't support it much.

The questions you ask at the end seem like their answers are supposed to be obvious, but they aren't at all obvious to me. Would natural gas subsidies have had the same sort of effect as solar and wind subsidies? Maaaaybe, but also maybe not: I assume most of the move from coal to gas was because gas became genuinely cheaper, and the point of solar and wind subsidies was mostly that those weren't (yet?) cheaper but governments wanted to encourage them (1) to get the work done that would make them cheaper and (2) for the sake of the environmental benefits. Would campaigning for natural gas subsidies have had the same sort of effect as campaigning for solar and wind? Maaaaybe, but also maybe not: campaigning works best when people can be inspired by your campaigning; "energy production with emissions close to zero" is a more inspiring thing than "energy production with a ton of emissions, but substantially less than what we've had before", and the most likely people to be inspired by this sort of thing are environmentalists, who are generally unlikely to be inspired by fracking.

Comment by gjm on The Epsilon Fallacy · 2019-12-06T16:13:40.345Z · score: 12 (3 votes) · LW · GW

According to this webpage from the US Energy Information Administration, CO2 emissions from US energy generation went down 28% between 2005 and 2017, and they split that up as follows:

  • 329 MMmt reduction from switching between fossil fuels
  • 316 MMmt reduction from introducing noncarbon energy sources

along with

  • 654 MMmt difference between actual emissions in 2018 and what they would have been if energy demand had grown at the previously-expected ~2% level between 2005 and 2018 (instead demand remained roughly unchanged)

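A quick back-of-the-envelope check of how those reductions split, using the MMmt figures as quoted above:

```python
# EIA figures quoted above, in million metric tons (MMmt) of CO2.
fuel_switching = 329.0   # switching between fossil fuels (coal -> gas etc.)
noncarbon = 316.0        # noncarbon sources (solar, wind, nuclear, ...)
counterfactual = 654.0   # avoided emissions vs. expected ~2% demand growth

realized = fuel_switching + noncarbon
print(f"fuel-switching share of realized cuts: {fuel_switching / realized:.1%}")
print(f"share if the demand counterfactual is included: "
      f"{fuel_switching / (realized + counterfactual):.1%}")
```

So fuel switching is roughly half of the realized reductions, and only about a quarter once the demand counterfactual is counted.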
If this is correct, I think it demolishes the thesis of this article:

  • The change from coal to natural gas obtained by fracking does not dominate the reductions in CO2 emissions.
  • There have been substantial reductions as a result of introducing new "sustainable" energy sources like solar and wind.
  • There have also been substantial reductions as a result of reduced energy demand; presumably this is the result of a combination of factors like more efficient electronic devices, more of industry being in less-energy-hungry sectors (shifting from hardware to software? from manufacturing to services?), changing social norms that reduce energy consumption, etc.
  • So it doesn't, after all, seem as if people wanting to have a positive environmental impact who chose to do it by working on solar power, political change, etc., were wasting their time and should have gone into fracking instead. Not even if they had been able to predict magically that fracking would be both effective and politically feasible.

Comment by gjm on Tapping Out In Two · 2019-12-05T23:47:24.594Z · score: 4 (2 votes) · LW · GW

What I've usually done in such situations is to reply to the last message and say something like "I'm not planning to continue this discussion; please feel free to have the last word. If there's something further that you particularly want a response to, say so and I'll respond, but then that's it."

I think "at most two more replies" is probably better, not least because you can say it more briefly.

Comment by gjm on CO2 Stripper Postmortem Thoughts · 2019-12-05T01:58:04.562Z · score: 3 (2 votes) · LW · GW

I'm afraid I still don't understand what the basis is for your claim that "the premise that CO2 affects cognition is false".

I understand why you consider it not clear that CO2 does affect cognition: experiments yield results in different directions, and people survive on submarines. But that, at least so far as you've described it, seems to fall far short of justifying the flat statement that "the premise is false". What am I missing?

Comment by gjm on CO2 Stripper Postmortem Thoughts · 2019-12-02T17:24:28.195Z · score: 7 (3 votes) · LW · GW

Your statement that "the premise that CO2 affects cognition is false" seems not obviously correct. Is this the current expert consensus? How can the rest of us evaluate it?

Comment by gjm on "The Bitter Lesson", an article about compute vs human knowledge in AI · 2019-12-02T17:22:55.363Z · score: 2 (1 votes) · LW · GW

MuZero seems to deserve to be called domain-agnostic more than AlphaZero does, yes.

(For anyone else who doesn't immediately recognize the abbreviation: ALE is the "Arcade Learning Environment".)

Comment by gjm on CO2 Stripper Postmortem Thoughts · 2019-12-01T20:33:34.385Z · score: 5 (3 votes) · LW · GW

You say it works but not as well as hoped. It would be interesting to know more about that.

E.g., how effective is it, in the end, at removing CO2 from the air? (Less so than you hoped?) How big and power-hungry and noisy is it? (More so than you hoped?) How much did it end up costing to make? (More than you hoped?)

Comment by gjm on My Anki patterns · 2019-11-29T13:36:10.530Z · score: 3 (2 votes) · LW · GW

OP refers to "Alex Vermeer’s free book Anki Essentials" but so far as I can tell Anki Essentials is not free; it costs about $5 (exact price depending on whether you get it as PDF or as an Amazon ebook).

Comment by gjm on Market Rate Food Is Luxury Food · 2019-11-29T11:13:18.065Z · score: 2 (1 votes) · LW · GW

I agree. That's why I listed those two issues (1. the spoof argument might not be a good analogy for real arguments about housing; 2. the spoof argument isn't obviously wrong) separately.

Comment by gjm on Market Rate Food Is Luxury Food · 2019-11-28T10:14:40.474Z · score: 9 (2 votes) · LW · GW

Thanks! Here are a couple of relevant extracts for anyone else who didn't know the same things as I didn't know. First, what it is:

Section 8 of the Housing Act of 1937 [...] authorizes the payment of rental housing assistance to private landlords on behalf of low-income households in the United States. Of the 5.2 million American households that received rental assistance in 2018, approximately 1.2 million of those households received a Section 8 based voucher.

Second, those waiting lists:

In many localities, the PHA waiting lists for Section 8 vouchers may be thousands of families long, waits of three to six years to obtain vouchers is common, and many lists are closed to new applicants. Wait lists are often briefly opened (often for just five days), which may occur as little as once every seven years. Some PHAs use a "lottery" approach, where there can be as many as 100,000 applicants for 10,000 spots on the waitlist, with spots being awarded on the basis of weighted or non-weighted lotteries, with priority sometimes given to local residents, the disabled, veterans, and the elderly.

Comment by gjm on Market Rate Food Is Luxury Food · 2019-11-28T02:09:31.469Z · score: 2 (1 votes) · LW · GW

It's hard to tell whether the arguments are "actually analogous" because ...

  • The spoof-argument about food, in the OP here, leaves lots of things implicit. (E.g., "With deregulation, farmers would massively shift to luxury crops, and we would have shortages of bread, milk, eggs, and other staples"; it doesn't go into details about why this would allegedly happen.) So we don't know what the parallel argument about housing actually says.
  • The parallel argument about housing leaves everything implicit, in that we don't actually know what it is. Jeff hasn't (so far as I know) pointed at a specific pro-housing-regulation article and copied its arguments, he's provided a bunch of food-arguments that supposedly parallel common housing-arguments. So what's "the argument" here?

I think it's reasonable to suspect that they aren't "actually analogous" in sufficient detail that if one is wrong then the other is too because ...

  • They depend on all sorts of details about the world that there's no particular reason to expect behave the same way in the food and housing cases. E.g., is a (fictional) several-year waiting list for SNAP equivalent to a several-year waiting list for, er, whatever housing thing this is meant to be parallel to? It might be, but maybe not; the timescales on which hunger and homelessness happen aren't exactly the same, after all, nor are the timescales associated with normally-functional food-buying and house-buying, and if I try to imagine mechanisms leading to several-year waiting lists for food assistance and for housing assistance, it's not clear to me that I should expect them to be similar. (Hence, the prospects for fixing them might differ.)

And I don't understand why you are so sure that if the arguments are analogous then "this shows that one of them is wrong". Normally, when that sort of thing is true it's because the conclusions of the two arguments are incompatible, but that doesn't seem to be the case here. Perhaps you mean "this shows that the one about housing is wrong" because you find it obvious that the one about food is wrong (though in this case I am not sure why you said "one of them", which seems wrong on Gricean grounds), but I don't find that convincing because

  • The argument about food is liable to seem obviously wrong simply because it's based on a world that is clearly quite different from ours in implausible-seeming ways.
  • If I leave aside the fact that the things it says about food are in fact false in our world, it's no more obviously wrong (to me) than the argument about housing that it's meant to be undermining by its more-obvious wrongness. In some hypothetical world where food is highly regulated and unaffordably expensive, would it be the case that deregulating it would bring prices down to the levels we see in our world? Are you sure you aren't just assuming that since Jeff has described a world that differs from ours in those two respects, the regulation must be the cause of the cost?

Comment by gjm on Mental Mountains · 2019-11-27T12:35:43.184Z · score: 4 (3 votes) · LW · GW

I expect it's common for people to say (or at least be in a position to say truly, if they chose) "I know that climate change is real, but for some reason I can't persuade myself not to vote Republican". In some cases that will be because they like the Republicans' other policies, in which case there isn't necessarily an actual "valley" here. But party loyalty is a thing, and I guarantee there are people who could truly say "I know that Party X's actual policies are a better match for my values, but I can't bring myself to vote for them rather than for Party Y".

Comment by gjm on Market Rate Food Is Luxury Food · 2019-11-23T18:09:10.563Z · score: 12 (5 votes) · LW · GW

It's not at all clear to me that housing and food are similar enough for this analogy to work. It seems to me that I can totally imagine a world in which the argument in the initial part of your post is right, and for that matter I can also imagine a world in which the corresponding argument about housing is right; whether either of them actually is right depends on details that needn't be the same in the two cases.

So the implicit argument here (if I'm understanding right) -- "some people say that to solve our housing problems it isn't enough to build more houses, so we should prioritize building affordable housing or something instead; here's an analogous thing people might say about food, which is obviously silly; likewise, saying these things about housing is silly, so the main thing we need to do is to build more houses regardless of exactly what they are" -- doesn't work for me. It's not obvious enough that the food version of the argument is wrong, nor is it obvious enough that if one is wrong then the other is too.

(I do tend to agree that building more housing is much the most important thing to do to address the difficulties many many many people have in affording somewhere to live, so my unconvincedness here isn't the result of not liking the conclusion.)

Comment by gjm on How I do research · 2019-11-20T17:42:34.116Z · score: 5 (5 votes) · LW · GW

Boldface the first few words.

Comment by gjm on How I do research · 2019-11-20T13:32:05.366Z · score: 4 (2 votes) · LW · GW

For what it's worth, I'm with Said rather than Zack on this one.

(It would make more sense if these initial letters were associated with a mnemonic or something; then there would be a reason for emphasizing a bunch of first letters. But it seems to have been done just for, I dunno, fun.)

Comment by gjm on Goal-thinking vs desire-thinking · 2019-11-17T20:27:30.415Z · score: 2 (1 votes) · LW · GW

I don't know for sure whether we're really disagreeing. Perhaps that's a question with no definite answer; the question's about where best to draw the boundary of an only-vaguely-defined term. But it seems like you're saying "goal-thinking must only be concerned with goals that don't involve people's happiness", and I'm saying I think that's a mistake: the fundamental distinction is between, on the one hand, doing something as part of a happiness-maximizing process and, on the other, recognizing the layer of indirection in that and aiming at goals we can see other reasons for, which may or may not happen to involve our own or someone else's happiness.

Obviously you can choose to focus only on goals that don't involve happiness in any way at all, and maybe doing so makes some of the issues clearer. But I don't think "involving happiness" / "not involving happiness" is the most fundamental criterion here; the distinction is actually, as your original terminology makes clear, between different modes of thinking.