## Posts

"Future of Go" summit with AlphaGo 2017-04-10T11:10:40.249Z · score: 3 (4 votes)
AlphaGo versus Lee Sedol 2016-03-09T12:22:53.237Z · score: 19 (19 votes)
[LINK] "The current state of machine intelligence" 2015-12-16T15:22:26.596Z · score: 3 (4 votes)
[LINK] Scott Aaronson: Common knowledge and Aumann's agreement theorem 2015-08-17T08:41:45.179Z · score: 15 (15 votes)
Group Rationality Diary, March 22 to April 4 2015-03-23T12:17:27.193Z · score: 6 (7 votes)
Group Rationality Diary, March 1-21 2015-03-06T15:29:01.325Z · score: 4 (5 votes)
Proportional Giving 2014-03-02T21:09:07.597Z · score: 6 (14 votes)
[Link] False memories of fabricated political events 2013-02-10T22:25:15.535Z · score: 17 (20 votes)
[LINK] Breaking the illusion of understanding 2012-10-26T23:09:25.790Z · score: 19 (20 votes)
The Problem of Thinking Too Much [LINK] 2012-04-27T14:31:26.552Z · score: 7 (11 votes)
Harry Potter and the Methods of Rationality discussion thread, part 4 2010-10-07T21:12:58.038Z · score: 5 (7 votes)
The uniquely awful example of theism 2009-04-10T00:30:08.149Z · score: 38 (48 votes)
Voting etiquette 2009-04-05T14:28:31.031Z · score: 10 (16 votes)

Comment by gjm on The Epsilon Fallacy · 2019-12-07T15:38:27.911Z · score: 14 (4 votes) · LW · GW

It says that shifts between fossil fuels are about half the decrease (ignoring the counterfactual one, which obviously is highly dependent on the rather arbitrary choice of expected growth rate). I don't know whether that's all fracking, and perhaps it's hard to unpick all the possible reasons for growth of natural gas at the expense of coal. My guess is that even without fracking there'd have been some shift from coal to gas.

The thesis here -- which seems to be very wrong -- was: "Practically all the carbon reduction over the past decade has come from the natgas transition". And it needed to be that, rather than something weaker-and-truer like "Substantially the biggest single element in the carbon reduction has been the natgas transition", because the more general thesis is that here, and in many other places, one approach so dominates the others that working on anything else is a waste of time.

I appreciate that you wrote the OP and I didn't, so readers may be inclined to think I must be wrong. Here are some quotations to make it clear how consistently the message of the OP is "only one thing turned out to matter and we should expect that to be true in the future too".

• "it ain’t a bunch of small things adding together"
• "Practically all of the reduction in US carbon emissions over the past 10 years has come from that shift"
• "all these well-meaning, hard-working people were basically useless"
• "PV has been an active research field for thousands of academics for several decades. They’ve had barely any effect on carbon emissions to date"
• "one wedge will end up a lot more effective than all others combined. Carbon emission reductions will not come from a little bit of natgas, a little bit of PV, a little bit of many other things"

All of those appear to be wrong. (Maybe they were right when the OP was written, but if so then they became wrong shortly after, which may actually be worse for the more general thesis of the OP since it indicates how badly wrong one can be in evaluating what measures are going to be effective in the near future.)

Now, of course you could instead make the very different argument that if Thing A is more valuable per unit effort than Thing B then we should pour all our resources into Thing A. But that is, in fact, a completely different argument; I think it's wrong for several reasons, but in any case it isn't the argument in the OP and the arguments in the OP don't support it much.

The questions you ask at the end seem like their answers are supposed to be obvious, but they aren't at all obvious to me.

Would natural gas subsidies have had the same sort of effect as solar and wind subsidies? Maaaaybe, but also maybe not: I assume most of the move from coal to gas was because gas became genuinely cheaper, and the point of solar and wind subsidies was mostly that those weren't (yet?) cheaper but governments wanted to encourage them (1) to get the work done that would make them cheaper and (2) for the sake of the environmental benefits.

Would campaigning for natural gas subsidies have had the same sort of effect as campaigning for solar and wind? Maaaaybe, but also maybe not: campaigning works best when people can be inspired by your campaigning; "energy production with emissions close to zero" is a more inspiring thing than "energy production with a ton of emissions, but substantially less than what we've had before", and the most likely people to be inspired by this sort of thing are environmentalists, who are generally unlikely to be inspired by fracking.

Comment by gjm on The Epsilon Fallacy · 2019-12-06T16:13:40.345Z · score: 12 (3 votes) · LW · GW

According to this webpage from the US Energy Information Administration, CO2 emissions from US energy generation went down 28% between 2005 and 2017, and they split that up as follows:

• 329 MMmt reduction from switching between fossil fuels
• 316 MMmt reduction from introducing noncarbon energy sources

along with

• 654 MMmt difference between actual energy demand in 2018 and what it would have been if demand had grown at the previously-expected ~2% level between 2005 and 2018 (instead it remained roughly unchanged)

If this is correct, I think it demolishes the thesis of this article:

• The change from coal to natural gas obtained by fracking does not dominate the reductions in CO2 emissions.
• There have been substantial reductions as a result of introducing new "sustainable" energy sources like solar and wind.
• There have also been substantial reductions as a result of reduced energy demand; presumably this is the result of a combination of factors like more efficient electronic devices, more of industry being in less-energy-hungry sectors (shifting from hardware to software? from manufacturing to services?), changing social norms that reduce energy consumption, etc.
• So it doesn't, after all, seem as if people wanting to have a positive environmental impact who chose to do it by working on solar power, political change, etc., were wasting their time and should have gone into fracking instead. Not even if they had been able to predict magically that fracking would be both effective and politically feasible.
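For concreteness, the split implied by those EIA figures is easy to check with a little arithmetic (a quick sketch using only the MMmt numbers quoted above; nothing here comes from the EIA page beyond those three figures):

```python
# EIA-attributed reductions in US energy-related CO2 emissions, 2005-2017,
# in million metric tons (MMmt), as quoted above.
fuel_switching = 329         # switching between fossil fuels (mostly coal -> gas)
noncarbon = 316              # introducing non-carbon energy sources
demand_counterfactual = 654  # avoided growth vs. the previously-expected ~2%/yr demand

# Share of the *measured* reduction, ignoring the counterfactual term:
measured_total = fuel_switching + noncarbon
print(f"fuel switching: {fuel_switching / measured_total:.1%}")  # ~51.0%
print(f"non-carbon:     {noncarbon / measured_total:.1%}")       # ~49.0%

# Including the demand counterfactual as well:
grand_total = measured_total + demand_counterfactual
print(f"fuel switching incl. counterfactual: {fuel_switching / grand_total:.1%}")  # ~25.3%
```

So fuel switching accounts for roughly half of the measured reduction, and only about a quarter once the demand counterfactual is included; either way it does not dominate.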

Comment by gjm on Tapping Out In Two · 2019-12-05T23:47:24.594Z · score: 4 (2 votes) · LW · GW

What I've usually done in such situations is to reply to the last message and say something like "I'm not planning to continue this discussion; please feel free to have the last word. If there's something further that you particularly want a response to, say so and I'll respond, but then that's it."

I think "at most two more replies" is probably better, not least because you can say it more briefly.

Comment by gjm on CO2 Stripper Postmortem Thoughts · 2019-12-05T01:58:04.562Z · score: 3 (2 votes) · LW · GW

I'm afraid I still don't understand what the basis is for your claim that "the premise that CO2 affects cognition is false".

I understand why you consider it not clear that CO2 does affect cognition: experiments yield results in different directions, and people survive on submarines. But that, at least so far as you've described it, seems to fall far short of justifying the flat statement that "the premise is false". What am I missing?

Comment by gjm on CO2 Stripper Postmortem Thoughts · 2019-12-02T17:24:28.195Z · score: 7 (3 votes) · LW · GW

Your statement that "the premise that CO2 affects cognition is false" seems not obviously correct. Is this the current expert consensus? How can the rest of us evaluate it?

Comment by gjm on "The Bitter Lesson", an article about compute vs human knowledge in AI · 2019-12-02T17:22:55.363Z · score: 2 (1 votes) · LW · GW

MuZero seems to deserve to be called domain-agnostic more than AlphaZero does, yes.

(For anyone else who doesn't immediately recognize the abbreviation: ALE is the "Arcade Learning Environment".)

Comment by gjm on CO2 Stripper Postmortem Thoughts · 2019-12-01T20:33:34.385Z · score: 5 (3 votes) · LW · GW

You say it works but not as well as hoped. It would be interesting to know more about that.

E.g., how effective is it, in the end, at removing CO2 from the air? (Less so than you hoped?) How big and power-hungry and noisy is it? (More so than you hoped?) How much did it end up costing to make? (More than you hoped?)

Comment by gjm on My Anki patterns · 2019-11-29T13:36:10.530Z · score: 3 (2 votes) · LW · GW

OP refers to "Alex Vermeer’s free book Anki Essentials" but so far as I can tell Anki Essentials is not free; it costs about 5 (exact price depending on whether you get it as PDF or as an Amazon ebook).

Comment by gjm on Market Rate Food Is Luxury Food · 2019-11-29T11:13:18.065Z · score: 2 (1 votes) · LW · GW

I agree. That's why I listed those two issues (1. the spoof argument might not be a good analogy for real arguments about housing; 2. the spoof argument isn't obviously wrong) separately.

Comment by gjm on Market Rate Food Is Luxury Food · 2019-11-28T10:14:40.474Z · score: 9 (2 votes) · LW · GW

Thanks! Here are a couple of relevant extracts for anyone else who didn't know the same things as I didn't know. First, what it is:

> Section 8 of the Housing Act of 1937 [...] authorizes the payment of rental housing assistance to private landlords on behalf of low-income households in the United States. Of the 5.2 million American households that received rental assistance in 2018, approximately 1.2 million of those households received a Section 8 based voucher.

Second, those waiting lists:

> In many localities, the PHA waiting lists for Section 8 vouchers may be thousands of families long, waits of three to six years to obtain vouchers is common, and many lists are closed to new applicants. Wait lists are often briefly opened (often for just five days), which may occur as little as once every seven years. Some PHAs use a "lottery" approach, where there can be as many as 100,000 applicants for 10,000 spots on the waitlist, with spots being awarded on the basis of weighted or non-weighted lotteries, with priority sometimes given to local residents, the disabled, veterans, and the elderly.

Comment by gjm on Market Rate Food Is Luxury Food · 2019-11-28T02:09:31.469Z · score: 2 (1 votes) · LW · GW

It's hard to tell whether the arguments are "actually analogous" because ...

• The spoof-argument about food, in the OP here, leaves lots of things implicit. (E.g., "With deregulation, farmers would massively shift to luxury crops, and we would have shortages of bread, milk, eggs, and other staples"; it doesn't go into details about why this would allegedly happen.) So we don't know what the parallel argument about housing actually says.
• The parallel argument about housing leaves everything implicit, in that we don't actually know what it is. Jeff hasn't (so far as I know) pointed at a specific pro-housing-regulation article and copied its arguments, he's provided a bunch of food-arguments that supposedly parallel common housing-arguments. So what's "the argument" here?

I think it's reasonable to suspect that they aren't "actually analogous" in sufficient detail that if one is wrong then the other is too because ...

• They depend on all sorts of details about the world that there's no particular reason to expect behave the same way in the food and housing cases. E.g., is a (fictional) several-year waiting list for SNAP equivalent to a several-year waiting list for, er, whatever housing thing this is meant to be parallel to? It might be, but maybe not; the timescales on which hunger and homelessness happen aren't exactly the same, after all, nor are the timescales associated with normally-functional food-buying and house-buying, and if I try to imagine mechanisms leading to several-year waiting lists for food assistance and for housing assistance, it's not clear to me that I should expect them to be similar. (Hence, the prospects for fixing them might differ.)

And I don't understand why you are so sure that if the arguments are analogous then "this shows that one of them is wrong". Normally, when that sort of thing is true it's because the conclusions of the two arguments are incompatible, but that doesn't seem to be the case here. Perhaps you mean "this shows that the one about housing is wrong" because you find it obvious that the one about food is wrong (though in this case I am not sure why you said "one of them", which seems wrong on Gricean grounds), but I don't find that convincing because

• The argument about food is liable to seem obviously wrong simply because it's based on a world that is clearly quite different from ours in implausible-seeming ways.
• If I leave aside the fact that the things it says about food are in fact false in our world, it's no more obviously wrong (to me) than the argument about housing that it's meant to be undermining by its more-obvious wrongness. In some hypothetical world where food is highly regulated and unaffordably expensive, would it be the case that deregulating it would bring prices down to the levels we see in our world? Are you sure you aren't just assuming that since Jeff has described a world that differs from ours in those two respects, the regulation must be the cause of the cost?

Comment by gjm on Mental Mountains · 2019-11-27T12:35:43.184Z · score: 4 (3 votes) · LW · GW

I expect it's common for people to say (or at least be in a position to say truly, if they chose) "I know that climate change is real, but for some reason I can't persuade myself not to vote Republican". In some cases that will be because they like the Republicans' other policies, in which case there isn't necessarily an actual "valley" here. But party loyalty is a thing, and I guarantee there are people who could truly say "I know that Party X's actual policies are a better match for my values, but I can't bring myself to vote for them rather than for Party Y".

Comment by gjm on Market Rate Food Is Luxury Food · 2019-11-23T18:09:10.563Z · score: 12 (5 votes) · LW · GW

It's not at all clear to me that housing and food are similar enough for this analogy to work.
It seems to me that I can totally imagine a world in which the argument in the initial part of your post is right, and for that matter I can also imagine a world in which the corresponding argument about housing is right; whether either of them actually is right depends on details that needn't be the same in the two cases. So the implicit argument here (if I'm understanding right) -- "some people say that to solve our housing problems it isn't enough to build more houses, so we should prioritize building affordable housing or something instead; here's an analogous thing people might say about food, which is obviously silly; likewise, saying these things about housing is silly, so the main thing we need to do is to build more houses regardless of exactly what they are" -- doesn't work for me. It's not obvious enough that the food version of the argument is wrong, nor is it obvious enough that if one is wrong then the other is too.

(I do tend to agree that building more housing is much the most important thing to do to address the difficulties many many many people have in affording somewhere to live, so my unconvincedness here isn't the result of not liking the conclusion.)

Comment by gjm on How I do research · 2019-11-20T17:42:34.116Z · score: 5 (5 votes) · LW · GW

Boldface the first few words.

Comment by gjm on How I do research · 2019-11-20T13:32:05.366Z · score: 4 (2 votes) · LW · GW

For what it's worth, I'm with Said rather than Zack on this one. (It would make more sense if these initial letters were associated with a mnemonic or something; then there would be a reason for emphasizing a bunch of first letters. But it seems to have been done just for, I dunno, fun.)

Comment by gjm on Goal-thinking vs desire-thinking · 2019-11-17T20:27:30.415Z · score: 2 (1 votes) · LW · GW

I don't know for sure whether we're really disagreeing. Perhaps that's a question with no definite answer; the question's about where best to draw the boundary of an only-vaguely-defined term. But it seems like you're saying "goal-thinking must only be concerned with goals that don't involve people's happiness" and I'm saying I think that's a mistake and that the fundamental distinction is between doing something as part of a happiness-maximizing process and recognizing the layer of indirection in that and aiming at goals we can see other reasons for, which may or may not happen to involve our or someone else's happiness. Obviously you can choose to focus only on goals that don't involve happiness in any way at all, and maybe doing so makes some of the issues clearer. But I don't think "involving happiness" / "not involving happiness" is the most fundamental criterion here; the distinction is actually, as your original terminology makes clear, between different modes of thinking.

Comment by gjm on Goal-thinking vs desire-thinking · 2019-11-16T22:57:52.438Z · score: 2 (1 votes) · LW · GW

I see things slightly differently. Happiness, suffering, etc., function as internal estimators of goal-met-ness. Like a variable in a computer program that indicates how you're doing. Hence, trying to optimize happiness directly runs the risk of finding ways to change the value of the variable without the corresponding real-world things the variable is trying to track. So far, so good.

But! That doesn't mean that happiness can't also be a thing we care about. If I can arrange for someone's goals to be 50% met and for them to feel either as if they're 40% met or as if they're 60% met, I probably choose the latter; people like feeling as if their goals are met, and I insist that it's perfectly reasonable for me to care about that as well as about their actual goals. For that matter, if someone has goals I find terrible, I may actually prefer their goals to go unmet but for them still to be happy. I apply the same to myself -- within reason, I would prefer my happiness to overestimate rather than underestimate how well my goals are being met -- but obviously treating happiness as a goal is more dangerous there because the risk of getting seriously decoupled from my goals is greater. (I think.)

I don't think it's necessary to see nonexistence as neutral in order to prefer (in some cases, perhaps only very extreme ones) nonexistence to existence-with-great-suffering. Suffering is unpleasant. People hate it and strive to avoid it. Yes, the underlying reason for that is because this helps them achieve other goals, but I am not obliged to care only about the underlying reason. (Just as I'm not obliged to regard sex as existing only for the sake of procreation.)

Comment by gjm on [deleted post] 2019-11-15T17:13:27.392Z

Nitpick: near the end you have written where you mean .

Less-superficial observation about your argument at that point: What you quote is not, strictly, a contradiction unless you take "you cannot decide for the lives of others" more literally than anyone could reasonably think the person who said that meant it. Suppose they had said instead something like this: "In general you cannot decide for the lives of others, but when people commit major crimes they forfeit the right to decide the course of their lives thereafter, and sometimes there's no way to avoid someone's life being governed by someone else, and the best you can do is to try to minimize the harm. If someone commits murder, then pretty much everyone agrees that that gives us the right to use considerable force and coercion to stop them doing it again; I think we should do it by killing them rather than by locking them up for decades. If someone wants to have an abortion, then either they decide for the life of their unborn child or we decide for their life by stopping them; I think the latter is the lesser evil."
I think it's clear that (1) there is no contradiction in that, and that (2) at least some people saying what you quoted a politician as saying actually mean something like that, and could be induced to explain themselves more thoroughly by suitable questioning. (I am not endorsing that position to any greater degree than saying it isn't outright contradictory; in particular, it is not my own position.)

Comment by gjm on Goal-thinking vs desire-thinking · 2019-11-11T21:20:36.809Z · score: 4 (2 votes) · LW · GW

I'm not convinced. To me, at least, my goals that are about me don't feel particularly different in kind from my goals that are about other people, nor do my goals that are about experiences feel particularly different from my goals that are about things other than experiences. (It's certainly possible to draw your dividing line between, say, "what you want for yourself" and "what other things you want", but I think that's an entirely different line from the one drawn in the OP.)

Comment by gjm on Goal-thinking vs desire-thinking · 2019-11-11T00:35:26.371Z · score: 2 (1 votes) · LW · GW

Is that really true? If you can have "have other people not suffer horribly" as a goal, you can have "not suffer horribly yourself" as a goal too. And if, on balance, your life seems likely to involve a lot of horrible suffering, then suicide might absolutely make sense even though it would reduce your ability to achieve your other goals.

Comment by gjm on Goal-thinking vs desire-thinking · 2019-11-10T14:58:30.573Z · score: 10 (5 votes) · LW · GW

I bite the bullet: I aim to use only goal-based thinking. (I dare say I don't completely succeed.) I may have goals like "enjoy eating a tasty meal" or "stop feeling hungry" but those are still goals rather than what you're calling desires.

I don't think the two examples in your final paragraph are isomorphic, and I think they can be seen to be non-isomorphic in purely goal-based terms.

• All else being equal, I prefer people to live rather than die, and I prefer that my preferences be satisfied. Taking a murder-pill would mean that more people die (at my hand, even) or that my preferences go unsatisfied, or both. So (all else being equal) I don't want to take the murder-pill.
• All else being equal, I prefer to eat things that I like and not things that I don't like. I (hypothetically) don't like spinach right now, so I don't eat spinach. But if I suddenly started liking spinach, I would become able to eat spinach and thereby eat things I like rather than things I don't. So I would expect to have more of my preferences satisfied if I started liking spinach. So (all else being equal) I do want to start liking spinach.

All of this is a matter of goals rather than (in your sense) desires. I want people to live, I want to have my preferences satisfied, I want to eat things I like, I want not to eat things I dislike.

"But", I hear you cry, "you could equally well say in the first place 'I prefer to live according to my moral principles, and at present those principles include not murdering people, but if I took the pill those preferences would change.'. And you could equally well say in the second place 'I prefer not to eat spinach, and if I started liking spinach then I'd start doing that thing I prefer not to.'. And then you'd get the opposite conclusions."

But no, I could not equally well say those things: saying those things would give a wrong account of my preferences. Some of my preferences (e.g., more people living and fewer dying) are about the external world. Some (e.g., having enjoyable eating-experiences) are about my internal state. Some are a mixture of both. You can't just swap one for the other.

(There's a further complication, which is that -- so it seems to me, and I know I'm not alone -- moral values are not the same thing as preferences, even though they have a lot in common. I not only prefer people to live rather than die, I find it morally better that people live rather than die, and those are different mental phenomena.)

Comment by gjm on Normative reductionism · 2019-11-06T19:42:37.896Z · score: 2 (1 votes) · LW · GW

I think you are assuming that "utility" means something like "happiness". That is not the only possible way to use the word. If there is a term in my utility function (to whatever extent I have a utility function) for accurate knowledge, then there can be situations indistinguishable to me to which I assign different utility, because I may be unable to tell whether some bit of my "knowledge" is actually accurate or not.

I think maybe you think there is something impossible or incoherent about this, perhaps on the grounds that it's absurd to say you care about the difference between X and Y when you cannot actually discern the difference between X and Y. I disagree. If you tell me that you are either going to shoot me in the head or shoot me in the head and then murder a million other people, I prefer the former even though, being dead, I will be unable to tell whether you've murdered the million others or not. If you tell me that you will either slap me in the face and then shoot me dead, or else shoot me dead and then murder a million others, and if I believe you, then I will gladly take that slap in the face. If you tell me that you will either slap me in the face, convince me that you aren't going to murder anyone else, kill me, and then murder a million others, or else just kill me and the million others, I will not take the slap in the face even if I am confident that you could convince me. (Er, unless I think that the time you take convincing me makes it more likely that somehow you never actually get to murder me.)

My utility function (to whatever extent I have a utility function) maps world-states to utilities, not my-experience-states to utilities.
There is of course another function that maps my-experience states to utilities, or maybe to something like probability distributions over utilities (it goes: experience-state -> my beliefs about the state of the world -> my estimate of my utility function), but it isn't the same function and it isn't what I care about even if in some sense it's necessarily what I act on: if you propose to change the world-state and the experience-state in ways that don't match, then my preferences track what you propose to do to the world-state, not the experience-state.

(Of course my experiences are among the things I care about, and I care about some of them a lot. If you threaten to make me wrongly think you have murdered my family then that's a very negative outcome for me and I will try hard to prevent it. But if I have to choose between that and having my family actually murdered, I pick the former.)

Comment by gjm on Halloween · 2019-11-01T14:44:33.629Z · score: 2 (1 votes) · LW · GW

I don't know whether Halloween really does help people come to terms with death, or anything like that, by coupling it with absurdity, but I don't think "we don't do standup comedy at funerals, so we don't really believe in meeting death with absurdity" is an argument that holds much water. Standup comedy at funerals runs the risk of offending (perhaps very severely) individuals associated with the deceased party. (And in some parts of our culture there is something not altogether unlike that: consider the old joke that the difference between an Australian wedding and an Australian funeral is that there's one drunk fewer at the funeral. Funerals and wakes can be pretty rowdy.)

Comment by gjm on Proportional Giving · 2019-10-30T12:45:43.481Z · score: 2 (1 votes) · LW · GW

Singer's proposal in that article isn't _quite_ that, though it may be that he just didn't think it through carefully enough (or deliberately simplified in an article intended for general consumption). He proposes that the fraction you give of your _total_ income should be, if you're in the top [10%, 1%, 0.1%, 0.01%], [10%, 15%, 25%, 33%], producing discontinuities at the boundaries of those groups. I suspect that if pressed on that point he'd be happy to go with something smoother.

Comment by gjm on [deleted post] 2019-10-16T02:02:16.311Z

In my opinion this makes your post valueless. (Not to say that you should explain what tools. But I think either saying nothing or being informative must be better than posting this as it is.)

Comment by gjm on [deleted post] 2019-10-15T13:35:12.526Z

There is no objective fact of the matter regarding moral standards. Rather, we want a moral system that can be widely adopted and that when widely adopted promotes things we find good. A moral system that said "you have to spend every waking moment curing malaria and feeding the hungry" would probably either just make people feel burned out and miserable or else be rejected outright. Many imaginable and prima facie plausible moral systems turn out to say that. A moral system that said "just do whatever the hell you want" would probably lead to few people bothering to cure malaria and feed the hungry. It seems plausible to me that a system that says "you should be making things better for others but it's fine to devote most of your time and energy and resources to your own welfare and that of your family" does, given human nature, actually roughly maximize net good done. I expect the optimum is more demanding than the average person's actual moral system, but probably not (much?) more demanding than the average effective altruist's.

Comment by gjm on Open & Welcome Thread - October 2019 · 2019-10-13T22:04:26.265Z · score: 4 (2 votes) · LW · GW

Immediately after the bit about monkeys there's this:

> The usual goal in the typing monkeys thought experiment is the production of the complete works of Shakespeare. Having a spell checker and a grammar checker in the loop would drastically increase the odds. The analog of a type checker would go even further by making sure that, once Romeo is declared a human being, he doesn’t sprout leaves or trap photons in his powerful gravitational field.

which feels like a bit of an own goal to me, because I suspect the analogue of a type checker would actually make sure that once Romeo is declared a Montague it's a type error for him to have any friendly interactions with a Capulet, thus preventing the entire plot of the play.

Comment by gjm on Categories: models of models · 2019-10-10T03:28:22.783Z · score: 4 (2 votes) · LW · GW

Let's take a somewhat-concrete example. Your post mentions birds. OK, so let's consider e.g. a model of birds flying in a flock, how they position themselves relative to one another, and so on. You suggest that we consider the birds as objects: so far, so good. And then you say "they do stuff like fly, tweet, lay eggs, eat, etc. I.e., verbs (morphisms)." For the purpose of a flocking model, the most relevant one of those is flying. How are you going to consider flying as a morphism in a category of birds? If A and B are birds, what is this morphism from A to B that represents flying? I'm not seeing how that could work.

In the context of a flocking model, there are some things involving two birds. E.g., one bird might be following another, tending to fly toward it. Or it might be staying away from another, not getting too close. Obviously you can compose these relations if you want. (You can compose any relations whose types are compatible.) But it's not obvious to me that e.g. "following a bird that stays away from another bird" is actually a useful notion in modelling flocks of birds.
It might turn out to be, but I would expect a number of other notions to be more useful: you might be interested in some sort of centre of mass of a whole flock, or the density of birds in the flock; you might want to consider something like a velocity field of which the individual birds' velocities are samples; etc. None of these things feel very categorical to me (though of course e.g. velocities live in a vector space and there is a category of vector spaces). Maybe flocking was a bad choice of example. Let's try another: let the birds be hens on a farm, kept for breeding and/or egg-laying. We might want to understand how much space to give them, what to feed them, when to collect their eggs, whether and when to kill them, and so on. Maybe we're interested in optimizing taste or profit or chicken-happiness or some combination of those. So, according to your original comment, the birds are again objects in a category, and now when they "lay eggs, etc., etc." these are morphisms. What morphisms? When a bird lays an egg, what are the two objects the morphism goes between? When are we going to compose these morphisms and what good will it do us? How does it actually help anything to consider birds as objects of a category? Here's the best I can do. We take the birds, and their eggs, and whatever else, as objects in a category, and we somehow cook up some morphisms relating them. The category will be bizarre and jury-rigged because none of the things we care about are really very categorical, but its structure will somehow correspond to some of the things about the birds that we care about. And then we make whatever sort of mathematical or computational model of the birds we would have made without category theory. So now instead of birds and eggs we have tuples (position, velocity, number of eggs sat on) or objects of C++ classes or something. 
Now since we've designed our mathematical model to match up, kinda, to what the birds actually do, maybe we can find a morphism between these two jury-rigged categories corresponding to "making a mathematical model of". And then maybe there's some category-theoretic thing we can do with this model and other mathematical models of birds, or something. But I gravely doubt that any of this will actually deliver any insight that we didn't ourselves put into it. I'd be intrigued to be proved wrong.

Comment by gjm on Categories: models of models · 2019-10-10T03:06:36.842Z · score: 10 (6 votes) · LW · GW

I'm really not convinced by this framing in terms of "objects doing things to other objects". Let's take a typical example of a morphism: let's say f : ℤ⁺ → ℝ (note for non-mathematicians: that is, f is a function that takes a positive integer and gives you a real number) given by f(n) = √n. How is it helpful to think about this as doing something to ℝ? How is it even slightly like "Alice pushes Bob"? You say "Every model is ultimately found in how one object changes another object" -- are you saying here that the integers change the real numbers? Or vice versa? (After that's done, what have the integers or the real numbers become?)

The only thing here that looks to me like something changing something else is that f (the morphism, not either of the objects) kinda-sorta "changes" an individual positive integer n to which it's applied (an element of one of the objects, again not either of the objects) by replacing it with its square root. But even that much isn't true for many morphisms, because they aren't all functions and the objects of a category don't always have elements to "change". For instance, there's a category whose objects are the positive integers and which has a single morphism from m to n if and only if m ≤ n; when we observe that 5 ≤ 9, is 5 changing 9? or 9 changing 5? No, nothing is changing anything else here.
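The ≤ example can be spelled out concretely. A minimal sketch (this encoding is hypothetical, just one way to represent the poset category of positive integers, where a morphism is simply a pair (m, n) with m ≤ n):

```python
def hom(m, n):
    """The hom-set from m to n: a single morphism if m <= n, else empty."""
    return [(m, n)] if m <= n else []

def compose(f, g):
    """Compose morphisms f: a -> b and g: b -> c to get a morphism a -> c."""
    (a, b), (b2, c) = f, g
    assert b == b2, "morphisms not composable"
    return (a, c)

def identity(m):
    """The identity morphism on the object m."""
    return (m, m)

# There is a morphism 5 -> 9 because 5 <= 9, and one 9 -> 12 because 9 <= 12.
f = hom(5, 9)[0]
g = hom(9, 12)[0]
h = compose(f, g)  # the unique morphism 5 -> 12
```

Note that composing `f` and `g` produces a new pair but leaves 5, 9, and 12 untouched, which is the point: nothing here "changes" anything else.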
So far as I can see, the only actual analogy here is with the bare syntactic structure: you can take "A pushes B" and "A has a morphism f to B" and match the pieces up. But the match isn't very good -- the second of those is a really unnatural way of writing it, and really you'd say "f is a morphism from A to B", and the things you can do with morphisms and the things you can do with sentences don't have much to do with one another. (You can say "A pushes B with a stick", and "A will push B", and so forth, and there are no obvious category-theoretic analogues of these; there's nothing grammatical that really corresponds to composition of morphisms; if A pushes B and B eats C, there really isn't any way other than that to describe the relationship between A and C, and indeed most of us wouldn't consider there to be any relationship worth mentioning between A and C in this situation.)

Comment by gjm on What are your strategies for avoiding micro-mistakes? · 2019-10-06T20:23:12.962Z · score: 4 (2 votes) · LW · GW

This also helps to train your intuition, in the cases where careful calculation reveals that in fact the intuitive answer was wrong.

Comment by gjm on What is category theory? · 2019-10-06T15:00:22.902Z · score: 17 (6 votes) · LW · GW

It seems a bit odd to offer lambda calculus as an example of how category theory is useful in computing, when lambda calculus predates category theory by about a decade (1932 to 1942).

Comment by gjm on Open & Welcome Thread - October 2019 · 2019-10-06T09:57:11.984Z · score: 2 (1 votes) · LW · GW

It's more usual for topology to motivate category theory than the other way around. (That's where category theory originally came from, historically.)
Comment by gjm on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T19:33:42.476Z · score: 11 (4 votes) · LW · GW

It seems extremely unfortunate that the terminology apparently shifted from "counterfactually valid" (which means the right thing) to "counterfactual" (which means almost the opposite of the right thing).

Comment by gjm on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T19:26:17.293Z · score: 3 (2 votes) · LW · GW

I would be interested to know how you see spite as "not necessarily negative".

Comment by gjm on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T22:34:21.985Z · score: 5 (4 votes) · LW · GW

I don't see the big shiny red button on the front page. If I visit LW in private mode, it's there. I have the map turned off. I haven't tried logging out or turning the map back on. I'm guessing that when Ben says it's "over the frontpage map" that means it's implemented in a way that makes it disappear if the map isn't there. That seems a bit odd, though it probably isn't worth the effort of fixing.

(I have a launch code but hereby declare my intention not to use it. I am intrigued by the discussions of trading launch codes, or promises to use or not use them, for valuable things like effective charitable donations, but am not interested in taking either side of any such trade.)

Comment by gjm on Open & Welcome Thread - September 2019 · 2019-09-08T15:17:18.058Z · score: 3 (2 votes) · LW · GW

Aha, thanks. Sorry for being grumpy about it! (I hadn't known there was a profile setting to turn it off.)

Comment by gjm on Open & Welcome Thread - September 2019 · 2019-09-07T19:54:24.904Z · score: 4 (2 votes) · LW · GW

LW admins -- Is there a good reason why half my browser window when viewing the LW home page needs to be taken up with an enormous map? It's pretty horrible (and somehow pushes the same mental buttons as those whole-screen "why not sign up for our mailing list?"
popups some sites give you, though obviously it's not actually very similar to those). I guess the idea is to encourage more people to go to meetups or something, but I promise it does not make me the least bit more inclined to do so.

Comment by gjm on Predicted AI alignment event/meeting calendar · 2019-08-15T18:50:13.704Z · score: 5 (3 votes) · LW · GW

Dunno. I don't think the way it is does any actual harm. Maybe something with "meetings" in it, as per Teerth Aloke's suggestion.

Comment by gjm on Predicted AI alignment event/meeting calendar · 2019-08-15T00:26:16.274Z · score: 9 (5 votes) · LW · GW

Somehow the word "predicted" in the title (as opposed to, say, "future" or "planned") led me to expect entries for things like "OpenAI releases explicit model of human utility function" and "Entire mass of planet earth converted to paperclips"...

Comment by gjm on Rethinking Batch Normalization · 2019-08-03T13:33:53.992Z · score: 4 (3 votes) · LW · GW

The Lipschitz constant of a function gives an indication of how horizontal it is rather than how locally linear it is. Naively I'd expect that the second of those things matters more than the first. Has anyone looked at what batch normalization does to that?

More specifically: Define the 2-Lipschitz constant of a function f at x to be something like the infimum over linear maps L of the lim sup, as y → x, of |f(y) − f(x) − L(y − x)| / |y − x|², and its overall 2-Lipschitz constant to be the sup of these. This measures how well f is locally approximable by linear functions. (I expect someone's already defined a better version of this, probably with a different name, but I think this'll do.) Does batch normalization tend to reduce the 2-Lipschitz constant of the loss function?

[EDITED to add:] I think having a 2-Lipschitz constant in this sense may be equivalent to having a derivative which is a Lipschitz function (and the constant may be its Lipschitz constant, or something like that).
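A quantity of this kind can at least be probed numerically. A sketch (the sampling scheme and parameters are made up, and the derivative is used as a stand-in for the optimal linear map): for f(x) = x², the numerator f(y) − f(x) − 2x(y − x) equals (y − x)² exactly, so the ratio is identically 1; for the ReLU-like f(x) = |x| at x = 0 the estimate blows up as the sampling radius shrinks, matching the "fails to be 2-Lipschitz at kinks" point.

```python
def two_lipschitz_at(f, dfdx, x, radius=1e-3, samples=1000):
    """Estimate the sup, over y near x, of
    |f(y) - f(x) - f'(x)(y - x)| / (y - x)^2,
    using the derivative as the linear approximation L."""
    best = 0.0
    for i in range(1, samples + 1):
        h = radius * i / samples
        for y in (x - h, x + h):
            ratio = abs(f(y) - f(x) - dfdx(x) * (y - x)) / (y - x) ** 2
            best = max(best, ratio)
    return best

# For f(x) = x^2: f(y) - f(x) - 2x(y - x) = (y - x)^2, so the constant is 1.
est_square = two_lipschitz_at(lambda x: x * x, lambda x: 2 * x, x=3.0)

# For f(x) = |x| at its kink, no linear map approximates to second order,
# so the estimate diverges as we sample closer to 0.
est_relu = two_lipschitz_at(abs, lambda x: 1.0, x=0.0)
```

(For |x| the choice of L doesn't rescue things: any slope leaves a first-order error on one side, so the ratio grows like 1/|y|.)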
So maybe a simpler question is: For networks with activation functions making the loss function differentiable, does batchnorm tend to reduce the Lipschitz constant of its derivative? But given how well rectified linear units work, and that they have a non-differentiable activation function (which will surely make the loss functions fail to be 2-Lipschitz in the sense above) I'm now thinking that if anything like this works it will need to be more sophisticated...

Comment by gjm on Why Subagents? · 2019-08-03T13:17:57.786Z · score: 2 (1 votes) · LW · GW

Consider a pizza-eating agent with the following "grass is always greener on the other side of the fence" preference: it has no "initial" preference between toppings but as soon as it has one it realises it doesn't like it and then prefers all other not-yet-tried toppings to the one it's got (and to others it's tried). There aren't any preference cycles here -- if you give it mushroom it then prefers pepperoni, but having switched to pepperoni it then doesn't want to switch back to mushroom. If our agent has no opinion about comparisons between all toppings it's tried, and between all toppings it hasn't tried, then there are no outright inconsistencies either.

Can you model this situation in terms of committees of subagents? Can you do it without requiring an unreasonably large number of subagents?

Comment by gjm on Shortform Beta Launch · 2019-07-29T00:21:01.038Z · score: 2 (1 votes) · LW · GW

The MVP described here doesn't seem functionally any different from an open thread. The future features clearly go beyond that, and the current MVP seems a reasonable stepping stone towards those. But ... is it worth considering just adding those features to comments generally, or comments in threads with some special flag set (which would then need to be set on the open threads), rather than introducing a whole new thing? (I'll hazard a guess that that's actually roughly how the current implementation works.)
I'm thinking, e.g., that "convert a comment into a full post" might be something people sometimes want to do to comments anywhere, not just ones they called shortform posts. And that it's not entirely impossible that someone might want to be able to subscribe to a feed of all of some other user's comments, though that seems a bit extreme.

Comment by gjm on How to take smart notes (Ahrens, 2017) · 2019-07-25T18:16:44.139Z · score: 11 (2 votes) · LW · GW

Apparently "slip box" is roughly equivalent to "card index" and Luhmann's system is as follows:

• Make notes on small cards / pieces of paper.
• Don't attempt to categorize them with things like alphabetical order of subject or Dewey decimal notation.
• Give them all unique identifiers, and allow these to have a "nested" structure when one note leads to others which lead to others.
• Cross-link them by adding to each note references (via those unique IDs) to other notes that you know are related to it.

Obviously something very similar could be done on a computer, with many practical advantages over the version made out of pieces of paper. I have a suspicion that Luhmann's alleged great productivity ("alleged" only because I haven't verified for myself) is best ascribed either (1) to things other than his use of a card-index system or (2) to idiosyncratic things about _how_ he used it that are not captured by what I wrote above or by the contents of the post here...

Comment by gjm on How to take smart notes (Ahrens, 2017) · 2019-07-25T18:08:45.776Z · score: 11 (2 votes) · LW · GW

Unless I am missing something, this post never actually says what a slip-box is or how to make and use one. What then am I supposed to do with the advice that "to start the habit of using slip-boxes" I should "start by making literature notes"? I can make some literature notes ... and what then? What do I do with them for them to be slip-box notes? It seems like the single most important piece of information here is being wilfully withheld...
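The card-index scheme described above translates directly into code; a minimal sketch (class and field names are hypothetical, and the nested IDs follow Luhmann-style numbering like "1", "1a"):

```python
class SlipBox:
    """A minimal Luhmann-style card index: notes carry unique (possibly
    nested) IDs and explicit cross-links, with no subject categorization."""

    def __init__(self):
        self.notes = {}

    def add(self, note_id, text, links=()):
        """File a note under a unique ID, cross-linked to earlier notes."""
        self.notes[note_id] = {"text": text, "links": list(links)}

    def follow(self, note_id):
        """Return the notes that this note cross-references."""
        return [self.notes[i] for i in self.notes[note_id]["links"]]

box = SlipBox()
box.add("1", "Slip boxes are card indexes.")
box.add("1a", "Nested IDs record where a thought branched off.", links=["1"])
box.add("2", "Cross-links replace subject categories.", links=["1a"])
```

The interesting question raised above remains, of course, whether any of this tooling, paper or digital, is where the productivity actually came from.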
Comment by gjm on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-07T02:35:26.345Z · score: 6 (3 votes) · LW · GW

I'm sure you took into account things like house price appreciation. But what you said about investing the difference between what renters pay and what owners pay was misleading and wrong. You can do as careful and accurate and insightful a simulation as you please, but if what you say is "people pay more in mortgage + fees + taxes than in rent, which is bad because they could have invested the difference in the market and then they'd have got some actual returns" or "rent is bad because it's just throwing money away" then you are making a broken argument and I think you shouldn't do that. Not even if the conclusion of the broken argument happens to be the same as the conclusion of the careful accurate insightful simulation.

The question isn't whether back-testing is hard, it's how well you did it and whether whatever assumptions you made seem reasonable to any given reader. Again, my complaint isn't that your final results are bad, it's that we have no way of telling whether your final results are good or bad because you didn't show us any of the information we'd need to decide.

[EDITED to add:] This is all coming out more aggressive-sounding than I would like, and I hope I'm not giving the impression that I think the OP is terrible or that you're a bad person or any such thing. It just seems as if your responses to my comments aren't engaging with the actual points those comments are trying to make. That may be wholly or partly because of lack of clarity in what I've written, in which case please accept my apologies.
Comment by gjm on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-07T01:40:46.917Z · score: 8 (4 votes) · LW · GW

Repayment versus interest etc.: Looking at total money in and total money out is fine so long as you do it carefully; in this case "carefully" means you can't just say "If that money were invested in the market it would be earning you a return" as if those mortgage payments aren't earning any return. I don't know how a typical homeowner's payments divide between repayment, interest, and other things, but it seems quite possible to me that more than the difference you're talking about is repayment, in which case it could even be that the higher returns you (might hope to) get from investing the money in index funds rather than housing are more than counterbalanced by the fact that more of the money is being invested. Of course the right way to answer this is to do the actual calculations, and you very reasonably suggest that your readers go and find a rent-versus-buy calculator and try to do them. Nothing wrong with that. But I think the way you describe the situation is no more accurate than the "if you rent you're throwing the money away" line one hears from people arguing on the opposite side.

Stock market investors: If those numbers for hypothetical investors A, B, C come from a simulation you did then I think the article should say so, and should say something about what assumptions you made. As it stands, you're just asking us to take them on faith. I don't find them terribly implausible, and your simulations may be excellent, but we can't tell that.

Anecdotal nonsense: Yup, the forced savings thing is a good point (and presumably not very applicable to anyone who's bothering to read a lengthy article about the financial merits of renting versus buying). My suspicion is that the "poorer people rent, richer people buy" dynamic is an even bigger reason why my observations aren't much evidence about what any given person here should do.
But I don't think the counterfactual is relevant here, because the observation I was drawing attention to wasn't "buyers are doing OK" but "buyers are doing better than renters".

Comment by gjm on Opting into Experimental LW Features · 2019-07-07T01:20:12.890Z · score: 2 (1 votes) · LW · GW

I'm not feeling any glaring lacks. Of course it's possible that there are possible changes that once made would be obvious improvements :-). I do use the "recent discussion" section. I actually don't mind the collapsing there -- it's not trying to present the whole of any discussion, and clearly space is at a big premium there, so collapsing might not be a bad tradeoff.

Comment by gjm on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-06T21:33:06.363Z · score: 7 (4 votes) · LW · GW

There is a lot of good sense in this article, but I have some problems with it.

Perhaps the biggest: It appears to be written as if a house is only an investment; as if the buy-or-rent decision is made entirely on the basis of what will maximize your future overall wealth. I'm all in favour of maximizing wealth (all else being equal, at any rate) but a house is not only an investment; it is, as you point out, also a place where you are likely to be spending ~87k hours of your life, and you may reasonably choose to do something that makes you poorer overall if those 87k hours are substantially more pleasant for it.

Those non-financial factors aren't all in favour of buying over renting. The article mentions one that goes the other way, though of course only in the context of what it means for your financial welfare: if you rent and don't own, then it's often easier to move. Here are some others:

• Buy, because if you rent there are likely to be a ton of onerous restrictions on what you're allowed to do. No pets! No children! No replacing the crappy kitchen appliances! No changing the horrible carpets!
• Buy, if the properties available to buy (where you are) are more to your taste than the ones available for rent. Or: Rent, if the properties available to rent are more to your taste than the ones available to buy.
• Buy, because if you rent then your landlord can kick you out on a whim. (Exactly how readily they can do that varies from place to place, but they always have some ability to do it and it's a risk that basically doesn't exist if you buy.)
• Buy, if you get a sense of satisfaction or security from owning the place you live in. Or: Rent, if you find that being responsible for the maintenance of the place feels oppressive.

Some other quibbles.

"People spend on average 50% more money on mortgages, taxes, and fees than they spend on rent. If that money were invested in the market it would be earning you a return."

Does that include mortgage repayment as well as mortgage interest? Mortgage interest, taxes, and fees are straightforwardly "lost" in the same sort of way as rent is; mortgage repayment, though, is just buying a house slowly. That money is invested, in real estate rather than in equities, and it is earning you a return. (Possibly a smaller, more volatile, and/or less diversified return, for sure; I'm not disagreeing with that bit.)

"let's look at three examples of people trying to time the market. All three people invest at a rate of $200/mo. [...]"

OK, let's look. Where are the actual numbers? All I see is vague descriptions (e.g., A invests immediately before each crash, but how much do they invest on each occasion? An amount derived somehow from that $200/mo, but how? E.g., is the idea that the money sits there until a crash is about to happen and then however much is available gets invested, or what?) and final numbers with no explanation or justification or anything. For all I can tell, romeostevensit might just have made those numbers up. I bet he didn't, but without seeing the details no one can tell anything.

Also: $200/mo 40 years ago is a very different figure from $200/mo now. Does anyone do anything at all like investing at a constant nominal rate over 40 years? It seems unlikely. If instead of "$200/month" you make it "whatever corresponds to $200/month now, according to some standard measure of inflation" then the numbers will look quite different. (For the avoidance of doubt, I expect that A, B, C would still come out in the same order.)
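For comparison, here is the kind of toy simulation that would make such a claim checkable (the price path and every parameter are entirely hypothetical, not the article's): prices grow 1%/month and crash 30% every two years; A invests accumulated cash just before each crash (worst timing), B just after (best timing), C invests $200 every month.

```python
def simulate(months=120, monthly=200.0, growth=1.01, crash=0.7, crash_every=24):
    # Synthetic price series: steady growth punctuated by periodic crashes.
    prices, p = [], 100.0
    for m in range(months):
        p *= growth
        pre = p                          # price just before any crash this month
        if (m + 1) % crash_every == 0:
            p *= crash                   # the crash hits
        prices.append((pre, p))

    shares = {"A": 0.0, "B": 0.0, "C": 0.0}
    cash = {"A": 0.0, "B": 0.0}
    for m, (pre, post) in enumerate(prices):
        cash["A"] += monthly
        cash["B"] += monthly
        shares["C"] += monthly / post    # C: invests every month regardless
        if (m + 1) % crash_every == 0:
            shares["A"] += cash["A"] / pre   # A: buys everything at the peak
            shares["B"] += cash["B"] / post  # B: buys everything at the bottom
            cash["A"] = cash["B"] = 0.0

    final_price = prices[-1][1]
    return {k: v * final_price for k, v in shares.items()}

results = simulate()
```

The absolute figures mean nothing; the point is that publishing something like this alongside the stated assumptions is what would let a reader verify the A/B/C ordering instead of taking it on faith.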

[EDITED to add:] Anecdotally, the people I know who have bought houses generally seem to have done OK, largely independent of when they've bought; the people I know who have rented seem to have had no end of problems. But (1) this is a small and doubtless unrepresentative sample, and (2) I think richer people are more likely to buy and poorer people more likely to rent (especially as I live somewhere where buying is generally assumed to be something you want to do), so I don't think this is very strong evidence of anything.

Comment by gjm on Opting into Experimental LW Features · 2019-07-06T20:21:52.733Z · score: 2 (1 votes) · LW · GW

Sure. But the thing I was saying might be useful (which, I understand, has nothing to speak of in common with what's on offer right now) is auto-collapsing all comments I can be presumed to have read or decided not to bother reading on the grounds that they were already there the last time I visited the discussion. That would be useful even on posts with <=50 comments. (At least, it would be useful there if useful at all; it might be that I'm wrong in thinking it would be useful.)

Comment by gjm on Opting into Experimental LW Features · 2019-07-06T20:19:50.525Z · score: 3 (2 votes) · LW · GW

If someone's writing a whole post then for sure they should try to make its structure clear, perhaps with headings and tables of contents and introductory paragraphs and bullet points and whatnot.

I don't think that's usually appropriate for comments, which are usually rather short.

So, e.g., I don't think your comment to which I'm replying right now would have been improved by adding such signposts. But, even so, I don't see how I could tell whether I want to read the whole thing from knowing that it begins "I want to argue that this is a huge problem".

There might be benefit in providing some sort of guidance for readers of a whole comment thread. But it's hard to see how, especially as comment threads are dynamic: new material could appear anywhere at any time, and if order of presentation is partly determined by scores then that too can be rearranged pretty much arbitrarily. (And who'd do it?)

You might hope that a collapsed pile of comments is itself a sort of roadmap to the comments themselves, but I think that just doesn't work, just as you wouldn't get a useful summary of A Tale of Two Cities or A Brief History of Time by just taking the first half-sentence of each paragraph.